Carbon-14 dating: a simple explanation

Radiocarbon dating is a technique used by scientists to learn the ages of biological specimens – for example, wooden archaeological artifacts or ancient human remains – from the distant past, based on the rate of decay of carbon-14. It can be used on objects as old as about 62,000 years.

Because atmospheric carbon-14 arises at about the same rate that the atom decays, Earth's levels of carbon-14 have remained fairly constant. This radioactive form of carbon reacts with oxygen to form carbon dioxide, a gas in our atmosphere. Organisms at the base of the food chain that photosynthesize – for example, plants and algae – use the carbon in Earth's atmosphere. They have the same ratio of carbon-14 to carbon-12 as the atmosphere, and this same ratio is then carried up the food chain all the way to apex predators, like sharks. Animals eat plants, and some animals eat other animals, so a very small part of every living body is made of radioactive carbon-14.

As noted above, the carbon-14 to carbon-12 ratio in the atmosphere remains nearly constant. It is not absolutely constant, because several variables affect the levels of cosmic rays reaching the atmosphere: the fluctuating strength of the Earth's magnetic field, solar cycles that influence the amount of cosmic rays entering the solar system, climatic changes, and human activities.

Once an organism is dead, however, no new carbon is actively absorbed by its tissues, and its carbon-14 gradually decays. In the late 1940s, the American chemist Willard Libby developed a method for determining when the death of an organism had occurred. Libby reasoned that by measuring carbon-14 levels in the remains of an organism that died long ago, one could estimate the time of its death. Because the ratio of carbon-12 to carbon-14 in all living organisms is the same, and because the decay rate of carbon-14 is constant, the length of time that has passed since an organism died can be calculated by comparing the ratio of carbon-12 to carbon-14 in its remains to the known ratio in living organisms. This procedure of radiocarbon dating has been widely adopted and is considered accurate enough for practical use on remains up to about 50,000 years old.
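The arithmetic behind these estimates follows from the half-life of carbon-14, about 5,730 years. The short Python sketch below is not part of the original article; the function name and the sample fractions are invented purely for illustration.

# Hypothetical sketch (not from the source article): estimating an age from the
# measured carbon-14 fraction, assuming the commonly quoted half-life of 5,730 years.
import math

HALF_LIFE_YEARS = 5730.0  # approximate half-life of carbon-14

def radiocarbon_age(remaining_fraction):
    """Years elapsed since death, given the fraction of the original carbon-14 still present."""
    return HALF_LIFE_YEARS * math.log(1.0 / remaining_fraction) / math.log(2.0)

# A sample retaining 25% of its original carbon-14 has passed two half-lives:
print(round(radiocarbon_age(0.25)))   # about 11,460 years
# Near the practical limit described above, only a tiny fraction remains:
print(round(radiocarbon_age(0.002)))  # about 51,000 years

Two half-lives leave one quarter of the original carbon-14, giving roughly 11,460 years; a remaining fraction of a few tenths of a percent corresponds to ages near the 50,000- to 62,000-year practical limits mentioned above.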
Valley oak, also called California white oak, is native to California and is most commonly found on deep valley soils with year-round soil moisture. Early settlers used the presence of valley oaks as an indicator of deep, rich soil suitable for agriculture. At maturity, valley oak is the largest oak species in North America.

Culture for Valley Oak
Performs best on deep soils with medium to high organic matter and some source of summer water. Lower branches tend to become quite large and extended, while upper branches typically form an arching growth habit. In marginal site locations, supplemental fertilization and addition of organic matter are recommended.

Concerns about Valley Oak
Large branch failure due to over-extension and internal decay is common. Susceptible to oak pit scale. Commonly infested with a harmless but eye-catching gall caused by cynipid wasps. Not susceptible to sudden oak death, but may be infected by other Phytophthora species in poorly drained, wet soil.

Management Practices for Valley Oak
Frequent pruning to reduce weight or subordinate extended branches will reduce failure potential. Excellent candidate for root invigoration due to its need for rich organic soils. Pit scale treatments are effective but must be accurately timed to target the juvenile crawler stage.
Wetlands are the biological powerhouses of planet Earth, with primary production often higher even than rain forests. These powerhouses produce enormous numbers of wild animals including fish, waterfowl, and aquatic animals. They also produce oxygen and store carbon. These services are provided free, and powered by solar energy. So where do we start our reading to understand them? Here is a guide to your reading, a guide structured by causal factors and their relative importance. Causal factors provide a powerful tool for understanding how wetlands form, why there are different kinds of wetlands, and how they can be wisely managed for production and conservation. a guide to the scientific literature Table of contents: Author’s version of contribution that will be soon available online in Oxford Bibliographies in Environmental Science Suggested citation: Keddy, Paul A. 2016 (forthcoming). Wetlands. Oxford Bibliographies in Environmental Science. Ed. Ellen Wohl. New York: Oxford University Press. Viewed online at www.drpaulkeddy.com, date. Note that crosslinks (indicated by *) do not work in this version, but they are provided on the OBO website. Wetlands have always influenced humans. Early civilizations first arose along the edges of rivers in the fertile soils of floodplains. Wetlands also produce many benefits for humans—along with fertile soils for agriculture, they provide food such as fish and water birds, and, of course, freshwater. Additionally, wetlands have other vital roles that are less obvious. They produce oxygen, store carbon, and process nitrogen. Since wetlands form at the interface of terrestrial and aquatic ecosystems, they possess features of both. They are often overlooked in standard books, since terrestrial ecologists focus on drier habitats, while limnologists focus on deep water. Shallow water, and seasonally flooded areas, fall comfortably into neither category. All wetlands share one causal factor: flooding. While wetlands may be highly variable in appearance and species composition, flooding produces distinctive soil processes and adaptations of the biota. Thus wetlands and water are inseparable. This treatment will first introduce you to some basic overviews that explain what a wetland is, what different kinds of wetlands exist, and some key processes that occur within them (*General guides and Introductions*). Then we will turn to causal factors: flooding creates wetlands, so it receives a full section. Then we will consider how nutrient availability modifies wetlands. *Other Causal Factors*, such as salinity, competition, herbivory, and roads, are combined into a third section. Having provided this foundation, we will look at the global distributions (*Geography of Wetlands*). By this point, you will know what a wetland is, where they occur, and the main factors that affect their abundance and composition. We will then explore two more specialized topics. First, monographs are identified that apply to particular regions of the Earth (*Regional Monographs*). Second, we look at aquatic plants; they are a relatively small group with important implications for the understanding of wetlands as a whole (*Aquatic Plants*). We close with a section on conservation of wetlands. Two general obstacles must be met in coming to grips with the scientific literature in this field. 
First, much of the work on wetlands is scattered across ecological journals and may not even appear under key word searches for wetland; instead, material may appear under a term such as bog, fen, shoreline, lake, floodplain, pothole, playa, peatland, or mire (or a dozen other terms). Second, this discipline seems to have attracted a large number of conference symposia, the findings of which are recorded often in expensive books with a haphazard collection of papers, written by a haphazard collection of people, with no unifying theme whatsoever except that all deal with wet areas. Hence, the need is pressing for a few general principles to structure one’s knowledge. Here we focus on general causal factors and their relative importance. General Guides and Introductions To learn about wetlands and communicate with other human beings, we need a common frame of reference. Otherwise, our knowledge is more like a heap of bricks than a properly constructed building. Let us begin with three books that provide this common frame of reference. First, Dugan 2005 is a guide that is accessible to the general reader and useful for the professional. The author begins with two basic topics: What are wetlands and why we need wetlands. He then continues with a two-hundred-page survey of the world’s wetlands, supplemented with maps and beautiful illustrations. Next, Wetland Ecology: Principles and Conservation (Keddy 2010) also begins with a general introduction to wetlands. It then proceeds through a series of causal factors that make wetlands, roughly in the order of their importance: flooding, fertility, disturbance, competition, herbivory, and burial. Each of these chapters begins with general principles and then explores experimental and descriptive work that shows how these principles apply to wetlands around the world. Third, Wetlands (Mitsch and Gosselink 2015) also begins with a general introduction to wetlands. However, unlike Dugan 2005 and Keddy 2010, it then divides coverage into five types of wetland ecosystem, with separate chapters on tidal marshes, mangrove swamps, freshwater marshes, freshwater swamps, and peatlands. Whereas Dugan and Keddy emphasize biological diversity, Mitsch and Gosselink tend to emphasize energy flow and biogeochemistry. If you read these three books, you can consider yourself well informed on wetlands as a whole. You can think of these as the trunk upon which many more branches of knowledge are organized. The interested reader can then proceed in two directions. In the first case, one can deepen one’s knowledge of the causal factors that create wetlands and proceed with topics such as *Flooding and Flood Pulses* and *Nutrients*. Or one can focus on the many kinds of wetlands that arise in a local context and proceed with *Regional Monographs*. Finally, with the above sources as a foundation, one can directly consult specialized journals, such as Wetlands, the journal published by the Society of Wetland Scientists since 1981. Otherwise, much of the specialist work on wetland ecology is scattered across journals that deal with ecology and geography. Further, owing to the commercial importance of animals in wetlands (think ducks, muskrats, fish) many papers can be found in fish and wildlife journals, work that is too often marred by an inordinate emphasis upon production of one or a few species of animals. Many wetlands have been damaged in the name of “wildlife management.” Dugan, Patrick, ed. 2005. Guide to wetlands. Keddy, Paul A. 2010. 
Wetland ecology: Principles and conservation. 2d ed.
Mitsch, William J., and James G. Gosselink. 2015. Wetlands. 5th ed. Hoboken, NJ: Wiley. [ISBN: 9781118676820]
Wetlands[http://www.springer.com/life+sciences/ecology/journal/13157]. 1981–. [class:periodical]

Flooding and Flood Pulses

Flooding makes wetlands. This has three main consequences. (1) Flooding causes reduced oxygen levels in the soil. These changes are generally described in Keddy 2010 and Mitsch and Gosselink 2015 (both cited under *General Guides and Introductions*). For more depth and breadth, one can consult Reddy and DeLaune 2008. (2) Plants and animals have to adapt to reduced oxygen levels. The presence of distinctive plants with channels for transmitting oxygen from the atmosphere to the roots (aerenchyma) is a defining characteristic of wetlands. Aquatic plants offer the most extreme case of plants adapted to flooding, and they are therefore further treated in a separate section, *Aquatic Plants*. (3) Sometimes the water is higher than at other times. High spring flooding creates extensive areas of wetlands along rivers, along the shores of lakes, and in many other kinds of depressions. Keddy 2010 has an entire chapter on this topic, while other monographs, such as Middleton 2002, describe this as “flood pulsing.” An entire literature can now be accessed under “flood pulsing.” It is particularly important for fish (Welcomme 1979). You can say it a hundred times and write books on the topic—yet people will express shock and dismay that their floodplain property is flooded in the spring, and they will equally complain about low water levels that make it inconvenient to use their boat docks. They will also complain when some authority tells them they cannot build a house or factory in a flood-prone area, expecting, of course, that if anything does happen, an insurance company or government will pay for the damage. Yet, so long as snow melts in the spring and rainy seasons arrive, water levels in rivers will be high. A major impact humans have had on wetlands is the systematic disruption of flood peaks in wetlands and watersheds around the world (Nilsson, et al. 2005). Hughes 2003 shows how the restoration of spring floods is necessary for restoring ecological health to wetlands and watersheds. Wilcox, et al. 2007 illustrates the same principle for large lakes. The importance of flood pulsing is now well documented, yet no doubt individuals will continue to think that rivers and lakes should have stable levels so they can build their houses wherever they care to—alas, excellent science does not seem to provide an antidote to ignorance.

Hughes, Francine M. R., ed. 2003. The flooded forest: Guidance for policy makers and river managers in
Middleton, Beth A., ed. 2002. Flood pulsing in wetlands: Restoring the natural hydrological balance.
Nilsson, Christer, Catherine A. Reidy, Mats Dynesius, and Carmen Revenga. 2005. Fragmentation and flow regulation of the world’s large river systems. Science 308:405–408.
Reddy, K. Ramesh, and Ronald D. DeLaune. 2008. Biogeochemistry of wetlands: Science and applications.
Welcomme, Robin L. 1979. Fisheries ecology of floodplain rivers.
Wilcox, Douglas A., Todd A. Thompson, Robert K. Booth, and James R. Nicholas. 2007. *Lake-level variability and water availability in the Great Lakes[http://pubs.usgs.gov/circ/2007/1311]*. US Geological Survey Circular 1311.
Washington, DC: US Department of the Interior. [class:report]

Nutrients and Fertility

Two elements, nitrogen and phosphorus, control rates of primary production and determine species composition in wetlands. Alluvial floodplains and deltas have high nutrient levels, as nutrients are carried in spring flood waters and accumulate in the sediment. Here one finds some of the highest rates of primary production in the world, in excess of 1,000 g m⁻² yr⁻¹ (Whittaker and Likens 1973). This often translates directly into animals, particularly fish (Welcomme 1979, cited under *Flooding and Flood Pulses*). It is difficult to generalize whether it is nitrogen or phosphorus that limits growth (Verhoeven, et al. 1996). Nutrients are not necessarily beneficial. In shallow water, nutrients can generate algal blooms with negative consequences for marsh and aquatic vegetation, while at larger scales, entire lakes or estuaries may become so nutrient enriched that the resulting decay consumes oxygen, producing “dead zones” (Turner and Rabalais 2003).

Davis, Steven M., and John C. Ogden, eds. 1994.
Kadlec, Robert H. 2009. The
Kadlec, Robert H., and Scott D. Wallace. 2009. Treatment wetlands. 2d ed.
Turner, R. Eugene, and Nancy N. Rabalais. 2003. Linking landscape and water quality in the
Verhoeven, Jos T. A., Willem Koerselman, and Arthur F. M. Meuleman. 1996. Nitrogen- or phosphorus-limited growth in herbaceous, wet vegetation: Relations with atmospheric inputs and management regimes. Trends in Ecology and Evolution 11.12: 494–497.
Whittaker, Robert H., and Gene E. Likens. 1973. Carbon in the biota. In Carbon and the biosphere. Edited by George M. Woodwell and Erene V. Pecan, 281–302.
Zampella, Robert A., John F. Bunnell, Kim J. Laidig, and Nicholas A. Procopio. 2006. Using multiple indicators to evaluate the ecological integrity of a coastal plain stream system. Ecological Indicators 6.4: 644–663.

Other Causal Factors

For each particular wetland, there is a hierarchy of causal factors. The challenge for a scientist or a manager is to identify these causal factors and to determine which ones are the most important at a specific site. In general, it is useful to view the composition of a wetland as arising from these causal factors acting upon the pool of species available in the landscape (Weiher and Keddy 1999). Two factors of overriding importance, flooding and nutrients, have already been discussed. Both of these are partially controlled by the geological setting, which acts as a templet for most wetlands (Warner 2004). Superimposed on these foundations is a long list of other factors. The chapters in Keddy 2010 (cited under *General Guides and Introductions*) are organized in approximate order of their importance: flooding, fertility, disturbance, competition, herbivory, and burial and other factors. Here we will consider just four beyond flooding and fertility: (1) salinity, (2) herbivores, (3) fire, and (4) roads. (1) Salinity is a very important factor near coastlines, with species and communities arranged along salinity gradients created by freshwater inputs (Tiner 2013). (2) Herbivores can have a major impact. The impact of muskrats in marshes provides a classic case in which high population densities of herbivores can lead to almost total loss of aboveground vegetation (Keddy 2010). Such top-down effects are becoming better understood; when humans remove the top carnivores (such as crabs or alligators), the effects can be dramatic (Silliman, et al. 2009).
(3) Wetlands can burn during periods of drought. The chapter on fire in the Everglades in White 1994 is a classic example; here, fire not only removes plant biomass but can even remove peat, thereby producing new areas of open water during the next wet period. (4) Roads can have a significant effect upon the biota of wetlands in populated regions. Not surprisingly, road density is a rather good surrogate for the overall impacts of humans in the landscape (Houlahan, et al. 2006). For a global context of road impacts, consult Laurance, et al. 2014. The most important point when reading about these other causal factors is to keep them in perspective. In each wetland, some are very important while others are less important. Here is a case where wetland ecology is contingent: it is essential to know not only the important general factors that create a wetland, but also how these are modified by local circumstances and other causal factors. While reading the literature, one should make a concerted effort to rank other causal factors in order of relative importance.

Houlahan, Jeff E., Paul A. Keddy, Kristina Makkay, and C. Scott Findlay. 2006. The effects of adjacent land use on wetland plant species richness and community composition. Wetlands 26.1: 79–96.
Laurance, William F., Sean Sloan, Christine S. O’Connell, et al. 2014. A global strategy for road building. Nature 513.7517: 229–232. [doi:10.1038/nature13717]
Silliman, Brian R., Edwin D. Grosholz, and Mark D. Bertness, eds. 2009. Human impacts on salt marshes: A global perspective. Berkeley: Univ. of
Tiner, Ralph W. 2013. Tidal wetlands primer: An introduction to their ecology, natural history, status and conservation.
Warner, Barry G. 2004. Geology of Canadian wetlands [http://journals.hil.unb.ca/index.php/gc/article/view/2751/3210]. Geoscience
Weiher, Evan, and Paul A. Keddy, eds. 1999. Ecological assembly rules: Perspectives, advances, retreats.
White, Peter S. 1994. Synthesis: Vegetation pattern and process in the

Geography of Wetlands

Another way to approach the topic of wetlands is to ask where they occur in the world, how they appear, and the kinds of creatures that are found there. A survey like this is a challenge since the volume of detail is far greater than any single book can cover. The natural world is indeed fractal. Still, having said this, the best guide for beginners is a small book, Dugan 2005 (cited under *General Guides and Introductions*), which can be paired with the online map at Global Wetlands 1993. Another useful online source is the list of Ramsar designated wetlands (Ramsar 2015). The problem with the latter list is that it is heavily biased toward

Dugan, Patrick, ed. 1993. Wetlands in danger: A world conservation atlas.
Fraser, Lauchlan H., and Paul A. Keddy, eds. 2005. The world’s largest wetlands: Ecology and conservation.
Global Wetlands[http://www.unep-wcmc.org/resources-and-data/global-wetlands]. 1993. Cambridge, UK: United Nations Environment Programme (UNEP), World Conservation Monitoring Centre (WCMC).
Gopal, Brij, Jan Kvet, Heinz Löffler, Victor Masing, and Bernard C. Patten. 1990. Definition and classification. In Wetlands and shallow continental water bodies. Vol. 1, Natural and human relationships. Edited by Bernard C. Patten, 9–15. The Hague: SPB Academic Publishing. [ISBN: 9789051030464]
Hughes, Ralph H., and Jane S. Hughes. 1992. A directory of African wetlands.
Ramsar. [http://www.ramsar.org/sites-countries/the-ramsar-sites] 2015. Gland, Switzerland: Ramsar Convention Secretariat.
A knowledge of wetlands should begin with an appreciation of general principles and how key environmental factors structure wetlands overall. Having said that, each general principle needs to be calibrated or refined for each particular environment. Hundreds of different types of wetlands exist, many with local names in different languages (e.g., flark, pan, playa, pocosin, yazoo, etc.). Each of these has a distinctive set of qualities or characteristics created by distinctive combinations of factors, such as climate, bedrock, geological history, and biota. Sometimes one is fortunate enough to find a monograph that highlights their distinctive features. Because such monographs are so numerous in so many languages, space constraints do not permit providing a list for every part of the world, let alone in other languages. The important point is that such monographs often exist, often written by a local expert. The challenge is to find them. I illustrate here the sort of article you are after by sharing some examples from my ecological region in English that I have found useful. It is up to you to find a similar set for your own ecological region and/or language. Consider it a treasure hunt of sorts. For peat bog ecology in eastern

Dansereau, Pierre, and Fernando Segadas-Vianna. 1952. Ecological study of the peat bogs of eastern
Odum, William E., and Carole C. McIvor. 1990. Mangroves. In Ecosystems of
Peace–Athabasca Delta Project Group. 1972. The Peace–Athabasca Delta Summary Report, 1972.
Richardson, Curtis J., ed. 1981. Pocosin wetlands: An integrated analysis of coastal plain freshwater bogs in North Carolina.
Smith, Loren M. 2003. Playas of the
van der Valk, Arnold G. 1989. Northern prairie wetlands. Ames:
Wilcox, Douglas A. 2012.

Aquatic Plants

Aquatic plants provide a distinctive and instructive situation for all wetland ecologists. They are an extreme case: they make up probably just 1 percent of the world’s flora. Most of the world’s 350,000 species of plants simply cannot tolerate continual flooding; even short periods of inundation can kill plants by eliminating the oxygen needed for root respiration. The best introduction to this unusual group of plants remains The Biology of Aquatic Vascular Plants (Sculthorpe 1985). It ranges across anatomy, morphology, growth, dispersal, and ecology, and this volume should be on the shelf of any ecologist who encounters wetlands. Hutchinson 1975, a volume on limnological botany, does not replace Sculthorpe 1985, but it does add new examples and context. Moreover, it provides one hundred pages dealing with the distribution of macrophytes in lakes. (It may also amuse you to read a Yale professor complaining in 1975 (p. vii) about the “enormous increase in the price of books.”) For more on the historical foundations of aquatic botany, an 1886 German monograph has now been translated as Schenck 2003. Two refinements to these monographs should be noted. First, with regard to flood tolerance conferred by aerenchyma, good evidence now exists of mass flow through leaves and rhizomes (Dacey 1981). Second, with regard to causal factors for plant distributions, biological factors, such as competition and herbivory, may need greater emphasis (see Keddy 2010, cited under *General Guides and Introductions*). If we are trying to restore wetlands, it is important to know how and why aquatic plants are dispersed and assembled into ecological communities; one recent overview of wetland plant traits and their consequences is Pierce 2015.
Aquatic plants, then, although they constitute a small group, have much to teach us about wetland plants (and wetlands) as a whole. Indeed, although it may seem somewhat circular, one of the best indicators of a wetland is the presence of wetland plants. This is so central that it is used in both scientific and legal definitions. Even the US Army Corps of Engineers maintains a website with an official list of wetland plants (US Army Corps of Engineers 2015). One challenge in reading the older literature is the many changes in plant names that have occurred over the last century, particularly with recent advances in molecular systematics. Scirpus, or Schoenoplectus? Aster or Symphyotrichum? There is no easy solution, except use of online reference works with contemporary nomenclature. For

Dacey, John W. H. 1981. Pressurized ventilation in the yellow waterlily. Ecology 62.5: 1137–1147.
Hutchinson, G. Evelyn. 1975. A treatise on limnology. Vol. 3, Limnological botany.
Pierce, Gary J. 2015. Wetland mitigation: Planning hydrology, vegetation, and soils for constructed wetlands[http://wetlandtraining.com/wp-content/uploads/2015/07/Wetland-Mitigation-excerpts.pdf].
Schenck, Heinrich. 2003. The biology of aquatic plants. Translated by Donald H. Les.
Sculthorpe, Cyril D. 1985. The biology of aquatic vascular plants. Königstein, Germany: Koeltz Scientific. [ISBN: 9783874292573]
US Army Corps of Engineers. 2015. http://rsgisias.crrel.usace.army.mil/NWPL/

Conservation of Wetlands

The conservation of wetlands requires the intelligent application of a few basic principles. The most important of these is maintaining the appropriate water levels, particularly the within-year and among-year variation in water levels (see *Flooding and Flood Pulses*). Dams and reservoirs upstream of wetlands reduce these flood pulses and cause declines in the biodiversity and area of the wetlands. It is important to get the water right (Pierce 2015, cited under *Aquatic Plants*). The next challenge is to maintain water quality. We are in an era of growing eutrophication, driven by the application of nitrogen and phosphorus to farmland, and by inputs of human and animal excrement into watercourses. Hence, in many cases, the conservation challenge is to keep nutrient levels as low as possible (see *Nutrients*). Once one has appropriate water levels and nutrient regimes, much of the work is completed. Of course, other causal factors affect wetland composition and services, and these will need to be addressed on a case-by-case basis (see *Other Causal Factors*). The next step is to ensure that the wetland is designated for conservation within defined boundaries. This core area must be surrounded by a carefully managed buffer zone and connected to other wetlands by corridors. The framework of core areas, buffer zones, and corridors is described in Noss and Cooperrider 1994. A brief overview with specific reference to wetlands is provided in Keddy 2010 (pp. 403–406, cited under *General Guides and Introductions*). A well-conserved wetland is therefore part of a protected network that is managed with reference to the key factors that control wetland composition and maintain wetland functions. Regular monitoring of selected indicators (McKenzie, et al. 1992) is then needed to ensure that the desired composition and the desired ecological services are maintained through time. When monitoring shows that composition is changing, or that functions are declining, one must identify the correct causal factor and take steps to remediate the problem.
This process is sometimes called “adaptive management.” One of the fundamental causes of undesirable changes in the area and composition of wetlands (indeed, of natural ecosystems overall) is expanding human populations (Foreman 2014). *Biosphere Reserves* illustrates the general challenge of integrating protected areas with surrounding human populations.

Foreman, Dave. 2014. Man swarm: How overpopulation is killing the wild world. Albuquerque, NM: Rewilding Institute. [ISBN: 9780986383205]
Holling, C. S., ed. 1978. Adaptive environmental assessment and management.
McKenzie, Daniel H., D. Eric Hyatt, and V. Janet McDonald. 1992. Ecological indicators. 2 vols.
Noss, Reed F., and Allen Y. Cooperrider. 1994. Saving nature’s legacy. Washington, DC: Island Press.
United Nations Educational, Scientific and Cultural Organization (UNESCO). Biosphere reserves[http://www.unesco.org/new/en/natural-sciences/environment/ecological-sciences/biosphere-reserves/]. Paris: UNESCO.
A plesiosaur is a prehistoric creature from the order Plesiosauria. Although these creatures are often called “dinosaurs” by lay people, they were not, in fact, dinosaurs, and the two groups of animals were very different, with markedly different habitats and lifestyles. The plesiosaur is generally believed to be extinct, although apocryphal examples of living specimens occasionally crop up in the news. The term “plesiosaur” means “almost lizard” in Greek, and these marine reptiles did indeed have several traits more closely associated with land-bound reptiles. It is assumed that they reproduced through eggs, possibly laying them on or near shore, and evidence suggests that these creatures died out during the Cretaceous-Tertiary extinction event, which claimed the lives of numerous and unusual plant and animal species that could not adapt to changing conditions on Earth. Plesiosaurs lived in the ocean, with bodies specially adapted for swimming. The examples we have found in fossil form all have broad bodies, flipper-like limbs, and short tails, and biologists have suggested that they probably moved through the water much like penguins. Both short-necked and long-necked fossilized plesiosaur specimens have been discovered, although the long-necked variety is probably more famous. Some very fine examples of plesiosaur skeletons can be seen in many natural history museums. The fossilized remains of plesiosaurs seem to indicate that the animals probably couldn't swim very rapidly, but they could swim well. Their four limbs would have acted like paddles, making it very easy for the creatures to rapidly swivel their bodies while seeking prey. Evidence suggests that plesiosaurs probably drifted below the surface, waiting for unwary victims to pass overhead, and then snapped them up with their extremely powerful jaws. The long-necked plesiosaur is of immense interest to some fans of prehistoric creatures. In many drawings, the animals are depicted with raised necks, suggesting that they had very muscular necks and bodies. Biologists have suggested, however, that their necks may not have been as mobile as people believe; the sheer weight and size of the neck would have made it hard to move. Others point to giraffes, land-based creatures known for their extraordinary necks, to demonstrate the possible ways in which plesiosaurs might have moved.
EVALUATION OF MISSISSIPPIAN PERIOD HUNTING PRACTICES IN GEORGIA

Humans have been hunting for hundreds of thousands of years. It is only during the last 10,000 years that agriculture first appeared, but even with the domestication of plants, hunting for protein has always been desired by groups all over the world. In the Southeastern United States, hunting and plant cultivation co-occurred from the Late Archaic period (2500-1000 BC) onwards. It is not until the Mississippian period (AD 1000-1530) that maize agriculture comes to dominate the diet. It is during this time that hunting supplements the diet of Mississippian peoples. This research will examine how hunting was integrated into the diet of these peoples, given their reliance on maize. Two prehistoric sites will be utilized in the research: the Upper Dabbs Site, located in Cartersville, Georgia, and the Traversant Site, located in Pine Mountain, Georgia. Each site offers a different environmental and ecological habitat for Mississippian peoples. The Traversant Site is situated within the Chattahoochee River Valley below the Fall Line, and the Upper Dabbs Site is situated within the Etowah River Valley above the Fall Line. While each region may contain similar animal resources, their acquisition may differ due to a number of factors, from environmental and ecological conditions to hunting strategies. This research seeks to understand what prey were available to prehistoric hunters at Traversant and Upper Dabbs and the methods and techniques they used to acquire protein for their diet. Catchment areas, topographic features, ecological zones, different types and sizes of prey, and hunting equipment will be thoroughly examined. Hunting equipment included will be the bow and arrow and the knife or projectile point, to replicate the methods used to hunt prey. The research hopes to assess the success of hunters during the Mississippian period. The research will not be completed until May. Long, Bryant C. "EVALUATION OF MISSISSIPPIAN PERIOD HUNTING PRACTICES IN GEORGIA," Georgia Journal of Science, Vol. 77, No. 1, Article 54. Available at: https://digitalcommons.gaacademy.org/gjs/vol77/iss1/54
Preserved Stone Age impressions were made about 20,000 years earlier than thought. Human footprints found in Romania’s Ciur-Izbuc Cave represent the oldest such impressions in Europe, and perhaps the world, researchers say. About 400 footprints were first discovered in the cave in 1965. Scientists initially attributed the impressions to a man, woman and child who lived 10,000 to 15,000 years ago. But radiocarbon measurements of two cave bear bones excavated just below the footprints now indicate that Homo sapiens made these tracks around 36,500 years ago, say anthropologist David Webb of Kutztown University in Pennsylvania and his colleagues. Analyses of the 51 footprints that remain — cave explorers and tourists have destroyed the rest — indicate that six or seven individuals, including at least one child, entered the cave after a flood had coated its floor with sandy mud, the researchers report July 7 in the American Journal of Physical Anthropology.
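As a rough cross-check that is not part of the news report above, a date of about 36,500 years corresponds to roughly 36,500 / 5,730 ≈ 6.4 half-lives of carbon-14 (taking the commonly quoted half-life of about 5,730 years), so only about one percent of the original carbon-14 remains in the bear bones:

# Back-of-the-envelope illustration only; 5,730 years is the standard carbon-14 half-life.
print(0.5 ** (36500 / 5730))  # about 0.012, i.e. roughly 1.2% of the original carbon-14 remains

A fraction that small is still measurable, which is why dates in this range remain within the practical reach of radiocarbon dating.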
What is Tutoring? Tutoring is an age-old practice. The dictionary definition describes a tutor as a person who gives individual, or in some cases small group, instruction. The purpose of tutoring is to help students help themselves, or to assist or guide them to the point at which they become independent learners, and thus no longer need a tutor. The role of the tutor is diverse. Content knowledge is an essential ingredient for a tutor; however, to be truly effective, a tutor must combine content knowledge with empathy, honesty and humor. Empathy requires a tutor to "read" the emotional states, attitudes and perceptions of their students. Empathy is the ability to see others from their personal frame of reference, and to communicate this understanding to the person involved. In order for tutors to establish a supportive relationship with their students, tutors must be open and honest. Students are often reluctant to talk with a stranger about their academic problems. If a tutor is perceived as genuine and having a strong desire to listen, students will be more willing to open up and discuss their problems. Humor can also play an important part in a tutoring session. Humor can reduce tension. Shared laughter is a powerful way to reinforce learning. Humor can set students at ease and increase rapport. Humor can also be used to compliment, to guide or to provide negative feedback in a positive manner. In addition, a successful tutor demonstrates a caring attitude. Caring consists of being organized for the tutoring session, being punctual, establishing a learning relationship with the student, developing unique tutoring strategies, and becoming familiar with the learning process. Tutoring is sharing yourself with another student in a way that makes a difference in both your lives. There are many benefits to tutoring, beginning with benefits to the tutor. - Heightens sense of competency/adequacy in conforming to new role. - Encourages higher levels of thinking. - Permits more advanced students to study below-level material without embarrassment. - Increases motivation to learn in order to maintain new role. - Increases ability to manage own learning and study strategies. - Increases subject specific knowledge. - Increases related general knowledge. - Increases understanding of subject area. - Improves attitude toward subject area. - Provides more empathy with students. There are also many benefits to the students who receive tutoring. - Offers more individualized, systematic, structured learning experience. - Provides greater congruence between teacher and learner, closer role model. - Improves academic performance and personal growth. - Improves attitude toward subject area. - Generates stronger effects than other individualized teaching strategies. - Motivates self-paced and self-directed learning. - Provides intensive practice for students who need it. - Improves self esteem. There are many benefits to the college. - Increases opportunity to reinforce instruction. - Increases positive student interaction. - Enhances measurable positive changes in attitude towards teaching/learning for the participants. - Improves educational climate. - Facilitates ethnic and racial integration. Characteristics of Good Tutors Intelligence alone does not indicate success as a tutor; what kind of person and what kind of student you are matters more. It takes a certain kind of person to be a good tutor. Some of the characteristics noticeable in good tutors are: - A positive outlook: The belief that things can be changed through action. 
- A desire to help others: The willingness to become involved with people at first hand and in depth. - Empathy: The ability to feel what another person is feeling. - An even disposition: Patience, gentleness, understanding and fairness. - An open mind: A willingness to accept other people and their point of view. - Initiative: The ability to see what needs to be done and to do something about it. - Enthusiasm: A liking for your subject, and a wish to share it with others. - Reliability as a worker: Punctual, dependable, steady. Summary of What Students Need: - Positive expectations - Mutual respect - Acceptance that everyone makes mistakes - Effective communication - Applications/reasons for learning - Connections between new material and prior knowledge - "The Big Picture" - The language of the discipline - Thinking or wait time before answering - Separation of relevant from irrelevant information - Techniques for: time management, test taking, relaxing, studying, notetaking, organizing, representing and remembering concepts and their relationships. Five Steps to Being an Effective Tutor Step One: Know what is Expected of You as a Tutor Tutoring is the process of getting students to become independent through questioning. Tutoring should help students develop self-confidence and improve study skills. In addition, the tutoring session should provide students with an opportunity to speak up and ask questions, an opportunity sometimes unavailable or missed in a regular classroom situation. Tutoring is a well-balanced question/information exchange in which both parties participate and, therefore, both benefit. Tutoring provides the practice and drill in specific course material needed by the student, while giving the tutor valuable review opportunities and the chance to develop and sharpen educational and communication skills. Tutoring is not teaching. There are important differences between the role of the tutor and that of the classroom teacher. Approaches, relationships, and techniques are different. The tutor works in very close proximity with the student, usually one-on-one. The student may not be accustomed to the close contact and the interchange that occurs during a tutoring session. The tutor may have to consciously strive to develop a good rapport with the student within this environment. Step Two: Setting Up the Tutoring Session It is important to shape the tutoring environment. This can be difficult in the busy LRC; however, if you follow these simple procedures, you will have a successful session. - Prepare yourself for the tutoring session - Prepare a greeting and review expectations - Be prepared for potential problems Step Three: Meeting Your Student's Needs Assess the student’s understanding of the subject by asking questions. Determine the student’s need for them to succeed in the subject. Strategies will vary, but do remember to engage the student. 
Try not to lecture and attempt to use: Step Four: The Ingredients of a Good Tutoring Session The following are some of the necessary ingredients for a good session: - Greet your Student and give them your undivided attention - Have empathy with your Student's problems - Be honest with your client - Set the Agenda - Have a sense of humor - Have the ability to "lighten up" a situation - Have a good interaction with your Student, a good give-and-take - Know your Student's strengths and weaknesses - Work through your Student's strengths to improve his/her weaknesses - Make your Student feel good about him/herself and his/her accomplishments - Know when to stop a session - End the session on a positive note Step Five: Ending the Tutoring Session Do not just say "good-bye" when the session is over. You should: - Positively assess the work that was done during the session - Re-schedule for another session if necessary - Do any necessary tutor paperwork - Always end the session with a positive comment Techniques that Work It has been estimated that it takes only three or four minutes for the average person to form a positive or negative first impression. What does this mean to a tutor? Make that first meeting with your tutee a positive experience. Be consistent in body, voice and words. Initiate eye contact. Listen with your body by smiling and nodding your head. Nonverbal messages are the most powerful form of communication. Take the communication skills test and find out how your communication skills rank. Establishing rapport with your tutee is very important. You can help create a good rapport by listening patiently and remaining open to what the tutee has to say. It is also important to know why the student has requested tutoring. Some students know exactly where they are having trouble. Some students point out general areas of difficulty. Some students can only vaguely describe the source of their confusion. To help these students, simply ask them where they are having problems. It could be that they fear the subject because of past failure. It could be that they are taking the class because it is a requirement; therefore they have no interest in the subject. The students could also be lacking confidence in their ability to master the material, or they could be overwhelmed by the time requirements imposed on them for this particular class. The reason for the tutoring request is important because it will give you a focus to plan your future tutoring Another approach to finding out why the student is seeking assistance is to review the course materials with the student. Use the course outline, text, or assignments to figure out precisely where the student is having problems. Ask questions that encourage students to state what they know about the material. A technique critical to a successful tutoring session is the ability to ask the right question. There are many types of questions that a tutor can use in a tutoring session. Good questioning techniques are essential to a successful tutoring session. It is important to use the right words. Try asking "What do you understand?" If you ask students what they don't understand, they will be clueless. Another important aspect of asking questions is waiting for an answer. Many tutors are too quick to answer their own questions. Give students an opportunity to reflect on the question before they volunteer a response. Always wait at least 20 seconds for the student to answer your question. 
This "wait time" might be uncomfortable at first, but it can greatly improve the tutoring session. Remember to ask leading questions. Questions that can be answered with yes/no have less value that those that ask the student to demonstrate understanding. "What if" questions and analogies are excellent strategies for expanding student understanding. Become familiar with the Socratic Method of teaching. It is the oldest, but still the most powerful teaching tactic for fostering critical thinking. Tutors can perform a valuable service when they assist students to figure out answers by themselves. There are three steps that can help you provide this service: Provide instruction, require a response, and give feedback. In other words, present the information briefly, have the student respond and talk about the material, let the student know when the answer is correct or incorrect. Learning to handle right and wrong answers is a vital part of tutoring. In addition, you might want to look at some tips on how to motivate your students to learn. The two most important factors that lead to student success are a strong motivation to succeed and good learning skills. What is Listening? Which activity involves the most listening? Students spend 20 percent of all school related hours listening. If television watching and one-half of conversations are included, students spend approximately 50 percent of their waking hours listening. For those hours spent in the classroom, the amount of listening time can increase to almost 100 percent. Look at your own activities, especially those related to college. Are most of your activities focused around listening, especially in the classroom? How well do you really listen? Take this test to find out. If you ask a group of students to give a one word description of listening, some would say hearing; however, hearing is physical. Listening is following and understanding the sound; in other words, listening is hearing with a purpose. Good listening is built on three basic skills: attitude, attention, and adjustment. These skills are known collectively as triple-A listening. Listening is the absorption of the meanings of words and sentences by the brain. Listening leads to the understanding of facts and ideas. Listening also takes attention, or sticking to the task at hand in spite of distractions. It requires concentration, which is the focusing of your thoughts upon one particular problem. One who incorporates listening with concentration is actively listening. Active listening is a method of responding to another that encourages communication. Listening is a very important skill, especially for tutors. Many tutors tend to talk too much during a tutorial session. This defeats the purpose of tutoring, which is to allow students to learn by discussion. Rather than turning the session into a mini-lecture, tutors must actively listen and encourage their students to become active learners. Giving a student your full attention is sometimes difficult because you start to run out of time, or you find yourself thinking about your next question; however, the time you spend actively listening to your student will result in a quality tutoring session. Look at these sites for improving your listening skills: Remember it is important for you to encourage your students to practice good listening skills. One way to accomplish this task is by sharing with them this simple mnemonic device on how to learn to listen. 
Active listening is a very demanding skill that requires practice and perseverance; however, active listening is also very rewarding. Good study skills are essential for good students. Since you are all good students, it is assumed you have good study skills. Do you? Take this simple test to determine the quality of your study skills. You probably already know your strengths and weaknesses; however, this test will remind you of possible areas that could be improved upon. Tutors are students who are successful learners. They have learned how to learn. As a tutor, it is your responsibility to communicate the principles of effective learning to your students. The majority of the students who seek tutoring do not have good study skills. If these students took a course on study skills, it would probably cover a broad spectrum of subjects, including: time management skills, memory skills, test taking skills, listening skills, note taking skills, textbook reading skills, and learning styles information. Encouraging students to develop good study skills requires you to assess the areas in which students need help. Usually, students will not be able to accurately identify the areas in which they need help. For instance, students who are always late for classes might need some time management techniques. You might ask them to describe the activities in their average day. Some students have never learned how to take notes in class. You might ask the students you tutor if you can look at their notes. If you see that they do not know how to take notes, you could recommend that they look over the different note taking systems. Or, you can ask them to tell you how they prepare to take notes for a lecture class. If they are lacking positive note taking skills, you might share with them some ideas on how to take good notes. Many students do not realize that there are techniques for taking a test, or more importantly, how to overcome test anxiety. You can give the student ideas on how to prepare for tests. It is always helpful and beneficial to go over returned tests with your students. As a tutor, you are a resource for your students. If you find that some of your students have poor reading skills, you can provide them with handouts that could improve their reading skills. Tutors can also teach students how to use memorization techniques. Improved reading and memorization skills can be a big step toward being a successful student. Students seeking tutoring have often experienced poor grades; this could be the direct result of their personal study habits. You can provide a valuable service to them by giving them direction and encouragement to develop good study skills. Look at these wonderful sites for valuable information on how to improve your study skills. - The Study Skills Help Page: Strategies for Success - Charles Sturt University, New South Wales-Study Skills, Exam Techniques - Virginia Polytechnic Institute and State University - Study Skills Self-help Information Every student has a unique learning style. According to Jody Whelden, a psychotherapist, counselor and teacher, "Each learning style is like an instrument in an orchestra. Students need to know what instrument is theirs and how they fit into the orchestra." Each student learns differently, at a different rate, using different learning styles. Everyone has a learning style. Our style of learning, if accommodated, can result in improved attitudes toward learning and an increase in academic achievement. 
By identifying your learning style, you will identify how you learn best. Learning styles do not reflect levels of achievement or academic ability, and no one style is better than another. Researchers have done experiments with at least 21 elements of learning style. They have found that most people respond strongly to between six and fourteen elements. The element chart classifies perceptual strengths as tactile/kinesthetic, visual, or auditory. Perceptual elements are important to identify because they will identify the learner's preferred learning modality. You have probably noticed that when you attempt to learn something new you prefer to learn by listening to someone talk on the subject. Some students prefer to read about the concept; others want to see a demonstration of the concept. Learning style theory suggests that students learn in different ways, and it is good to know your preferred learning style. Learn more about your particular learning style by taking this test. After you take the test, score it and print your score so you can complete your assignment. By becoming familiar with learning style theory, you will be able to recognize your students' style and you will be able to make suggestions on how they can use that strength to help them study. Be sure to look at the suggestions for each learning style, and look at these sites to learn more about learning styles. Have you taken even an introductory course in psychology? If you have, then you have probably taken a personality test. There are several personality models of varying usefulness and accuracy. Carl Jung’s personality theory was originally converted into a practical instrument by Myers and Briggs. Their Myers-Briggs Type Indicator® assessment tool is used extensively in education and career counseling. The personality system used in the Keirsey Sorter is also based on Jung's theory of personality type. Personality type tests attempt to identify a person's personality "type." Personality influences the preferred approaches to acquiring and integrating new information. Take a personality test. The test measures extroversion versus introversion, sensing versus intuition, thinking versus feeling, and judging versus perception. There are 16 different personality types. What is your type? Look at this site for a more detailed description of your type. Knowing your learning style preferences and your personality type can help you plan for activities that take advantage of your natural skills and inclinations. It will help you to be aware of your strengths and weaknesses. It will also help you to capitalize on your strengths and to compensate for your weaknesses, as well as help you become a better tutor. A Learning Disability (LD) is a permanent disorder which affects the manner in which individuals with normal or above average intelligence take in, retain and express information. Like interference on the radio or a fuzzy TV picture, incoming or outgoing information may become scrambled as it travels between the eye, ear or skin, and the brain. This is one definition of a learning disability. Abilities are frequently inconsistent: a student who is highly verbal with an excellent vocabulary may have difficulty spelling simple words; a student who learns very well in lecture may be unable to complete the reading assignments. These striking contrasts in abilities and learning style were evident in many famous individuals. 
For example, Nelson Rockefeller had dyslexia, a severe reading disability, and yet he was able to give very effective political speeches. Learning disabilities are often confused with other non-visible handicapping conditions like mild forms of mental retardation and emotional disturbances. Persons with learning disabilities often have to deal not only with functional limitations, but also with the frustration of having to "prove" that their invisible disabilities may be as handicapping as paraplegia. Thus, a learning disability does not mean the following: - Mental Retardation: Students who are learning disabled are not mentally retarded. They have average to above average intellectual ability. In fact, it is believed that Albert Einstein and Thomas Edison had learning disabilities. - Emotional Disturbances: Students who are learning disabled do not suffer from primary emotional disturbances such as schizophrenia. The emotional support they need is due to the frustration mentally healthy individuals experience from having a learning disability. - Language Deficiency Attributable to Ethnic Background: Students who have difficulty with English because they come from different language backgrounds are not necessarily learning disabled. Effects of Learning Disabilities on College Students The following are characteristic problems of college students with learning disabilities. Naturally, no student will have all of these problems. - Inability to change from one task to another - No system for organizing notes and other materials - Difficulty scheduling time to complete short and long-term assignments - Difficulty completing tests and in-class assignments without additional time - Difficulty following directions, particularly written directions - Difficulty delaying resolution to a problem - Disorientation in time -- misses class and appointments - Poor self-esteem - Difficulty reading new words, particularly when sound/symbol relationships are inconsistent - Slow reading rate -- takes longer to read a test and other in-class assignments - Poor comprehension and retention of material read - Difficulty interpreting charts, graphs, scientific symbols - Difficulty with complex syntax on objective tests - Problems in organization and sequencing of ideas - Poor sentence structure - Incorrect grammar - Frequent and inconsistent spelling errors - Difficulty taking notes - Poor letter formation, capitalization, spacing, and punctuation - Inadequate strategies for monitoring written work - Difficulty concentrating in lectures, especially two to three hour lectures - Poor vocabulary, difficulty with word retrieval - Problems with grammar - Difficulty with basic math operations - Difficulty with aligning problems, number reversals, confusion of symbols - Poor strategies for monitoring errors - Difficulty with reasoning - Difficulty reading and comprehending word problems - Difficulty with concepts of time and money Developing a Tutoring Program Before determining what to work on with regard to a learning disability in a specific case, both the tutor and the student must understand the student’s specific strengths and areas for improvement. If a student brings up a learning disability or a disability becomes a problem, a few minutes should be spent discussing the student's learning disability, how it may affect him/her in school, and techniques for compensating for it. This is also the time to build trust. This can be accomplished by: - Treating the student as an equal. 
The student may have a learning disability, but he/she also possesses knowledge and talent that the tutor doesn't have. - Listening to what is important to the student. What areas of learning does he/she want to focus on? - Creating an atmosphere that permits the student to confide in the tutor. It is important to find a location away from peers and teachers, where learning disabled students can feel comfortable tackling problems without fear of being embarrassed. Final determination of what to work on is based on the following factors: - The nature and severity of the student's learning disability. - The student's concerns. - Course requirements. It may be helpful to list information under each factor and use this information to determine priorities for the tutoring program. Some students may just require assistance with papers and reading assigned in their courses. Others also may want to work on supplementary materials. For example, a student planning to take a statistics course may want to review basic algebra concepts and overcome problems understanding fractions. A student with reading comprehension difficulties may want to focus on ways to improve his/her vocabulary. There is a wealth of information regarding learning disabilities on the Internet. Look at these sites: What is culture? Culture refers to the sum total of acquired values, beliefs, customs, and traditions experienced by a group as familiar and normal. It includes language, religion, customs, and a history of the people. Students today come from a variety of cultural backgrounds. During the 1980s, immigrants accounted for 1/3 of the total U.S. population growth. In 1984, approximately one in four schoolchildren was a minority student. By 2020, that figure likely will increase to nearly one in two. During the next 20 years the U.S. population will grow by 42 million. It has also been predicted that Hispanics will account for 47% of the growth, Blacks 22%, Asians 18%, and Whites 13%. As a tutor, you will be working with students from other cultures. You will gain an appreciation for different cultures by providing the student with an atmosphere of trust and acceptance. Encourage the student to talk about his/her family and country. If you are asked about American customs, be sensitive to the tutee's viewpoints. What is socially acceptable in the U.S. might be unthinkable in the student's culture. Most foreign students are eager to talk about their country and traditions. This interaction might be a valuable learning experience for you. Look at these five ways to bring more multicultural awareness to your tutoring. Some questions you might want to ask a foreign student include: - Tell me about your travels in other countries and the U.S. - What are your impressions of life in the U.S.? - Why did you decide to come to Anoka-Ramsey Community College? - Have American customs been a problem for you? - What do you miss most about your country? When you begin tutoring a foreign student, be aware that sometimes the student will become dependent on you for more than just tutoring. The student might see you as a much-needed new friend, or as a source of information about not only scholarly interests but also social interests. Student dependence can become an obstacle to bridging the cultural gap. The following are tips for working with English as a Second Language (ESL) students: - Speak clearly and naturally, and avoid using slang. - Use repetition. - Frequently ask the student if what you are saying makes sense.
- Ask students to become the tutor and explain the concept to you. - Use restatement to clarify the student's response--I think you said... - If the student does not understand you, write down what you are saying. - If you do not understand the student, ask them to write what they are saying. - Encourage students to read and to use their dictionaries. Be sure to look at the following sites. They will give you additional information on multicultural awareness. - Ideas for Working with Students Who Speak English as a Second Language Created in 2007 - Title V Collaborative Grant awarded to John Jay College of Criminal Justice (JJC) and Queensborough Community College (QCC) - Multicultural Pavilion Even though group tutoring is less common in our center than individual tutoring, some tutors encounter small group situations while serving as a Supplemental Instruction Leader in a classroom setting. Group tutoring is far more challenging; however, it can be very rewarding. The group setting, while manageable by a skilled tutor, is quite limiting in terms of the amount of individual attention that can be provided; this potential problem grows in relation to the size of the group being tutored. Some of the differences are outlined as follows: Individual tutoring: - Time allows the individual student to ask many questions. - The student is instructed at his/her level and pace. - The student must actively participate in the session. - Content is tailor-made to individual student needs. Group tutoring: - Time per student is restricted. - Multiple abilities and backgrounds of students complicate the level and pace of instruction. - Non-participation by some students can occur. - Content covered must be suitable for the general needs of the group. As you can see, individual tutoring has many natural benefits, while group tutoring requires a more conscious leadership role on the part of the tutor. The primary advantage of group tutoring (and disadvantage of individual tutoring) is the potential for the sharing of a variety of views and information. Groups also demonstrate cooperative attitudes and work skills in contrast to individual tutoring, which is more self-centered by nature. The following are some basic group tutoring guidelines that enhance group learning. Remember that these guidelines (and skills) require conscious leadership on the tutor's part. - Keep in mind, as a group tutor, you are a resource for students and their learning. Your role is to facilitate their learning process. - Stand or sit where all can see and hear you. Arrange seating so it encourages interaction and visibility. - Waiting for students to volunteer a well-developed answer allows high-level thinking to take place. If you are uncomfortable waiting for 30 seconds, join students in looking through notes or text. If students are unable to answer the question, refer to the source of information. - Respect all questions or responses offered by students, no matter how basic. - Remember to use probing questions. - Don't allow individuals to dominate participation or discussion. Try to involve everyone in the learning activity; non-participants must be drawn into the activity. - Please don't interrupt student answers. Group tutors should provide a comfortable environment for practicing. To check for understanding, ask another student to describe the same concept in his or her own words. - Ask open-ended questions, and rephrase questions if they do not yield comments. - Remember to include humor in the group session.
- Keep the session on topic and moving at the appropriate pace for the group's abilities. - Maintain productivity of the session by preventing irrelevant arguing or repetition. - As the session comes to a close, provide closure. You can do this by asking the students what they learned during the session, what they still need clarification on, or what they would like to cover in the next session. You might also ask them to come to the next session with a few predictions of test questions. Summarize the ideas presented in the session. All over the world, faculty are using technology to break classroom boundaries and improve communication with their students. It is my belief that both teaching and learning will be enhanced with this online class; however, you are the final judge. Did the class fulfill your expectations? Did you learn the fundamentals of good tutoring techniques? Did you enjoy the experience? With these questions in mind, please complete your last assignment and evaluate the class. These evaluations will be shared with future students. Your honest evaluation will be greatly appreciated. Special thanks go out to Kathie Read for the courtesy of providing this information! Kathie Read, Learning Resource Center Coordinator of the American River College, graciously granted her permission for the Learning Center of Los Angeles Mission College to adapt her awesome Online Tutor Training Project for our college.
Nature & Ecosystems Objective: Giving students front-row seats to experience the diversity of the natural world, teaching them the basics of ecosystems. Description: This four-week lesson gives the students front-row seats to experience the diversity of the natural world, teaching them the basics of ecosystems: their structure, major characteristics, and the complex interactions between plants and animals within them. During each week, students explore one of the major types of ecosystems - temperate forest, tropical rainforest, savanna, and desert - and the relevant habitats, learning the names of significant plants and animals and witnessing first-hand the feel of each ecosystem through hands-on demonstrations and exhibits. At the end of each lesson, games and activities reiterate the unique features of each ecosystem so that the students have concrete examples with which to remember them.
Even the most well-meaning field researchers are often guilty of disturbing their study species, since visits to natural habitats involve activity and noise to which the animals would not otherwise be exposed. With a bit of caution, though, the negative impacts of these disturbances can be minimized. In some places, they can even be studied in order to elucidate the long-term impacts of such events. This was the case at a field site in the Gahai Wetland, China, where collaborators from Lanzhou University were studying black redstarts (Phoenicurus ochruros). These birds nest in clay cavities originally dug into the earth by Tibetan ground tits (Pseudopodoces humilis). Although the original excavations include a small nesting chamber at the end of a long tunnel, the redstarts position their nests somewhere along the tunnel. The distance between the mouth of the tunnel and the nest can be used as an indicator of how worried the redstarts are about predation: If the adults feel that they are in danger, they should put nests closer to the exit in order to hear predators, and effect an escape, sooner; if they fear for their young, they should locate the nests further down the tunnel in order to make it difficult for predators to gain access to the chicks. (Gahai Lake, China) The scientists took advantage of these variations in order to investigate whether nesting redstarts responded to the presence of researchers as they would to predators, by shifting the placement of their nests between breeding attempts. This worked because the redstarts have high site fidelity, which means that they return to breed in the same areas in successive years. Thus, the researchers recorded the locations of nests during years when the birds had not been exposed to people during the nest-building period; if the redstarts found the disturbance from the research project threatening, then when the birds returned to breed during subsequent seasons, they should have positioned their nests differently within the ground tit chambers. (Black redstart, Phoenicurus ochruros) Indeed, the proportion of nests at shallow locations (<20.0 cm from the mouth of the tunnel) decreased considerably after the introduction of researcher disturbance. Similarly, the average nest depth increased between undisturbed and disturbed years, from approximately 20 cm to almost 40 cm. The researchers were able to follow six individual females across multiple time points. Each bird responded similarly to exposure to humans: She built her nest at a greater depth the following year. Both of the failed nests recorded during the four-year study were caused by predation, indicating that attacks are a very real threat in this habitat (usually by common ravens, Corvus corax; red-billed choughs, Pyrrhocorax pyrrhocorax; and little owls, Athene noctua). These failures occurred at shallow depths, highlighting the benefit of shifting nests deeper in the chamber in response to nest-threatening disturbance. (Black redstart nest. The birds will occupy any secondary "cavity"--including those that aren't fully enclosed and are made of anthropogenic materials.) Prior to disturbance, mid-depth nests were dominant, which suggests that, in general, adults attempt to balance their safety needs with those of their young. The fact that they move their nests deeper in the cavities in response to anthropogenic activities suggests that redstarts view humans as a threat to their young, rather than to themselves.
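To make the size of that shift concrete, here is a minimal Python sketch using hypothetical nest-depth measurements (not the study's raw data; the numbers are invented only to mirror the reported pattern of roughly 20 cm before disturbance and roughly 40 cm after). It computes the proportion of shallow nests and the mean depth for each period.

```python
# Hypothetical nest depths in cm from the tunnel mouth; illustrative only,
# chosen to mirror the reported shift from ~20 cm to ~40 cm average depth.
undisturbed = [12, 15, 18, 22, 25, 28, 19, 21]
disturbed = [30, 35, 38, 42, 45, 40, 37, 48]

def summarize(depths, shallow_cutoff=20.0):
    """Return (proportion of shallow nests, mean depth) for a list of depths."""
    shallow = sum(1 for d in depths if d < shallow_cutoff) / len(depths)
    mean_depth = sum(depths) / len(depths)
    return shallow, mean_depth

for label, depths in [("undisturbed", undisturbed), ("disturbed", disturbed)]:
    shallow, mean_depth = summarize(depths)
    print(f"{label}: {shallow:.0%} of nests shallow (<20 cm), mean depth {mean_depth:.1f} cm")
```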
This response to “nonlethal predation” by humans shows that redstarts, like many other species of bird, are impacted even by innocuous human behaviors. It also indicates that at least some birds use previous experience with people to guide decision-making—in this case, at least a year later. Further work will be needed to determine just how long-term avian memory can be, and whether other species’ behaviors are also guided by recollections of unpleasant anthropogenic encounters. Chen, J.-N., Liu, N.-F., Yan, C., and An, B. 2011. Plasticity in nest site selection of black redstart (Phoenicurus ochruros): a response to human disturbance. Journal of Ornithology 152:603-608.
From the Chandra X-Ray Observatory: Using the unrivaled high resolution of NASA's Chandra X-ray Observatory, astronomers have seen important new details in the powerful jet shooting from the quasar 3C273. This research, coupled with optical and radio data, may reveal how these very high velocity jets are driven from the supermassive black holes that scientists believe lurk in the center of quasars. "For the first time, Chandra has given us an X-ray view into the area between 3C273's core and the beginning of the jet," says MIT's Herman Marshall, lead author on the paper submitted to Astrophysical Journal Letters. "Instead of being void of X-ray emission, Chandra has enabled us to detect a faint, but definite, stream of energy." The high-powered jets driven from quasars, often at velocities very close to the speed of light, have long been perplexing for scientists. Instead of seeing a smooth stream of material driven from the core of the quasar, most optical, radio, and earlier X-ray observations have revealed inconsistent, "lumpy" clouds of gas. This newly discovered continuous X-ray flow in 3C273 from the core to the jet may reveal insight on the physical processes that power these jets. Scientists would like to learn why matter is violently ejected from the quasar's core, then appears to suddenly slow down. "If there is a slower car in front on a highway, a faster one from behind will eventually catch up and maybe cause a wreck," says Marshall. "If the jet flow velocity changes, then gas shocks may result, which are akin to car collisions. These gigantic clouds of high-energy electrons, now seen in X rays with Chandra, may indeed be the result of some sort of cosmic traffic pile-up." The X-ray power produced in one of these pile-ups is tremendous. For example, the X-ray output of the first knot in the jet is greater than that of most Seyfert galaxies, which are thought to be powered by supermassive black holes. The abundance of X-ray emission suggests that large amounts of energy may also be produced in gamma rays, a question that researchers are unable to resolve with current telescopes. The energy emitted from the jet in 3C273 probably comes from gas that falls toward a supermassive black hole at the center of the quasar, but is redirected by strong electromagnetic fields into a collimated jet. While the black hole itself is not observed directly, scientists can discern properties of the black hole by studying the jet. The formation of the jet from the matter that falls into the black hole is a process that remains poorly understood. The quasar 3C273 is no stranger to making astronomical news. Discovered in the 1960s, 3C273 was one of the first objects to be recognized as a "quasi-stellar" object, due to its incredible optical and radio brightness but perplexing properties. Only after careful consideration did astronomers determine that 3C273 and others of its ilk were not nearby stars, but instead incredibly powerful objects billions of light years away. The Chandra observation of 3C273 was made with both the Low Energy Transmission Grating (LETG) and the High Energy Transmission Grating (HETG), in conjunction with the High Resolution Camera (HRC) and the Advanced CCD Imaging Spectrometer (ACIS). In addition to Dr. Marshall, the team of researchers includes J.J. Drake, A. Fruscione, J.
Grimes, D. Harris, M. Juda, R. Kraft, S.S. Murray, D. Pease, A. Siemiginowska, S. Vrtilek, and B.J. Wargelin (Harvard-Smithsonian Center for Astrophysics), P.M. Ogle (MIT), and S. Mathur (Ohio State University). The HRC was built for NASA by the Smithsonian Astrophysical Observatory. The HETG and ACIS instruments were built for NASA by the Massachusetts Institute of Technology, Cambridge, MA, and Pennsylvania State University, University Park. The LETG was built by the Space Research Organization of the Netherlands and the Max Planck Institute. NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program. TRW, Inc., Redondo Beach, Calif., is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, Mass.
The U.S. Department of Agriculture (USDA) and state governments provide inspection and grading. Grade AA and A eggs are defined as eggs that hold their shape well, with tall yolks and thick egg whites. The chalaza is prominent, another sign of freshness. Grade B eggs may have flattened yolks and the white tends to be thinner; typically these eggs are used by food manufacturers, bakers, and institutions. The size of the egg is a reflection of the age, weight, and breed of the hen, with mature hens producing larger eggs. Environmental factors that lower the weight of an egg include heat, stress, overcrowding, and poor nutrition. Specific egg sizes are classified according to weight, expressed in ounces per dozen. Most recipes for baked dishes, such as custards and cakes, are based on the use of "Large" eggs. The term "battery-raised" refers to eggs laid by chickens that are permanently caged. Although they are not required to be labeled as such, eggs are from battery-raised hens unless labeling indicates otherwise. Brown vs. white The color of the egg's shell is a reflection of the breed of hen. Breeds with white feathers and ear lobes, such as White Leghorns, lay white eggs. Those with red feathers or ear lobes lay brown eggs. White eggs are in high demand among most American buyers, but in certain parts of the country, particularly New England, brown shells are preferred. Breeds that lay brown eggs include the Rhode Island Red, New Hampshire, and Plymouth Rock varieties. Duck eggs are larger than those laid by chickens, and have a higher fat content. The white tends to be more gelatinous, and the yolks are a brighter yellow. Physical characteristics of the yolk reflect both the duck's diet and the egg's freshness. In some cases the duck egg has a stronger flavor than a chicken's egg. Scrambled or in omelets, duck eggs are well complemented by onions, peppers, mushrooms, or cheeses. Cooks accustomed to using duck eggs use them much like chicken eggs, taking into account their larger size. Some combine duck and chicken eggs to achieve the consistency they want in particular dishes. Professional bakers are said to prefer duck eggs because of their rich yolks and because the baked goods have better texture and hold their shape better. In Asian cuisine, duck eggs are sometimes pickled or preserved to make what are called "Thousand-Year-Old Eggs." Some people who are allergic to chicken eggs are able to tolerate duck eggs. Duck eggs are difficult to obtain and may be available only through specialty shops, Asian grocery stores, or by special order. Fertile eggs are laid by hens regularly exposed to a rooster. Eggs labeled "free range" are laid by uncaged chickens that are permitted to exercise and move about. Under genuine free-range conditions, hens are raised outdoors or have daily access to the outside. Some egg farms are described as indoor-floor operations; in this type of environment, the hens are raised indoors, but have some freedom of movement. The ostrich egg is said to have been a favorite food of Queen Victoria. Each egg contains the equivalent of about two dozen chicken eggs. An ostrich egg weighs about 3 pounds (1,360 g); it would take roughly 40 minutes to hard-boil an ostrich egg. Gourmets report that quail eggs are among the most delicious in the world. The eggs are small and fine (about 1/5 the weight of a chicken's egg), with richly speckled shells that range in color from dark brown to blue or white.
The nutritional content is comparable to that of chicken eggs, with flavor that is comparable or perhaps more delicate. Quail eggs are associated with gourmet cuisine. Some people who are allergic to chicken eggs find that they can tolerate quail eggs.
“Those falling leaves drift by your window, those autumn leaves of red and gold….” So goes the old song. Here in Colorado, those autumn leaves are mostly gold: aspens and cottonwoods. For red, maples are the best but there are none of those in our native flora. Nonetheless, few sights equal groves of golden aspens against crisp blue skies in our mountains. In any event, why the beautiful colors as autumn arrives? And, why are those leaves falling? Botanists have studied for years to understand the color changes in trees and shrubs in autumn. It turns out that three factors are involved: pigments in the leaves, the increasing length of the nights, and changes in the weather. Among these, the increasing length of the nights principally regulates both the color change and leaf fall. As days grow shorter and nights grow longer and cooler, biochemical processes begin in leaves that change their colors from green to gold, orange, or red. There are three pigments in leaves: chlorophyll, carotenoids, and anthocyanins. Chlorophyll gives leaves their basic green color and permits photosynthesis, the chemical reaction that enables them to use sunlight to manufacture sugars for their food. Carotenoids produce the yellow and orange colors familiar in flowers, fruits, and vegetables. Anthocyanins produce the red and purple colors of flowers and fruits. Different species have different proportions of carotenoids and anthocyanins, but chlorophyll predominates, and during the growing season, its green masks other pigments. When the season begins to change, chlorophyll production slows and finally stops. Then the carotenoids and anthocyanins are unmasked and the leaves change to characteristic autumn colors. Weather conditions do have an influence. When a succession of warm, sunny days is followed by cool, crisp (but not freezing) nights, we can expect the most spectacular color displays. This year, we experienced more spring rain than in the past several years, but it has had little effect on the fall colors. However, rain (and in the mountains, snow) followed by winds in the early part of the fall this year unfortunately knocked many of the golden leaves from the aspens. Until recently, the days in town were warm enough to keep chlorophyll in the leaves, but a nice change should come soon if freezing nights do not interfere. In the temperate zone, leaves fall from trees in autumn to ensure the continued survival of the trees. Leaves of deciduous trees (those that shed their leaves each autumn) are thin and fragile and contain watery sap that freezes readily. In response to the gradually declining intensity of sunlight in early autumn, leaves begin the process leading up to their falling from trees. Veins that carry fluids into and out of the leaves gradually close off as a layer of cells forms at the base of each leaf. Once the separation layer is complete and connecting tissues are sealed off, the leaf will fall. Thus, the tree can endure through the winter, its vital fluids protected within the trunk and branches, and produce new leaves next spring. Of course, not all trees shed their leaves in autumn. Evergreens (pines, spruces, firs) retain their leaves, which we know as needles. Those needles have protective waxy coatings and the fluid inside their cells contains substances that resist freezing. The needle-like leaves of evergreens can withstand all but the most severe of winter conditions. Evergreen needles survive for some years but eventually fall because of old age and are replaced by young ones.
Doug Nichols was a Scientist Emeritus with the U.S. Geological Survey and a Research Associate with the Denver Museum of Nature & Science. He was a resident of Berthoud. We mourn Doug’s untimely passing in Jan. 2010.
Use a toothbrush with soft bristles and a small strip of fluoride toothpaste. When you brush your teeth, move the brush in small circular motions to reach food particles that may be under your gum line. Hold the toothbrush at an angle and brush slowly and carefully, covering all areas between teeth and the surface of each tooth. It will take you several minutes to thoroughly brush your teeth. Brush up on the lower teeth, down on the upper teeth, and brush the outside, inside, and chewing surfaces of all of your front and back teeth. Brush your tongue and the roof of your mouth before you rinse. Brush your teeth four times daily to avoid the accumulation of food particles and plaque: - In the morning after breakfast - After lunch or right after school - After dinner - At bedtime As soon as the bristles start to wear down or fray, replace your toothbrush with a new one. Do not swallow any toothpaste; rinse your mouth thoroughly with water after you finish brushing. It is important to carefully floss and brush daily for optimal oral hygiene. For areas between the teeth that a toothbrush can't reach, dental floss is used to remove food particles and plaque. Dental floss is a thin thread of waxed nylon that is used to reach below the gum line and clean between teeth. It is very important to floss between your teeth every day. Pull a small length of floss from the dispenser. Wrap the ends of the floss tightly around your middle fingers. Guide the floss between all teeth to the gum line, pulling out any food particles or plaque. Unwrap clean floss from around your fingers as you go, so that you have used the floss from beginning to end when you finish. Floss behind all of your back teeth. Floss at night to make sure your teeth are squeaky clean before you go to bed. When you first begin flossing, your gums may bleed a little. If the bleeding does not go away after the first few times, let a staff member know at your next appointment. Tooth Decay Prevention Tooth decay is a progressive disease resulting from the interaction of bacteria that naturally occur on the teeth and sugars in the everyday diet. Sugar causes a reaction in the bacteria, causing them to produce acids that break down the minerals in teeth, forming a cavity. Dentists remove the decay and fill the tooth using a variety of fillings, restoring the tooth to a healthy state. Nerve damage can result from severe decay and may require a crown (a crown is like a large filling that can cap a tooth, making it stronger or covering it). Avoiding unnecessary decay simply requires strict adherence to a dental hygiene regimen: brushing and flossing twice a day, regular dental checkups, diet control and fluoride treatment. Practicing good hygiene avoids unhealthy teeth and costly treatment. The grooves and depressions that form the chewing surfaces of the back teeth are extremely difficult (if not impossible) to clean of bacteria and food. As the bacteria react with the food, acids form and break down the tooth enamel, causing cavities. Recent studies indicate that 88 percent of total cavities in American school children are caused this way. Tooth sealants protect these susceptible areas by sealing the grooves and depressions, preventing bacteria and food particles from residing in these areas. Sealant material is a resin typically applied to the back teeth, molars and premolars and areas prone to cavities. It lasts for several years but needs to be checked during regular appointments.
Fluoride is a substance that helps teeth become stronger and resistant to decay. Regularly drinking fluoridated water, along with regular brushing and flossing, significantly reduces the risk of cavities. Dentists can evaluate the level of fluoride in a primary drinking water source and recommend fluoride supplements (usually in tablets or drops), if necessary. Sucking is a natural reflex that relaxes and comforts babies and toddlers. Children usually cease thumb sucking when the permanent front teeth are ready to erupt. Typically, children stop between the ages of 2 and 4 years. Thumb sucking that persists beyond the eruption of primary teeth can cause improper growth of the mouth and misalignment of the teeth. If you notice prolonged and/or vigorous thumb sucking behavior in your child, talk to your dentist. Here are some ways to help your child outgrow thumb sucking: - Don't scold a child when they exhibit thumb sucking behavior; instead, praise them when they don't thumb suck. - Focus on eliminating the cause of anxiety; thumb sucking is a comfort device that helps children cope with stress or discomfort. - Praise them when they refrain from the habit during difficult periods. - Place a bandage on the thumb or a sock on their hand at night. Caries, or tooth decay, is a preventable disease. While caries might not endanger your life, they may negatively impact your quality of life. When your teeth and gums are consistently exposed to large amounts of starches and sugars, acids may form that begin to eat away at tooth enamel. Carbohydrate-rich foods such as candy, cookies, soft drinks and even fruit juices leave deposits on your teeth. Those deposits bond with the bacteria that normally survive in your mouth and form plaque. The combination of deposits and plaque forms acids that can damage the mineral structure of teeth, with tooth decay resulting. Your teeth expand and contract in reaction to changes in temperature. Hot and cold food and beverages can cause pain or irritation to people with sensitive teeth. Over time, tooth enamel can be worn down, gums may recede or teeth may develop microscopic cracks, exposing the interior of the tooth and irritating nerve endings. Just breathing cold air can be painful for those with extremely sensitive teeth. Gum, or periodontal, disease can cause inflammation, tooth loss and bone damage. Gum disease begins with a sticky film of bacteria called plaque. Gums in the early stage of disease, or gingivitis, can bleed easily and become red and swollen. As the disease progresses to periodontitis, teeth may fall out or need to be removed by a dentist. Gum disease is highly preventable and can usually be avoided by daily brushing and flossing. One indicator of gum disease is consistent bad breath or a bad taste in the mouth. Bad Breath (Halitosis) Daily brushing and flossing helps to prevent the buildup of food particles, plaque and bacteria in your mouth. Food particles left in the mouth deteriorate and cause bad breath. While certain foods, such as garlic or anchovies, may create temporary bad breath, consistent bad breath may be a sign of gum disease or another dental problem. Canker sores (aphthous ulcers) are small sores inside the mouth that often recur. Generally lasting one or two weeks, the duration of canker sores can be reduced by the use of antimicrobial mouthwashes or topical agents. The canker sore has a white or gray base surrounded by a red border.
A bite that does not meet properly (a malocclusion) can be inherited, or some types may be acquired. Some causes of malocclusion include missing or extra teeth, crowded teeth or misaligned jaws. Accidents or developmental issues, such as finger or thumb sucking over an extended period of time, may cause malocclusions. Periodontal simply means "the tissue around the teeth." Periodontists specialize in the treatment and surgery of this area, which is often characterized by gum disease. Plaque is the most common element causing gum disease. Unfortunately, periodontal-related problems are often discovered after they have persisted for an extended period of time. Proper oral hygiene, daily dental care and regular dental checkups will minimize the risk of gum disease. Gum disease ranges from mild (gingivitis) to moderate (periodontitis) to severe (advanced periodontitis). Treatments are available for every case of gum disease. Common problems associated with gum disease: - "Long" teeth (receding gum lines expose the root portions of your teeth) - Discolored or deteriorating tooth structure - Gum depressions (holes in between the teeth in the gum tissue) - Infected gum line (discoloration or inflammation of the gum tissue) - Tooth loss or tooth movement The effects of gum disease can be damaging to your dental health. However, through proper preventive care and oral hygiene, you can avoid problems associated with gum disease. Please contact our office for a periodontal evaluation.
Antibiotic resistance is going to be one of the most pressing threats in the world. Antibiotic resistance has been linked with overuse of antibiotics in human medicine and in the production of livestock. There have been several reports over the past few years of researchers finding that wild animals are harbouring bacteria that are antibiotic resistant. Resistance is not only occurring in the man-made world, but far away from any civilisation. Dr Michelle Power, from Macquarie University, said “It is worrying that we are seeing antibiotic resistance in bacteria of wild animals that have never been treated with antibiotics. Resistance genes from bacteria in humans and domestic animals are being spread through the environment to the naturally occurring bacteria of those wild animals.” Dr Power found antibiotic resistance genes in Australian wildlife, including captive sea lions and rock wallabies, and penguins of Sydney Harbour. She speculated that resistance to antibiotics could be occurring through the use of mobile genetic elements called integrons. Integrons are capable of passing the genes, and thus the ability to be resistant to some antibiotics, to different species. Kathleen Alexander, a disease ecologist at Virginia Tech in the United States, researched E. coli, a bacterium that can be found in food and in the intestines of humans and animals. The research studied the spread of antibiotic resistance in humans, domestic animals and wildlife in Chobe National Park and in two villages in northern Botswana. It was found that forty-one per cent of the faecal samples that were taken and tested from 18 wild species contained E. coli resistant to one or two of the ten antibiotics tested. Water seems to be a common factor in the spread of antibiotic resistance, as animals associated with water, such as hippos, waterbuck and crocodiles, were found to be resistant to more antibiotics than other species tested. Species that were resistant to more than one antibiotic tended to be animals also found in urbanised areas, such as baboons. However, the researchers warned that antibiotic resistance could be affecting whole food chains as it accumulates at each food level, peaking in the apex predator. Thus, carnivore species may be severely affected. “Alarmingly, our research identifies widespread resistance in wildlife to several first-line antimicrobials used in human medicine,” Alexander said. “There is a need to be much more aggressive about controlling the spread of antibiotic resistance,” she continued. “We can harness life history diversity in wildlife communities to identify where contact with resistant microbes might occur in the environment.” Howler monkeys, spider monkeys, tapirs, jaguars, a puma, a dwarf leopard, and jaguarundis were found to have bacteria resistant to antibiotics by wildlife biologist Jurgi Cristóbal-Azkarate and a team of researchers from Cambridge, the University of Washington, and Fundación Lusara in Mexico City. It is likely that animals are coming into contact with human or animal waste being carried in water systems along with the resistant bacteria. Researchers are concerned that the resistant bacteria they have found could mutate further into ‘superbugs’ – bacteria that are “difficult, if not impossible,” to treat, said Tony Goldberg, an epidemiologist at the University of Wisconsin. Resistance is being found everywhere.
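To illustrate how figures like the forty-one per cent above are typically derived, here is a minimal Python sketch with entirely hypothetical resistance profiles (invented sample IDs and drug names, not data from any of the studies mentioned); it tallies the share of samples resistant to at least one antibiotic on a panel and flags multi-drug resistance.

```python
# Hypothetical faecal-sample resistance profiles: sample id -> antibiotics resisted.
# The species, drugs, and panel size are illustrative only, NOT the studies' data.
panel_size = 10
samples = {
    "hippo_01": {"ampicillin", "tetracycline"},
    "hippo_02": {"ampicillin"},
    "baboon_01": {"ampicillin", "tetracycline", "sulfamethoxazole"},
    "kudu_01": set(),
    "impala_01": set(),
    "crocodile_01": {"tetracycline"},
    "waterbuck_01": {"ampicillin", "streptomycin"},
}

resistant = [s for s, drugs in samples.items() if drugs]          # resistant to >= 1 drug
multi = [s for s, drugs in samples.items() if len(drugs) > 1]      # multi-drug resistant

print(f"Resistant to at least 1 of {panel_size} antibiotics: "
      f"{len(resistant)}/{len(samples)} ({len(resistant) / len(samples):.0%})")
print("Multi-drug resistant samples:", sorted(multi))
```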
Even in remote areas of the world, resistance to the newest and most powerful antibiotics that have been made is being found. Questions need to be asked as to how wildlife are developing resistance to antibiotics even when there is little contact with civilisation, and whether the resistance is harming efforts to conserve some species. This research supports the proposal of the One Health Initiative, whereby wildlife, livestock and humans are linked and require synergism between physicians, veterinarians and scientists to overcome challenges in the environment and human civilisation.
Washington, April 22 (IANS) Most of the cosmic rays that we detect at the Earth originated relatively recently in nearby clusters of massive stars, according to researchers. After analysing results from NASA’s Advanced Composition Explorer (ACE) spacecraft, the research team determined the source of these cosmic rays by observing a very rare type of cosmic ray that acts like a tiny timer, limiting the distance the source can be from the Earth. “Before these observations, we didn’t know if this radiation was created a long time ago and far, far away, or relatively recently and nearby,” said Eric Christian of NASA’s Goddard Space Flight Center in Greenbelt, Maryland. The Earth’s atmosphere and magnetic field shield us from less-energetic cosmic rays, which are the most common. However, cosmic rays will present a hazard to unprotected astronauts traveling beyond the Earth’s magnetic field because they can act like microscopic bullets, damaging structures and breaking apart molecules in living cells. NASA is currently researching ways to reduce or mitigate the effects of cosmic radiation to protect astronauts travelling to Mars. The galactic cosmic rays detected by ACE contain a radioactive form of iron called Iron-60 (60Fe), which is what allowed the team to estimate the age of the cosmic rays and the distance to their source. Iron-60 is produced inside massive stars and scattered into space when they explode as supernovae. Some 60Fe in the debris from the destroyed star is accelerated to cosmic-ray speed when another nearby massive star in the cluster explodes and its shock wave collides with the remnants of the earlier stellar explosion. “Our detection of radioactive cosmic rays is a smoking gun indicating that there has likely been more than one supernova in the last few million years in our neighbourhood of the Galaxy,” said Robert Binns from Washington University, St. Louis. ACE was launched on August 25, 1997 to a point 900,000 miles away between the Earth and the Sun, where it has acted as a sentinel, detecting space radiation from solar storms, the galaxy and beyond. The research was published in the journal Science.
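The "tiny timer" works because radioactive 60Fe decays away on a known timescale, so any 60Fe still arriving at Earth cannot have been travelling for too long. The article does not quote the half-life; the short Python sketch below assumes the commonly cited value of roughly 2.6 million years and evaluates the standard decay law to show how quickly the isotope disappears.

```python
# Fraction of 60Fe surviving after t years, assuming a half-life of ~2.6 million
# years (a literature value; the article itself does not state it).
HALF_LIFE_YEARS = 2.6e6

def surviving_fraction(t_years, half_life=HALF_LIFE_YEARS):
    """Standard radioactive decay law: N(t)/N0 = 0.5 ** (t / half_life)."""
    return 0.5 ** (t_years / half_life)

for t in (1e6, 2.6e6, 10e6, 26e6):
    print(f"after {t / 1e6:>4.1f} Myr: {surviving_fraction(t):.4f} of the 60Fe remains")
```

After ten half-lives (about 26 million years) only about a thousandth of the original 60Fe is left, which is why detecting it at all points to a source that is both recent and relatively nearby.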
Tetanus, Diphtheria, and Pertussis (Tdap) Adolescent Vaccination Recommendation: All adolescents 11-18 years of age should get a Tdap booster once, followed by the Td booster every 10 years. About tetanus, diphtheria, and pertussis (whooping cough) These three diseases are all caused by bacteria. Diphtheria and pertussis are spread from person to person. Tetanus bacteria live in soil and dirt; the bacteria enter the body through cuts, scratches, or wounds. Tdap vaccine protects against all three. Adolescents who have already received a booster dose of Td are encouraged to get a single dose of Tdap as well, for protection against pertussis. Pertussis, also known as whooping cough, is a serious infection that causes coughing spells so severe that it can be hard to breathe. The disease can even lead to cracked ribs, pneumonia, or hospitalization. Protection from the pertussis vaccine that is given in early childhood wears off, so adolescents can get whooping cough. The illness is usually mild in adolescents (they may never know they had it), but adolescents are common transmitters of the infection to infants, who are at the highest risk of death. In fact, a whooping cough outbreak that occurred in California in 2010 was responsible for the deaths of 10 infants and the largest number of cases in California in nearly 50 years. Diphtheria is rare in the US; however, it still exists in other countries and can pose a serious threat to any American not fully immunized who travels abroad or who has contact with infected foreigners in the US. Diphtheria causes a thick covering in the back of the throat. It can lead to breathing problems, paralysis, heart failure, and even death. Tetanus, sometimes called “lockjaw,” is an infection of the nervous system. It causes severe muscle spasms that can lead, among other things, to “locking” of the jaw so the patient cannot open his/her mouth or swallow.
Coal is a sedimentary rock made from the compressed remains of ancient swamp plants. It is a fossil fuel widely used for heat and electricity in the US and abroad. Although it provides a seemingly cheap energy source, coal harms human health at every stage of its life cycle – except when it is left in the ground. Where does coal come from? Coal seams are abundant in the US. Wyoming leads the nation in coal production, accounting for approximately 40% of the coal that is mined in the US. Other top-producing coal states are West Virginia, Kentucky, Pennsylvania, and Montana. Coal is mined in 26 states, and total US production was just over 1,085 million short tons in 2010. Coal is found worldwide. The top 5 coal-producing countries are China, the USA, India, Australia, and South Africa. How much coal do we use? Based on preliminary data and estimates for the first quarter of 2013, U.S. coal exports, which had been steadily growing since 2009 on an annual basis, were down 1.3 million short tons compared with the same period in 2012. Coal plants generated 42% of the electricity produced in the US in 2011. Coal provides a larger share of our electricity supply than any other single source. In recent years, the share of coal-powered electricity has been declining, while that of natural gas has been on the rise–in 2011, natural gas provided about 25% of the electricity in the US. That share is expected to rise in coming years due to falling prices of natural gas and other factors. Coal and mercury Mercury is a naturally occurring element present in geologic formations, including coal. When coal is burned in power plants, this toxic element is released into the atmosphere. From there, it can travel hundreds of miles in clouds and fall to the earth in the form of rain. Through this process, mercury enters oceans, rivers, and lakes, and their corresponding aquatic ecosystems. Mercury accumulates in fish, which are then consumed by people. Mercury is a potent neurotoxin that damages the brain, heart, kidneys, and lungs. It is estimated that 1 in 6 women of childbearing age in the US has blood mercury levels that could damage the developing brain of her fetus. Coal-fired power plants are the largest source of airborne mercury emissions in the US. The bulk of this pollution could be eliminated with widely available pollution control devices. The societal cost of mercury pollution in fish has been estimated by researchers at Mount Sinai School of Medicine to be $8.7 billion annually due to lost IQ and productivity. Coal and other health effects In addition to mercury, coal combustion releases a slew of dangerous pollutants into the air, including sulfur dioxide, nitrogen dioxide, and particulate matter. Coal-fired power plants contribute to the formation of ozone, which, in the presence of heat and sunlight, is created by chemical reactions between nitrogen oxides (NOx) and volatile organic compounds (VOCs). These pollutants can travel hundreds of miles, crossing state lines and damaging air quality far from their original source. Exposure to these pollutants in the ambient air at levels common in the US has been linked to significant health problems: asthma attacks, asthma development, chronic obstructive pulmonary disease, lung cancer, stroke, heart disease, and heart attacks. Many of these pollutants can be reduced at the smokestack through the installation of modern, readily available pollution control technology.
Coal and climate change Coal combustion produces more air pollution, and more planet-warming carbon dioxide emissions in particular, than other fossil fuel energy sources such as natural gas. It is a “carbon-intensive” fuel, accounting for approximately 20% of all greenhouse gas emissions worldwide. In the US, coal accounts for 27% of all greenhouse gas emissions. This means that coal combustion for electricity is a major driver of US greenhouse gas emissions. Limiting carbon dioxide emissions from coal plants is key to reducing our global contribution to greenhouse gases. EPA’s proposed Carbon Rule uses provisions of the Clean Air Act to limit carbon dioxide emissions from power plants in the future. Coal ash: a toxic legacy After coal is burned, a waste product laced with toxic chemicals remains. Each year, 140 million tons of this toxic waste, or coal ash, is produced. It is disposed of in ponds, landfills, and abandoned mines. Coal ash contains neurotoxic, carcinogenic, and otherwise poisonous substances such as arsenic, lead, and hexavalent chromium. It is not regulated by any federal agency, and state regulations governing its disposal are lax or nonexistent. Inadequately secured coal ash disposal sites can leak or spill, endangering surrounding communities and contaminating groundwater. In 2008, 525 million gallons of toxic coal ash sludge spilled into the Tennessee River, drawing national attention to the issue of improperly secured waste ponds. The spills continue. In 2011, for example, coal ash broke through an impoundment and spilled directly into Lake Michigan from a power plant in Wisconsin. The EPA has identified over 600 coal ash disposal sites across the country. For a map of some of these sites, see here. For a map of disposal sites known to have had leaks, spills, or groundwater contamination problems, see here. Black lung: a disease on the rise In addition to pollution resulting from coal combustion and coal ash disposal, coal mining also leads to serious health problems among miners. “Black lung” is the infamous coal miner’s disease, caused by inhaling coal dust, which leads to lung tissue scarring. Technology and prevention strategies have reduced the numbers of miners suffering from this incurable disease in the past four decades. However, recent research points to an increase in rates of the disease, due to miners working longer hours, changes in the composition of rock being mined, and increases in production. Where is coal burned? There are 1,400 coal-fired electricity-generating units at more than 600 plants across the country. The Sierra Club maps the location of about 500 coal plants across the country. Coal pollution does not respect state boundaries, however. It can travel hundreds of miles, creating acid rain and smog and endangering the health of residents in other states. That is why the EPA is trying to limit the sulfur dioxide and nitrogen oxide pollution from coal plants through the Cross-State Air Pollution Rule. In addition to the hundreds of coal plants currently in operation, there are dozens more in the planning stages. Find out if there’s one planned in your state using this map from the Sierra Club. What is clean coal? “Clean coal” refers to technologies in development that could reduce dangerous carbon dioxide emissions by capturing them and storing them either in the earth or in the ocean. Geologic storage involves injecting the captured carbon dioxide a mile or more below the earth’s surface in deep saline reservoirs.
Ocean storage involves injecting liquid carbon dioxide hundreds of meters beneath the surface of the ocean, where it theoretically would be absorbed into the water. Both strategies are in the planning and testing phases. Their safety, permanence, efficiency, and resulting ecological impacts are uncertain.
Common sense is based on ordinary personal experiences (that are then shared). Although often neglected in scholarly writings, it is the most widely spread way of acquiring knowledge, skills and understanding. Common sense essentially uses heuristic methods that enable drawing intuitive insights or tacit knowledge from our experience. Because of such a nature, common sense is best expressed through narratives (myths, stories, articles, movies), although its vocal supporters sometimes come from other fields (e.g. the philosophers Thomas Reid and George E. Moore). SOME MISCONCEPTIONS ABOUT COMMON SENSE Common sense is less valid than other approaches - the success of science in particular has often led to a derogatory attitude towards common sense (sometimes labelled 'folk psychology'). To show its apparent inferiority, the examples of people believing in the past that the Sun goes around the Earth or that the Earth is flat are often brought up. Common sense, indeed, can sometimes be wrong, but this cannot justify diminishing its value and importance. Most of the knowledge gained in such a way has at least a pragmatic validity. Other approaches, when they go against common sense, more often than not eventually appear to be mistaken. For example, during the reign of behavioural psychology many parents were indoctrinated to bring up children in the 'scientific' manner, which appeared to be, at least in some instances, damaging for children and parents alike. Eventually, such ways of upbringing were abandoned and common sense prevailed again (even the wife of John Watson, who founded behaviourism, admitted that she was not a good behaviourist in this respect). Common sense is simplistic - in fact, common sense is probably the most intricate approach of all. This is because it deals with non-linear, complex systems. Linear systems may be more precise, but they are inevitably simplifications and therefore not fully adequate in many situations. Common sense is relativistic - common sense may, indeed, vary from individual to individual or from culture to culture to some extent, but it is often forgotten that what people share is much greater than what they do not. Common sense, stripped of its cultural idiosyncrasies, can be surprisingly universal. The differences are often the result of an adaptation to diverse (historical or present) circumstances.
A Penguin Conveyor Belt in the South Atlantic In our recent paper in Proceedings of the Royal Society B: Biological Sciences, Dr. Daniel Thomas and I attempted to unravel the biogeography of the extinct penguin species of Africa – that is, to figure out where they came from. There are two main hypotheses for the history of Africa’s penguin fauna. One is that they represent an endemic radiation. In this scenario, a single founding population of penguins (perhaps just a few individuals) arrives in Africa to find it a wide open swath of penguin paradise. With plenty of prey available in the cold waters of the Benguela Current and ample safe rocky islands to nest on, these colonizing individuals could have rapidly multiplied. Over time, the original species could have split off into multiple species as selective pressures pushed for different traits. Endemic radiations are well-documented in birds, most famously in the case of Darwin’s Galapagos finches. In that instance, a single species of finch split into more than a dozen distinct species over the course of a few million years. Arriving in a near ecological vacuum, the founding finches evolved a range of different beak shapes and behaviors to exploit different food sources. It is not too hard to imagine the same thing happening as penguins arrived on a continent new to them, without any direct competitors. The second hypothesis is that the extinct penguin species arrived separately, in multiple waves of dispersal. Each species would thus have a separate ancestor on some other continent. Waves of dispersal are also common in island avifaunas. An amazing example is Hawaii’s assemblage of bizarre ducks and geese, now tragically almost entirely extinct. One of the few surviving species is the Nēnē, a descendant of wayward Canada Geese that became stranded on the islands. In the fossil record we find some stranger examples, including the giant “toothed” Moa-Nalos. These thundering flightless birds weighed up to 15 pounds and evolved from Mallard Ducks that gave up flight in favor of large size. Another intriguing example is Talpanas, a litter-foraging duck that was probably nearly blind and nocturnal, guiding itself with its powerful sense of smell. This species evolved from a Ruddy Duck-like ancestor. In order to test which hypothesis was more likely, we constructed an evolutionary tree of the South African penguins and fossil species from elsewhere. What we found is that two of the extinct species, Inguza predemersus and Nucleornis insolitus, were NOT close relatives of the living Blackfooted Penguin (the only species that breeds in Africa today). This rules them out as being either ancestors of the Blackfooted Penguin or part of an endemic radiation. In fact, Inguza predemersus and Nucleornis insolitus were not even particularly close relatives of one another, and so must have arrived in Africa separately rather than splitting off from a single ancestor. The waves of dispersal hypothesis wins out. At least three separate dispersals must have occurred. There may have been even more, because two other species of extinct penguins are known from Africa’s fossil record. Unfortunately, we know too little about them to guess where they belong on the evolutionary tree. If we later find out they are also related to other non-African penguins, we could have up to five dispersals on our hands. So, how did all these waves of penguins make it to Africa? It seems like ocean currents played a big role.
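The logic of that test can be captured in a few lines of code. The Python sketch below uses a toy tree topology (invented for illustration, not the published phylogeny, but consistent with the result described above: neither Inguza nor Nucleornis is the closest relative of the Blackfooted Penguin, and they are not each other's closest relatives) and simply checks whether the African taxa form a single clade, which is what the endemic-radiation hypothesis would predict.

```python
# Toy phylogeny as nested tuples; leaves are species names. The topology is
# invented for illustration only - it is not the published tree - but it reflects
# the key result: the two fossil taxa are not each other's closest relatives,
# and neither is sister to Spheniscus demersus (the Blackfooted Penguin).
toy_tree = (
    ("Inguza_predemersus", ("Eudyptula_minor", "Spheniscus_demersus")),
    ("Nucleornis_insolitus", ("Pygoscelis_adeliae", "Aptenodytes_forsteri")),
)

african = {"Inguza_predemersus", "Nucleornis_insolitus", "Spheniscus_demersus"}

def leaves(node):
    """Collect all leaf names under a node."""
    if isinstance(node, str):
        return {node}
    out = set()
    for child in node:
        out |= leaves(child)
    return out

def smallest_clade_containing(node, taxa):
    """Return the leaf set of the smallest clade containing all given taxa."""
    for child in (node if not isinstance(node, str) else ()):
        if taxa <= leaves(child):
            return smallest_clade_containing(child, taxa)
    return leaves(node)

clade = smallest_clade_containing(toy_tree, african)
print("African taxa monophyletic (single dispersal)?", clade == african)
print("Smallest clade containing them:", sorted(clade))
```

On this toy tree the African taxa are not monophyletic, so more than one dispersal is required, which is the pattern the analysis recovered.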
One major circulation system in the southern oceans is the South Atlantic Gyre. This system of currents creates a huge counterclockwise flow that may have served as a “penguin conveyor belt” from South America to South Africa. Penguins have been in South America for at least 40 million years, and this continent was identified as the most likely area of origin for the ancestors of the African penguins in our analysis. One possible scenario involves penguins from the South American coast getting caught up in the Brazil Current while foraging out at sea, and swept away from the coast. From here they could become entrained in the east-flowing South Atlantic Current and after a long journey (penguins can survive at sea for months at a time) the Benguela Current could have swept them back up the coast of Africa to dry land. We propose that this type of current-aided dispersal happened many times, and that currents are the main reason why Africa has penguins today, while Madagascar, which is surrounded by unfavorable currents pushing southward and away from the coast, does not.
Technical communication is the process of creating, designing, and transmitting technical information so that people can understand it easily and use it effectively and efficiently. This course will teach students the established basics for effective written composition in the business world and introduce them to such types of communication as processes, description of mechanisms, proposals, and reports. Students utilize usage exercises, quizzes, and a final usage exam to reinforce sentence clarity and effectiveness. Each student will receive skill-appropriate, personal feedback and instruction from an experienced, qualified writing instructor.
Carbocation rearrangements are extremely common in organic chemistry reactions and are defined as the movement of a carbocation from an unstable state to a more stable state through the use of various structural reorganizational "shifts" within the molecule. Once the carbocation has shifted over to a different carbon, we can say that there is a structural isomer of the initial molecule. However, this phenomenon is not as simple as it sounds.

Whenever alcohols are subject to transformation into various carbocations, the carbocations are subject to a phenomenon known as carbocation rearrangement. A carbocation, in brief, is a carbon that holds the positive charge in the molecule; it is attached to three other groups and bears a sextet of electrons rather than an octet. We also see carbocation rearrangements in reactions that do not involve alcohols, but those require more involved explanations than the two listed below.

There are two types of rearrangements: the hydride shift and the alkyl shift. These rearrangements occur in many types of carbocations. Once rearranged, the molecules can also undergo further unimolecular substitution (SN1) or unimolecular elimination (E1). Most of the time, though, we see either a simple or complex mixture of products. Before carbocation rearrangement we might expect two products, but once rearrangement occurs, one product dominates.

Whenever a nucleophile attacks such molecules, we typically see two products: a major product and a minor product. The major product is typically the rearranged product that is more substituted (i.e., more stable). The minor product, in contrast, is typically the non-rearranged product that is less substituted (i.e., less stable).

The reaction: We see that the formed carbocations can undergo rearrangements called hydride shifts. This means that a hydrogen, together with its two bonding electrons, moves over to the neighboring carbocation-bearing carbon. We typically see hydride shifts in the reaction of an alcohol with hydrogen halides, which include HBr, HCl, and HI (HF is typically not used). Below is an example of a reaction between an alcohol and hydrogen chloride:

[Figure: reaction of an alcohol with hydrogen chloride. Color key: GREEN (Cl) = nucleophile; BLUE (OH) = leaving group; ORANGE (H) = hydride shift proton; RED (H) = remaining proton.]

The alcohol portion (-OH) has been substituted by the nucleophilic Cl atom. However, it is not a direct substitution of the OH group as seen in SN2 reactions. In this SN1 reaction, the leaving group, -OH, is first protonated by the acid to produce an alkyloxonium ion, which then departs as water, forming a carbocation on Carbon #3. Before the Cl atom attacks, the hydrogen atom attached to the carbon directly adjacent to the original carbon (Carbon #2, the position that gives the more stable carbocation) can undergo a hydride shift: the hydrogen and the carbocation formally switch positions. The Cl atom can now attack the carbocation, forming the more stable product because of hyperconjugation. The carbocation, in this case, is most stable because it sits on the tertiary carbon (a carbon attached to three other carbons). However, we can still see small amounts of the minor, less stable product. The mechanism for the hydride shift occurs in multiple steps that include various intermediates and transition states.
Below is the mechanism for the given reaction above:

Hydration of Alkenes: Hydride Shift
In a more complex case, when alkenes undergo hydration, we also observe hydride shifts. Below is the reaction of 3-methyl-1-butene with H3O+, which furnishes 2-methyl-2-butanol: Once again, we see multiple products; in this case, two minor products and one major product. We observe the major product because the -OH substituent is attached to the more substituted carbon. When the reactant undergoes hydration, the proton adds to the terminal carbon, so the carbocation forms on carbon #2. A hydride shift now occurs when the hydrogen on the adjacent carbon formally switches places with the carbocation. The resulting, more stable carbocation (stabilized by hyperconjugation) is now ready to be attacked by H2O to furnish an alkyloxonium ion. In the final step, another water molecule removes the extra proton from the alkyloxonium ion to furnish the alcohol. We see this mechanism below:

Not all carbocations have a suitable hydrogen atom on an adjacent (secondary or tertiary) carbon available for rearrangement. In this case, the reaction can undergo a different mode of rearrangement known as the alkyl shift (or alkyl group migration). The alkyl shift acts very similarly to the hydride shift. Instead of a hydrogen shifting toward the carbocation, an alkyl group shifts. The shifting group carries its electron pair with it to form a bond to the neighboring or adjacent carbocation. The shifted alkyl group and the positive charge of the carbocation switch positions on the molecule. Tertiary carbocations react much faster than secondary carbocations. We see an alkyl shift from a secondary carbocation to a tertiary carbocation in SN1 reactions:

We observe slight variations and differences between the two reactions. In reaction #1, we have a secondary substrate. This undergoes an alkyl shift because it does not have a suitable hydrogen on the adjacent carbon. Once again, the reaction is similar to the hydride shift; the only difference is that an alkyl group shifts rather than a proton, while the reaction still passes through various intermediate steps to furnish its final product. Reaction #2, on the other hand, undergoes a concerted mechanism; in short, this means that everything happens in one step. This is because a primary carbocation is too unstable to form as an intermediate, and such reactions are relatively difficult processes that require higher temperatures and longer reaction times. After protonation of the alcohol substrate to form the alkyloxonium ion, the water must leave at the same time as the alkyl group shifts from the adjacent carbon, skipping the formation of the unstable primary carbocation.

Carbocation Rearrangements for E1 Reactions
E1 reactions are also affected by the alkyl shift. Once again, we can see both minor and major products. However, we see that the more substituted carbons undergo the effects of E1 reactions and furnish a double bond. See practice problem #4 below for an example, as the properties and effects of carbocation rearrangements in E1 reactions are similar to those of alkyl shifts.

1,3-Hydride and Greater Shifts
Typically, hydride shifts can occur at low temperatures. However, heating a solution of a cation can easily and readily speed the process of rearrangement.
One way to account for a slight barrier is to propose a 1,3-hydride shift interchanging the functionality of two different kinds of methyl groups. Another possibility is a 1,2-hydride shift, which would yield a secondary carbocation intermediate. A further 1,2-hydride shift would then give the more stable, rearranged tertiary cation. More distant hydride shifts have been observed, such as 1,4- and 1,5-hydride shifts, but these rearrangements are too fast to proceed through discrete secondary cation intermediates.

Carbocation rearrangements happen very readily and occur in many organic chemistry reactions, yet we typically neglect this step. Dr. Sarah Lievens, a chemistry professor at the University of California, Davis, once offered various analogies to help her students remember this phenomenon. For hydride shifts: "The new friend (nucleophile) just joined a group (the organic molecule). Because he is new, he only made two new friends. However, the popular kid (the hydrogen) gladly gave up his friends to the new friend so that he could have even more friends. Therefore, everyone won't be as lonely and we can all be friends." This analogy works for alkyl shifts in conjunction with hydride shifts as well.

- Vogel, Pierre. Carbocation Chemistry. Amsterdam: Elsevier Science Publishers B.V., 1985.
- Olah, George A. and Prakash, G.K. Surya. Carbocation Chemistry. New Jersey: John Wiley & Sons, Inc., 2004.
- Vollhardt, K. Peter C. and Schore, Neil E. Organic Chemistry: Structure and Function. New York: Bleyer, Brennan, 2007.

Answers to Practice Problems
- Jeffrey Ma
Protein biosynthesis is the process in which cells build proteins. The term is sometimes used to refer only to protein translation, but more often it refers to a multi-step process, beginning with transcription and ending with translation. Protein biosynthesis, although very similar, differs between prokaryotes and eukaryotes.

Main article: Transcription
Transcription only requires one strand of the DNA double helix. This is called the coding strand. Transcription starts with initiation. RNA polymerase, an enzyme, binds to a specific region on the DNA, marking the starting point, called the promoter. As the RNA polymerase binds to the promoter, the DNA strands begin to unwind. The second process is known as elongation. As the RNA polymerase travels along the strand that is opposite to the coding strand (the cell wants a copy of the coding strand, so it copies it from the template strand, which is complementary to the coding strand), it matches corresponding mRNA nucleotides to the DNA. As the polymerase reaches the termination stage, modifications are required for the newly transcribed mRNA to be able to travel to other parts of the cell, including the cytoplasm and endoplasmic reticulum. A 5’ cap is added to the mRNA to protect it from degradation. A poly-A tail is added to the 3’ end for protection and as a template for further processing.

Main article: Translation
During translation, the message of the mRNA is decoded to make proteins. Initiation and elongation occur when the ribosome recognizes the start codon on the mRNA strand and binds to it. The ribosome has sites that allow transfer RNA (tRNA) molecules to bind to the mRNA. Each tRNA carries an anticodon that is used to match the codon on the mRNA, along with a single amino acid. As the ribosome travels down the mRNA one codon at a time, another tRNA attaches to the mRNA at one of the ribosome's sites. The first tRNA is released, but the amino acid that was attached to it is transferred to the amino acid on the second tRNA. This translocation continues, and a long chain of amino acids (a protein) is formed. When the entire unit reaches the stop codon on the mRNA, it falls apart and the newly formed protein is released. This is termination. It is important to know that during this process, many enzymes are used to assist or facilitate the whole procedure.

The events following biosynthesis include protein folding and post-translational modification. During and after synthesis, polypeptide chains often fold to assume their so-called native secondary and tertiary structures. This is known as protein folding. Many proteins undergo post-translational modification. This may include the formation of disulfide bridges or the attachment of any of a number of biochemical functional groups, such as acetate, phosphate, various lipids, and carbohydrates. Enzymes may also remove one or more amino acids from the leading (amino) end of the polypeptide chain, leaving a protein consisting of two polypeptide chains connected by disulfide bonds.
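To make the decoding step concrete, here is a minimal sketch in Python of how a codon sequence maps to a chain of amino acids. The codon table is deliberately partial and the example mRNA string is invented for illustration; a real ribosome recognizes all 64 codons and relies on the enzymatic machinery described above.

```python
# Minimal sketch of translation: reading an mRNA three bases at a time and
# building a chain of amino acids. The codon table is partial (illustrative only).
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GCU": "Ala", "GGA": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Begin at the first AUG (initiation), read one codon per step (elongation),
    and stop at the first stop codon (termination). Assumes a start codon is present."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        amino_acid = CODON_TABLE.get(codon, "???")  # codons missing from the toy table
        if amino_acid == "STOP":
            break  # termination: the ribosome releases the finished chain
        peptide.append(amino_acid)
    return peptide

print(translate("GGAUGUUUGCUGGAAAAUAGCC"))
```

Running the sketch on the sample sequence prints ['Met', 'Phe', 'Ala', 'Gly', 'Lys'], mirroring initiation at AUG, elongation one codon at a time, and termination at the stop codon.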
Qualities of a Good Scientist The qualities of a good scientist may vary to some extent with different specialties. But, every scientist needs to have a good foundation in science classes throughout high school and college, along with a good understanding of math. These basic classes give you a good start toward science careers. - Two of the most common characteristics of scientists are curiosity and patience. Scientists are curious about the world around them, and they yearn to learn what makes everything work. Their inquisitiveness keeps them going ahead to the next project and the next experiment. - They also must have patience to undergo the years of work that might be required to make a discovery in a scientific field. A sense of optimism keeps a scientist performing experiment after experiment, even if most of them fail. Scientists know that failed experiments provide answers as often as successful ones do. Scientists require patience to repeat experiments multiple times to verify results. - Scientists need to be detail oriented, noticing even tiny observations and remembering and recording them. Their minds tend to be analytical, and they can categorize data in an efficient way so it can be recalled later. They usually have facts and hypotheses from several fields and experiments tucked into their memories so that they can be put together in different combinations to answer questions or provide direction for research. - Being open-minded is crucial for successful people in science careers. A good scientist will accept whatever outcome his or her work has and not try to force the results into a preformed opinion. A scientist also has good ethics and will not give false results or shade an experiment to fulfill the expected outcome. He or she will accept the solutions of others, even when they conflict with his or her own. Most people think of science as an uncreative field, but in reality, scientists must be very creative. They ask why something happens or what happens and then devise experiments to answer the question. Their creativity allows them to think outside the box and envision things that cannot be seen. They must be ready to give up old ideas when new ones come along. The best scientists are tenacious and determined. They realize that their life’s work may take decades to reach a conclusion and that they may be starting in the wrong way and need to change course. They also understand that, as they build on the work of scientists from past generations, their own work will likely be proven false by future scientists. Scientists must be able to work as part of a team or to work independently, depending on the need of the project. They must be able to communicate effectively, both in writing and in speaking. People in science careers often work alone, but they also must have good networking skills. Joining and participating in scientific associations strengthen the ties in the scientific community, and scientists can help and support one another. A good scientist has the capability of explaining scientific ideas to a person who is not a scientist. People working in science careers in industry or government must be able to cooperate and to accept management from those in higher positions. They should be able to accept assignments of work rather than going off on their own. The best scientists also have some understanding of other scientific fields. 
Interdisciplinary teams and work are becoming more common as the boundaries between fields such as astronomy and physics or biology and chemistry become less clear. People in science careers must have an outstanding understanding of math, including calculus and statistics. Math skills are often needed to present scientific findings, prove results, and show their importance. People in science careers often need a strong ability to concentrate even in the midst of noise, but they must also have the capability of dreaming about possible directions to try or experiments to perform. Good planning and time management skills are critical. A scientist must be able to plan a course of experiments and then manage the time to perform them. They also must have excellent recording skills, as the details and results of each experiment must be documented.
It’s likely that we all know someone who experiences anxiety, and there’s no doubt that anxiety can be exhausting and can interfere with daily life. For children with autism, anxiety can occur more frequently and can be very intense. Seemingly simple daily activities such as leaving the house, interacting with peers, riding in the car, or taking public transportation can become increasingly difficult and anxiety provoking. In order to help children who may be experiencing anxiety, it is important for parents and teachers to understand anxiety and how it may be affecting children with autism. What are the links between autism and anxiety? It is estimated that 18% of the entire population has some form of anxiety disorder. There have been several studies that show varying results, but it is estimated that between 11% and 84% of people with autism also have an anxiety disorder. It’s not exactly known why these studies have shown such a wide variety of results, but there is some knowledge as to why anxiety may occur in people with autism at a higher rate than for the general population. Overlapping criteria. The characteristics of autism and anxiety can sometimes overlap. Often times, children who have autism and anxiety can display their anxieties through a variety of behaviors. These can include becoming over-stimulated, heavily dependent on schedules, self-injuring, outbursts of emotion, or becoming withdrawn. These behaviors are characteristics of both autism and anxiety, and when an individual has both, the anxiety symptoms can be intensified. Fear of how others are perceiving them. Some people with autism struggle with social skills such as eye contact, conversation, and reading body language or expressions. People with autism may develop anxiety because they fear that others may be criticizing them for their actions or struggles in social situations. They may feel as though they need to monitor their own actions, which can lead to more anxiety within social interactions due to overthinking and overanalyzing their own actions. Types of anxiety and what to look for There are many different types of anxiety. While any form of anxiety can occur in someone with autism, according to the Indiana Resource Center for Autism at Indiana University Bloomington, the most likely to occur are specific phobia, obsessive compulsive disorder, and social anxiety. Specific Phobia. Specific phobia occurs frequently in those who have autism. This type of anxiety is when a person has a fear of a certain object, place, or situation. For example, someone may be afraid of bees and avoids going outside in the summer, or someone may be afraid of toilets, cats, or amusement parks and avoid these places as well. Individuals with specific phobia will often avoid any situation where there is a chance they will encounter their fear, which leads to a very restricted lifestyle. In cases where the child has significant autism, caregivers should be aware of the environment when anxiety occurs. This can help pinpoint what is triggering the anxiety in children who may not be able to communicate their anxieties. Obsessive Compulsive Disorder. Repetitive and obsessive behaviors often characterize this type of anxiety disorder. People with OCD will often feel that if they do not perform a certain activity repeatedly or a certain number of times, something negative will happen. This disorder can interfere with daily life in a number of ways, including obsessive thoughts and compulsive actions that can overtake their life. 
Social anxiety. Social anxiety is seen frequently in children who have autism. This may be because many people with autism struggle with social interactions, which can include eye contact, conversation, social cues, and body language. Social anxiety can present itself in a variety of ways, including avoiding social situations altogether, becoming shaky or sweaty during social situations, or having a racing heartbeat. Five Ways to Help Your Child with Anxiety - Pinpoint what triggers anxiety. Identifying what causes the child’s anxiety can be helpful in identifying ways to help them. If the anxiety trigger is known, you can help the child cope with and possibly overcome their fears. Parents and teachers may encourage the child to engage in situations that are anxiety provoking (but safe), and praise or reward the child when they do so. - Visual Schedules and Transitions. Children with anxiety and autism often struggle to transition between activities at school and during daily life. The struggle to transition between activities can often be intensified if children are transitioning between a high-preference activity and an activity they do not enjoy. To help with this, many children can benefit from a visual schedule, which may include a picture of the activity and a time that the activity will occur. These schedules can help children know what to expect and in turn reduce anxiety levels. It may also be helpful to show the child a picture or video of transitioning smoothly to the next activity before doing so. The video or picture can provide a positive example of a smooth transition, but can also help the child know what is coming up next. - Safe spaces. For children who have anxiety, providing a safe space for them when they are feeling anxious or overwhelmed can be helpful. However, it is important to keep in mind that a safe space should not be used as a regular solution to anxiety. Rather, it should be used only when needed during extreme situations. If a safe space is overused they could become a way to escape daily life and activities in fear of anxiety triggers, which is not the intention. Often, a child who is experiencing social anxiety at school will become overwhelmed while interacting with peers in group settings, walking down a busy hallway, or eating in a noisy lunchroom. If a child is experiencing extreme anxiety from these things, a teacher may create a safe space that can include beanbags, calming games such as some puzzles, stress balls, or relaxing music. Different things will relax different children, so it’s important to keep the child’s individual needs in mind while creating a safe space. - Relaxation techniques. There’s a lot of recent research coming into schools about relaxation techniques such as meditation. Meditation has been shown to help many students reduce their anxiety levels, from test-taking anxiety to anxiety in daily life. However, meditation may not be the right fit for every student who is experiencing anxiety, which is important to keep in mind while searching for anxiety reducing techniques. Stages Learning has put together a helpful sheet on how to introduce and use meditation for children with autism. - Social Narratives Social narratives can be a great way for teachers and parents to show children situations before they happen. A social story can be as simple as a story about walking to the cafeteria or going to a grocery store, but the story should model what events will likely occur during the situation. 
By reading the social story before the event happens, children may feel less anxious about what is going to happen. It’s also an option to create a social story that shows something anxiety provoking and to model a way to overcome it. By modeling this through a social story, children may be more likely to handle the situation in a calm manner. Stages Learning Language Builder Sequencing Cards help children navigate transitions by providing visual cues as to what is going to happen next, such as washing hands, brushing teeth, or what is involved in going to the grocery store. Because children with autism are frequently visual learners, a set of cards indicating next steps can help them understand a social story. Stages Learning has examples of Social Narratives that you can download and use. Anxiety and Depression Association of America (2016). Available: https://www.adaa.org/about-adaa/press-room/facts-statistics. Dubin, A., Lieberman-Betz, R., & Michele Lease, A. (2015). Investigation of Individual Factors Associated with Anxiety in Youth with Autism Spectrum Disorders. Journal of Autism & Developmental Disorders, 45(9). Merrill, Anna. (n.d.). Anxiety and Autism Spectrum Disorders. Indiana University Bloomington, Indiana Resource Center for Autism.
A malocclusion is an incorrect relationship between the maxilla (upper arch) and the mandible (lower arch), or a general misalignment of the teeth. Malocclusions are so common that most individuals experience one, to some degree. The poor alignment of the teeth is thought to be a result of genetic factors combined with poor oral habits, or other factors in the early years. Moderate malocclusion commonly requires treatment by an orthodontist. Orthodontists are dentists who specialize in the treatment of malocclusions and other facial irregularities. The following are the three main classifications of malocclusion:

Class I – The occlusion is typical, but there are spacing or overcrowding problems with the other teeth.

Class II – The malocclusion is an overbite (the upper teeth are positioned further forward than the lower teeth). This can be caused by the protrusion of the anterior teeth or the overlapping of the central teeth by the lateral teeth.

Class III – Prognathism (also known as “underbite”) is a malocclusion caused by the lower teeth being positioned further forward than the upper teeth. An underbite usually occurs when the lower jawbone (mandible) is large or the maxillary bone is short.

Reasons for treating a malocclusion

A severe malocclusion may lead to skeletal disharmony of the lower face. In a more extreme case, the orthodontist may work in combination with a maxillofacial surgeon to reconstruct the jaw. It is never too late to seek treatment for a malocclusion. Children and adults alike have completed orthodontic realignment procedures and have been delighted with the resulting even, straight smile. Here are some of the main reasons to seek orthodontic treatment for a malocclusion:

Reduced risk of tooth decay – A malocclusion often causes an uneven wear pattern on the teeth. The constant wearing of the same teeth can lead to tooth erosion and decay.

Better oral hygiene – A malocclusion can be caused by overcrowding. When too many teeth are competing for too little space, it can be difficult to clean the teeth and gums effectively. It is much easier to clean straight teeth that are properly aligned.

Reduced risk of TMJ disorder – Temporomandibular joint disorder (often referred to simply as TMJ) is thought to be caused by a malocclusion. Headaches, facial pain, and teeth grinding during sleep can all result from excessive pressure on the temporomandibular joint. Realigning the teeth reduces this pressure and can relieve these symptoms.

How is a malocclusion treated?

A malocclusion is usually treated with dental braces. The orthodontist takes panoramic X-rays, conducts visual examinations, and takes bite impressions of the whole mouth before deciding on the best course of treatment. If a malocclusion is obviously caused by overcrowding, the orthodontist may decide an extraction is the only way to create enough space for the realignment. However, in the case of an underbite, crossbite or overbite, there are several different orthodontic appliances available, such as:

Fixed multibracket braces – This type of dental braces consists of brackets cemented to each tooth and an archwire that connects them. The orthodontist adjusts or changes the wire on a regular basis to train the teeth into proper alignment.

Removable devices – There are many non-fixed dental braces available to treat a malocclusion. Retainers, headgear and palate expanders are amongst the most common. Retainers are generally used to hold the teeth in the correct position while the jawbone grows properly around them.
Invisalign® – These dental aligners are removable and invisible to the naked eye. Invisalign® works similarly to fixed dental braces but does not impact the aesthetics of the smile. If you have any questions about malocclusions, please contact our office.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
We started this series discussing the basic ingredients of the Universe: events, spacetime, causality. In the last chapter, we introduced massive objects (and thus, matter), which appear as a generalization of the so-called photon box. As it moves, any object traces a path. Physicists call it worldline. We established that a massive object never moves faster than light, so there are three types of trajectories in Spacetime, which receive adjectives: Time-like, if it moves slower than light (as massive objects do). Light-like, if it moves exactly at the speed of light (as massless particles do). Space-like, if the speed is faster than light (which is impossible for all). From another perspective, understood as a 4-dimensional entity, an object actually is its worldline. Then these distinctions are about different directions that a worldline can take within Spacetime. Descriptions in terms of speed are just easier for us, because we are used to perceiving a flow of time, but it is important to be careful about that. The emergence of Time The flow of time we perceive, pragmatically speaking, depends fundamentally on motion. As mentioned before, time can seemingly flow faster or slower depending on the observer. In fact, the faster a clock moves relative to you, the slower it appears to tick. Taking this to the extreme, a clock moving at the speed of light would appear frozen. This is because particles moving at the speed of light experience no time. Let us explain this by going back to the division of Spacetime into three regions presented in Part 2, which was based on the idea that causality determines the distinction between past and future. Now notice that if you move at the fastest speed possible, nothing can ever reach you, so there is no “past” region. Additionally, you cannot communicate anything anywhere, because by the time your causal influence arrives, you are already there as well. Thus there is no future either. From a massless particle’s perspective, all of Spacetime is space. The familiar smooth flow of time emerges as these massless particles are bound in what we think of as material objects, which move slower than the speed of light. This is why time happens for the atom in a way that it does not for the atom’s massless components. Having mass and experiencing time are fundamentally connected. The photon clock In order to visualize more easily how time emerges and flows faster or slower, let us consider a simpler version of the photon box: the photon clock 1 . Imagine two perfect mirror walls with a single photon bouncing between them. This would act as a clock because the speed of light is the same for all observers and they all agree on events too, such as those instances when the photon bounces off a wall. They can be used to define the clock’s ticks. When the clock is static, the photon bounces up and down in a regular pattern. But if it moves sideways, the photon has to move horizontally as well as vertically, thus covering more distance but at the same speed (because the speed of light is the same always), which makes the clock tick more slowly. From your point of view, that is. Any observer riding along with this clock would see it ticking at the normal rate and bizarrely, see your own static clock ticking more slowly because it is moving relative to them. This is the pragmatic, practical way in which time dilation happens. Indeed, as we already stated in Part 1, time is not a fundamental universal quantity, each observer measures their own time. 
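To put a rough number on how much a moving photon clock appears to slow down, here is a small Python sketch using the standard time-dilation factor, gamma = 1 / sqrt(1 - v^2/c^2). The speeds are illustrative values chosen for this example, not figures from the article.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v: float) -> float:
    """Time-dilation factor for a clock moving at speed v relative to the observer."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Illustrative speeds, expressed as fractions of c (my own example values).
for fraction in (0.1, 0.5, 0.9, 0.99):
    gamma = lorentz_gamma(fraction * C)
    print(f"v = {fraction:.2f} c -> a 1-second tick appears to take {gamma:.3f} s")
```

At everyday speeds gamma is essentially 1, which is why we never notice time dilation; only near the speed of light does the effect become dramatic, which is exactly what the tilted photon path in the moving clock illustrates.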
The Lorentz transformation 2 relates the way time changes from the moving observers’ perspectives. From the 4-dimensional perspective, though, both of the clocks’ worldlines have time-like trajectories in different directions in Spacetime, where no direction is preferred over another. The Lorentz transformation simply places different time and space axes in a mathematical diagram of Spacetime, corresponding to each observer’s subjective distinction of time and space. Notice that the internal photon of the clock still travels in fixed directions in Spacetime (either left or right) due to the invariance of the speed of light, and as it bounces between the walls, these inner paths look like zig-zag trajectories 3. The photon clock is an analogy for something real. We already covered the fact that real matter is comprised of massless light-speed components confined, not by imaginary walls, but by interactions with other particles and force fields. This is an interpretation we can take even for the most elementary components of the atom, which behave like the photon clock. In fact, the most accurate clocks in the world, atomic clocks, are physical realizations of its basic description. The connection between mass and time Just like a photon clock confines a bouncing light-like worldline, any massive object can be seen as an extremely complex ensemble of light-like worldlines confined in equally complex ways. But it is only the ensemble that can travel slower than light, or be still. Its most elementary parts cannot do that and keep traveling at light speed in zig-zag trajectories. Concepts like stillness, mass, and time only make sense when taking together the evolving arrangement of many light-like paths 4, which are a manifestation of an object’s internal machinery of interactions among particles. At each interaction, particles exchange energy, charge and other properties that result in change in those particles. As a result, the configuration of the ensemble changes too. The ordered sequence of causes and effects transmitted by massless particles can be thought of as a series of light-like segments propagating causal connections at exactly the speed of light between infinitesimally nearby bits of the Universe. From this fundamental picture emerges another, which involves massive objects that change over time and transmit signals traveling slower than the speed of light. A note of caution is important, though. We are using an analogy in which the clock ticks become interactions among internal parts of atoms. In doing so, we are extrapolating the validity of these light-like segments into microscopic distances. The zig-zagging light rays description is still a meaningful and useful description of reality, but quantum physics makes it more complex. How exactly? We leave that for another entry in the series. Before that, we will look at how the ideas presented here apply to black holes. But for now, this is what the relative flow of time is all about. - M. von Laue, “Die nordströmsche gravitationstheorie”, Jahrbuch der Radioaktivität und Elektronik 14, 263 (1917) ↩ - H. Lorentz, “Electromagnetic phenomena in a system moving with any velocity smaller than that of light”, Proceedings of the Royal Netherlands Academy of Arts and Sciences, 6: 809–831 (1904) ↩ - R. F. Marzke & J. A. Wheeler, “Gravitation as geometry. I: The geometry of space-time and the geometrodynamical standard meter”, in Gravitation and Relativity, ed. W. A. Benjamin, Inc., New York, pp. 40–64 (1964) ↩ - G. J. 
Whitrow, “The natural philosophy of time”, ed. Nelson, London and Edinburgh, pp. xi, 324 (1961) ↩
Scientists hope that this discovery could lead to a tremendous medical breakthrough. It’s an amazing creation that could one day revolutionize how scientists create artificial hearts — a tiny swimming stingray that is actually entirely synthetic. This robot stingray could lead to the development of an artificial heart, according to a paper published in the journal Science. The robot stingray propels itself with living muscle cells that have been genetically modified so the robot can be controlled by a blue light flashed at it.

But what does it have to do with artificial hearts? Today’s artificial hearts are built with mechanical pumps, but this robotic stingray shows how living muscle cells could allow a device to behave more like a human heart, and even grow over time. Stingrays actually face a problem similar to the human heart’s in that they need to overcome challenges involving fluid in motion. As the heart has to pump blood through the body, a stingray also has to pump itself through water.

The lab had previously successfully created an artificial jellyfish, but the stingray proved more ambitious and required the use of rat cells. The finished product was no bigger than a nickel and had a transparent body made of silicone with a skeleton made of gold. It had 200,000 heart muscle cells from a rat, which were genetically altered to move on the command of flashed blue lights.

“Inspired by the relatively simple morphological blueprint provided by batoid fish such as stingrays and skates, we created a biohybrid system that enables an artificial animal—a tissue-engineered ray—to swim and phototactically follow a light cue,” the abstract states. “By patterning dissociated rat cardiomyocytes on an elastomeric body enclosing a microfabricated gold skeleton, we replicated fish morphology at [1/10] scale and captured basic fin deflection patterns of batoid fish. Optogenetics allows for phototactic guidance, steering, and turning maneuvers. Optical stimulation induced sequential muscle activation via serpentine-patterned muscle circuits, leading to coordinated undulatory swimming. The speed and direction of the ray was controlled by modulating light frequency and by independently eliciting right and left fins, allowing the biohybrid machine to maneuver through an obstacle course.”
Fun riddles for kids can serve as teaching tools for your student, whether they’re enrolled in an online school like Connections Academy, a traditional brick-and-mortar school, or they’re homeschooled. Whether you’re arming your elementary school student with a list of riddles, challenging them with brainteasers, or writing short riddles together—riddles for kids are fun learning activities that you can do not just from home, but anywhere! Riddles can potentially make kids smarter. They introduce children to functional thinking, which emphasizes seeing relationships between new ideas and previous knowledge to help them learn and remember faster and more easily. Functional thinking skills developed in the elementary grades are considered a gateway to algebra and other higher mathematics. So What is a Riddle? A riddle consists of a question and a surprise answer that relies on an unexpected interpretation of the question or a play on words (such as a pun): - RIDDLE: The more of them you take, the more you leave behind. What are they? - RIDDLE: What do you get when you cross an automobile with a domesticated animal? ANSWER: A carpet. What is the Difference Between a Joke and Riddle? A riddle is considered a joke when the person asked isn’t expected to know the answer, but instead the question is simply a set-up for the punch line. This is sometimes called a “conundrum” style of riddle. Elephant jokes are an example of conundrum riddles: - RIDDLE: What was the elephant doing on the freeway? ANSWER: About 5 mph. - RIDDLE: What gets wet when drying? ANSWER: A towel. - RIDDLE: Where do fish keep their money? ANSWER: In a riverbank, of course. Why Are Riddles Great For Kids? When age-appropriate, a riddle can help elementary students: - Sharpen their vocabulary/wordplay skills - Exercise critical thinking - Improve their reading comprehension - Enhance their creativity All children have a sense of humor and riddles are among the first forms of written humor that young children can truly understand, play along with, and initiate. Children who have a well-developed sense of humor are happier and more optimistic, have better self-esteem, and handle differences better (their own and others’). Riddles also create a bond. Your child will always remember when they first heard that silly riddle from you. And because riddles are meant to be retold, they give your elementary schooler something to share with others. Many children become friends through shared humor, and for some, being able to make other kids laugh is a talent they hone and find rewarding all their lives. Easy Riddles For Kids - RIDDLE: What is orange and sounds like a parrot? ANSWER: A carrot. - RIDDLE: 100 feet in the air, but its back is on the ground. What is it? ANSWER: A centipede on its back. - RIDDLE: I will bring you down, but I will never lift you up. What am I? - RIDDLE: What is something you always have with you, but you always leave behind? Rhyming Kids’ Riddles - RIDDLE: I can be cracked, I can be made. I can be told, I can be played. What am I? ANSWER: A joke. - RIDDLE: First, I was yellow, now I am white. Salty or sweet, I crunch with each bite. What am I? - RIDDLE: Glittering points that downward thrust, sparking spears that never rust. What are they? - RIDDLE: I have two arms, but fingers I have none. I’ve got two feet, but I cannot run. I carry well, but I carry best with my feet off the ground. What am I? ANSWER: A wheelbarrow. - RIDDLE: If I have it, I shouldn’t share it, because if I share it, I won’t have it. 
What is it? ANSWER: A secret. - RIDDLE: I live off of a busy street, if you want you can stay for an hour or two, but if you don’t pay rent, I’ll tell on you. What am I? ANSWER: A parking meter. Some riddles pose problems that teach deductive reasoning. They ask a reachable “How?” or “Why?” They are more about thinking than laughing. - RIDDLE: Two mothers and two daughters go to a pet store and buy three cats. Each gets her own cat. How is this possible? ANSWER: They are a grandmother, a mother, and a daughter. The grandmother is also the mother’s mother, so there are two daughters and two mothers, but only three people. - RIDDLE: While walking across a bridge I saw a boat full of people. Yet, there wasn't a single person on the boat. Why? ANSWER: Everyone on the boat was married. Tricky Riddles For Kids - RIDDLE: I might be far from the point, but I’m not a mistake. In fact, I fix yours. What am I? ANSWER: An eraser. - RIDDLE: I can be long, I can be short. I can be grown, I can be bought. I can be painted or left bare. I can be round or a little square. What am I? ANSWER: A fingernail. - RIDDLE: I’m at the beginning of time and part of the past, present, and future. I’m part of history, but not of here and now. In a moment you’ll find me, if you know what I am. What am I? ANSWER: The letter T. - RIDDLE: You can swallow me, but I can consume you too. What am I? - RIDDLE: What is it that you can keep after giving it to someone else? ANSWER: Your word. - RIDDLE: It is the beginning of eternity, the end of time and space, the beginning of the end and the end of every space. What is it? ANSWER: The letter E. - RIDDLE: What goes around the house and in the house, but never touches the house? ANSWER: The sun. - RIDDLE: What comes once in a minute, twice in a moment, but never in a thousand years? ANSWER: The letter M. How to Write a Riddle Many children will try their own riddles unprompted and are often surprisingly clever. Even at its simplest, creating a brainteaser requires strong writing skills—expressing a thought that only makes sense if you make the wordplay or skewed connection between question and answer understandable. Here’s how to write a riddle: - Come up with a punchline—the answer to a question. - Brainstorm words and phrases that go with your punchline. List things it (the answer/punchline) does or that you can do with it. - Choose a few words or phrases from your list and search for synonyms online (or use a thesaurus). Note some surprising, interesting, or new words and phrases you find and look for synonyms to them. Make lists. - Make another list of synonyms for your punchline word or phrase. - Be your punchline. Imagine yourself as the punchline. Think about its place in the world, how it acts, or how it is acted upon (used). - Make connections between your lists. - Use simile, comparisons that use “like” or “as.” - Use metaphors, phrases that describe the object symbolically, not literally. - Use onomatopoeia, words that sound like their meaning. Before you know it, you’ll see some concepts that go together in a funny, odd, or unexpected way, and be crafting a fun brainteaser for kids. Now you get the last laugh! Not all children excel at language arts. If the idea of writing riddles causes boredom, try whipping out some kitchen ingredients and build a fruit volcano instead to keep learning from home fun!
About Color Wheel Tool
Using Color Theory
When picking colors, one of the most common concerns is deciding which hues go together. The color wheel is a simple tool based on color theory that can help answer that question. Every decorative color combination can be defined by where it resides on the color wheel, a diagram that maps the colors of the rainbow. The color wheel makes color relationships easy to see by dividing the spectrum into 12 basic hues: three primary colors, three secondary colors, and six tertiary colors. Once you learn how to use it and its hundreds of color combinations, the color wheel can provide a helpful reference when deciding what colors to try in your design, home, etc.

What is a Color Wheel?
A color wheel or color circle is an abstract illustrative organization of color hues around a circle, which shows the relationships between primary colors, secondary colors, tertiary colors, etc. A color wheel based on RGB (red, green, blue) or RGV (red, green, violet) is an additive color wheel; alternatively, the same arrangement of colors around a circle with cyan, magenta, and yellow (as in CMYK printing) is a subtractive color wheel. Most color wheels are based on three primary colors, three secondary colors, and the six intermediates formed by mixing a primary with a secondary, known as tertiary colors, for a total of 12 main divisions; some add more intermediates, for 24 named colors. Other color wheels, however, are based on the four opponent colors and may have four or eight main colors.

How the Color Wheel Works
Primary colors are red, blue, and yellow. These colors are pure, which means you can't create them from other colors, and all other colors are created from them. Secondary colors sit between the equidistant primary color spokes on the color wheel: orange, green, and violet. These hues line up between the primaries on the color wheel because they are formed when equal parts of two primary colors are combined. Tertiary colors are formed by mixing a primary color with a secondary color next to it on the color wheel. With each blending (primary with primary, then primary with secondary), the resulting hues become less vivid.

How to Use the Color Wheel to Build Color Schemes
You can rely on the color wheel's segmentation to help you mix colors and create palettes with varying degrees of contrast. Several common types of color schemes are derived from the color wheel.

Monochromatic Color Palette - Three shades, tones, and tints of one base color. Provides a subtle and conservative color combination. This is a versatile color combination that is easy to apply to design projects for a harmonious look. Although the monochromatic look is the easiest color scheme to understand, it's perhaps the trickiest to pull off. A design filled with just one color can feel boring or overwhelming, depending on how you handle it.

Analogous Color Palette - For a bit more contrast, an analogous color scheme includes colors found side by side, close together on the wheel, such as orange, yellow, and green, for a colorful but relaxing feel. Neighboring hues work well in conjunction with each other because they share the same base colors. The key to success for this scheme is to pick one shade as the main, or dominant, color in a room; it's the color you see the most of. Then choose one, two, or three shades to be limited-use accent hues. [Image: a living room demonstrating an analogous scheme of blue, purple, and fuchsia.]
Complementary Color Palette - A complementary color scheme is made by using two hues directly opposite each other on the color wheel, such as blue and orange, which is guaranteed to add energy to any design. These complementary colors work well together because they balance each other visually. You can experiment with various shades and tints of these complementing color wedges to find a scheme that appeals to you.

Split Complementary Color Palette - Alternatively known as a compound color scheme, a split complementary color scheme consists of one base color plus the two colors on either side of its complement (the hue directly opposite it). This is less attractive than complementary, but just as effective. A good example is Taco Bell's logo, which consists of blue, purple and yellow colors.

Triadic Color Palette - A triadic color scheme is made by three colors that are evenly spaced on the color wheel, which provides a high-contrast color scheme, but less so than the complementary color combination, making it more versatile. This combination creates bold, vibrant color palettes.

Tetradic Color Palette - A tetradic color scheme is a special variant of the dual color scheme, with equal distance between all colors. All four colors are distributed evenly around the color wheel, so there is no clear dominance of one color. Tetradic color schemes are bold and work best if you let one color be dominant and use the others as accents. The more colors you have in your palette, the more difficult it is to balance.

Make the Color wheel square!
A new feature of the color wheel tool for you is to use a square color wheel (I think it might be called a color cube :D). In this section, like the circular section, you can have the color wheel as Monochromatic mode, Complementary mode, Square mode, Cool-colors mode, and Warm-colors mode. In each section, select the desired color by flicking the small circles inside the square, or enter the hexadecimal code of the desired color. You can even increase or decrease the number of colors you want. Finally, like, share, or save the desired palette. This way, you can find colors that match. Let's see how this works!
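As an illustration of how these schemes can be generated programmatically, the short Python sketch below rotates a color's hue around the wheel by fixed offsets (180 degrees for complementary, plus/minus 120 for triadic, plus/minus 30 for analogous). It uses the additive RGB/HSL wheel built into Python's colorsys module, so results will differ from a traditional red-yellow-blue artist's wheel; the base color and offsets are assumptions for this example, not values taken from the tool described here.

```python
import colorsys

def rotate_hue(hex_color: str, degrees: float) -> str:
    """Rotate a color's hue around the wheel by the given number of degrees."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    r2, g2, b2 = colorsys.hls_to_rgb((h + degrees / 360.0) % 1.0, l, s)
    return "#{:02x}{:02x}{:02x}".format(round(r2 * 255), round(g2 * 255), round(b2 * 255))

base = "#1e90ff"  # an arbitrary base color chosen for the demo (a medium blue)
print("complementary:", [base, rotate_hue(base, 180)])
print("triadic:      ", [base, rotate_hue(base, 120), rotate_hue(base, 240)])
print("analogous:    ", [base, rotate_hue(base, 30), rotate_hue(base, -30)])
```

The same idea extends to split complementary (rotate by 150 and 210 degrees) and tetradic palettes (four evenly spaced rotations of 90 degrees).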
X-rays are an integral part of medical as well as dental health management. They play an important role in the diagnosis and treatment of various diseases. There is a lot of concern among the general population regarding the harmful effects of x-rays, but they are safe if standard guidelines and safety measures are followed. With the help of x-rays, dentists can observe and detect signs of disease that are not normally visible during a routine examination. X-rays, or radiographs, help dentists make an accurate assessment of the disease process, and the condition of the underlying structures and bones can also be noted. There are many uses of x-rays; they help in diagnosing abscesses, cysts, and tumors. Impacted, unerupted, missing, or extra teeth can be visualized. In periodontal disease, the location, severity, and depth of the cavities can be accurately noted. By using x-rays, the condition of teeth, soft tissue, surrounding bones, etc. can also be seen. Without x-rays, the correct diagnosis can be missed very easily. A full series of x-rays is taken of all adult new patients and kept as a record for future reference. The number of x-rays, the interval between them, and follow-up directions are determined by the dentist according to the type and severity of the disease. Multiple x-rays are needed in some conditions, such as root-canal treatment and periodontal disease. The use of radiographs in children is done taking into account their growth and development.
What is a Macroeconomic Factor?
A macroeconomic factor is a financial characteristic, trend, or condition that applies to a broad aspect of an economy, such as inflation, rather than to a certain population. Instead of affecting individuals, macroeconomic factors usually impact large populations and therefore are monitored by consumers, businesses, and governments.

Positive Macroeconomic Factors
Positive macroeconomic factors include events that help a country (or group of countries) stimulate economic expansion and stability. More specifically, any economic development that leads to increased demand for goods or services is positive. This can lead to increased revenue, potentially higher profits, and increased incomes and consumer satisfaction. A common example: when fuel prices go down, people have more discretionary money and also tend to drive more, which leads to more consumption. It's also less expensive to have items shipped, which means products can be less expensive, further increasing demand.

Neutral Macroeconomic Factors
Neutral macroeconomic factors are events that have a neutral effect, or ones that don't sway the economy positively or negatively. Instead, the consequences are determined by the intentions and reactions involved, as with trade regulations between countries. For instance, adding duties, tariffs, or other types of regulatory policies can have a variety of results, depending on how the countries involved respond.

Negative Macroeconomic Factors
Negative macroeconomic factors are events that can endanger domestic or global economies. For instance, political instability may lead to economic unrest because of the need to reallocate resources, and potential damage to assets, livelihoods, and property. As another example, a crash in one sector of the economy can widely affect the economy as a whole, such as the housing crash in 2008. Poor lending practices led to a rash of foreclosures, which had a severe negative impact on the US economy as a whole.

Examples of Macroeconomic Factors
Common measures of macroeconomic factors include gross domestic product, the rate of employment, the phases of the business cycle, the rate of inflation, the money supply, the level of government debt, and the short-term and long-term effects of trends and changes in these measures.

Gross Domestic Product
The GDP, or gross domestic product, measures the market value of goods and services created over a predetermined time frame. Think of it as a view of the productivity of the economy (domestic or global) at a given point in time. Macroeconomists usually use real GDP -- a measurement that adjusts for price changes and takes inflation into consideration. The GDP isn't 100% accurate but offers estimates that economists and investors can use to analyze business cycles -- periods of time between economic expansions and recessions. They can then look at the reasons these cycles took place, whether that's due to consumer behavior, global phenomena, or government policy. GDP can also be used to compare different economies.

The Unemployment Rate
The unemployment rate reveals the share of people in the labor force who can't find employment. When an economy sees growth, the unemployment rate is generally low. With growing GDP levels, that is, increasing output, more workers are usually required to support the increased output. These new employees now have more income, so they spend more. They might go on more vacations, buy new homes, upgrade their personal belongings, etc.
This creates demand in other areas of the economy, and those companies need to hire more people as well, which, in turn, also contributes to a lower unemployment rate. If the economy produces less (GDP goes down), it usually indicates that fewer employees are needed. This affects incomes and eventually consumption.

Inflation
Inflation affects the entire economy because it indicates that the value of the currency is decreasing. This affects everyone's decisions about consumption and savings, as well as production planning. The inflation rate is measured using the GDP deflator or the Consumer Price Index (CPI). The CPI offers a snapshot of current prices of certain goods and services, whereas the GDP deflator is the ratio of nominal GDP (measured at current prices) to real GDP (measured in inflation-adjusted, base-year prices). If the nominal GDP is higher than the real GDP, that shows that there has been inflation since the base year of the real GDP.

The Money Supply
The money supply is a measure of the amount of money in circulation. The more economic activity there is, the more money is required to support it. The money supply is measured as liquid instruments (including all cash and deposits) in a country's economy at a given point in time. A central bank may increase the money supply to offset the growing demand for money and rising interest rates in a growing economy. It may also increase the money supply to stimulate economic growth. When the money supply increases, businesses tend to increase production because of increased consumer spending due to lower interest rates, driving up profits and demand for labor.

Government Debt Levels
If government debt levels are high, a nation's standard of living may decline, as tax revenue goes toward debt payments rather than government services. Increased government borrowing can also push up interest rates in general, which makes consumption more expensive. In less stable countries, the increased debt can make it riskier and thus costlier to do business in the country.
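As a small worked example of the deflator arithmetic described above, the Python sketch below computes a GDP deflator from nominal and real GDP and then derives an implied inflation rate from the change in the deflator between two years. All figures are hypothetical and chosen only to illustrate the calculation.

```python
def gdp_deflator(nominal_gdp: float, real_gdp: float) -> float:
    """GDP deflator as an index: 100 * nominal GDP / real GDP."""
    return 100.0 * nominal_gdp / real_gdp

# Hypothetical figures (in billions), purely for illustration.
nominal_this_year, real_this_year = 22_000, 20_000  # real GDP is in base-year prices
deflator = gdp_deflator(nominal_this_year, real_this_year)  # 110.0
print(f"GDP deflator this year: {deflator:.1f}")

# If last year's deflator was 105.0, the implied inflation rate over the year is:
last_year_deflator = 105.0
inflation = (deflator - last_year_deflator) / last_year_deflator * 100
print(f"Implied inflation: {inflation:.2f}%")  # roughly 4.76%
```

Because nominal GDP exceeds real GDP in this example, the deflator sits above 100, signalling that prices have risen since the base year, which is exactly the relationship the paragraph above describes.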
Fireflies Light the Way to Better LEDs
Nature has inspired researchers to make brighter, more efficient LEDs by copying the structure of a firefly's rear end. Writing in the journal PNAS, Ki-Hun Jeong and colleagues at the Korea Advanced Institute of Science and Technology noted that fireflies rely on having bright lights, or lanterns, in order to find a mate. We can assume that there would be an evolutionary pressure to make the light as bright as possible for the least energy consumption - those who could outshine others would be more likely to find a mate, and the more efficient the process, the more likely they would still have the energy to do something about it when they did find one.

The biochemistry of firefly light production, bioluminescence, is well understood - an enzyme called luciferase acts on a chemical called luciferin, causing it to emit a yellow light. But how this light then escapes the lantern, the light-producing organ, is not well studied. So, to find out how Nature makes the most of this reaction, the researchers looked at the nanoscale structure around a firefly's lantern using electron microscopy. They saw that the surface layer, or cuticle, was very different around the lantern compared with the rest of the abdomen. Over most of the body, the surface layer was amorphous, and had no identifiable structure or pattern. Around the bioluminescent organ, however, the outer layer was ordered in neat rows. On further investigation, they realised that this structure was reducing the difference in refractive index between surface and air, cutting down reflection and scatter and therefore allowing more of the light to shine outwards.

In an example of biomimetic design, the researchers then manufactured LEDs with a similar nano-ridged curved surface layer across the top. They then compared the intensity of light given out by these adapted LEDs with similar LEDs that had a smooth surface layer. The firefly-inspired LEDs were consistently brighter than their normal counterparts, with an increase of up to 3%, across a range of light colours. To get that sort of increase from an LED today, you need to invest in expensive anti-reflective coatings. Perhaps unsurprisingly, the frequency of light best transmitted matched the yellow light of the firefly. A 3% increase may not sound like much, but it could cut electricity bills, and it will spur the development of low-cost, high-efficiency LEDs for domestic, commercial and automobile lighting, hand-held devices, and screens for LED televisions and monitors.
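A rough feel for why matching refractive indices lets more light out can be had from the normal-incidence Fresnel formula, R = ((n1 - n2) / (n1 + n2))^2, which gives the fraction of light reflected back at an interface. The sketch below is a simplified illustration, not the optical model used in the PNAS paper: the cuticle index of 1.56 is an assumed, typical value quoted for insect chitin, and the nanostructure is crudely approximated as a single intermediate-index step.

```python
# Normal-incidence Fresnel reflectance: fraction of light bounced back at an
# interface between media with refractive indices n1 and n2 (simplified model).
def reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

N_CUTICLE = 1.56  # assumed refractive index for firefly cuticle (chitin)
N_AIR = 1.00

# Smooth cuticle: one abrupt jump from cuticle to air.
r_smooth = reflectance(N_CUTICLE, N_AIR)

# Nanostructured cuticle, approximated as one intermediate-index layer:
# two smaller jumps reflect less light overall than one big jump.
n_mid = (N_CUTICLE + N_AIR) / 2
r_step1 = reflectance(N_CUTICLE, n_mid)
r_step2 = reflectance(n_mid, N_AIR)
r_structured = r_step1 + (1 - r_step1) * r_step2

print(f"Light reflected back, smooth surface:    {r_smooth:.1%}")
print(f"Light reflected back, graded (one step): {r_structured:.1%}")
```

Even this crude two-step model lets an extra couple of percent of the light travel outwards, which is in the same ballpark as the few-percent gain the researchers measured.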
Barbie Plant Science Kit Grow veggies and flowers, and learn lessons in plant biology with fun, hands-on experiments Barbie™ and her friends take a trip to a vegetable and flower farm. While at the farm, they conduct a number of experiments with plants and learn many lessons in plant biology. As you read the story, you can perform experiments alongside Barbie to learn about plants. Experiment with water-absorbent soil pellets and set up a small greenhouse to grow garden cress. Observe the effects of water, soil, sunlight, temperature, and contamination on the plants. Plant peas and make a maze for your pea plants to follow. Grow four different types of flowers from seeds: marigold, zinnia, phlox, and dianthus. Set up a plant stand for your potted flowers. Make seed bombs and decorate plaster garden stones to adorn your greenhouse and potted plants. The kit includes a 16-page illustrated storybook manual, greenhouse, six types of plant seeds, plant pots and dishes, soil pellets, and other tools and materials for the experiments.
Reading from text, Lecture 18
- 11.6 - Here the point is estimating individual points: given a particular x-value, what is our estimate of the y-value?
- 11.7 - Remember, using the simple linear equation as a model is itself an assumption.
- 11.8 - The Analysis of Variance approach is another way of looking at that test of whether or not the slope of the line is zero. In that sense it is less general than what came before, but it is a useful way of conceptualizing what is usually considered to be the important point.
- 11.9 - Even more useful is a design where the independent variable has values that are used more than once: a designed experiment, with "groups". Now we can really test whether or not the model is a good fit.
- 11.10 - Data transformations can serve a couple of useful purposes. One is to take data that probably come from a non-linear function and make the transformed data come from a linear function, so we can use all the machinery we just developed. Another is to make the data conform to the "equal variances" assumption. Also in this section are plotting procedures like the ones we have been using, and for the same kinds of reasons. They are based on using the residuals which, remember, we said act somewhat like the "scores" from before.
- 11.11 - Case study
- 11.12 - How to calculate correlation, which (remember back to The Algebra of Expectation?) is a measure of the linear relationship between two variables.
Multiple Linear Regression
- 12.1 - Notice what makes a model "linear", and what would make it non-linear.
- 12.2 - Shows a couple of examples of MLR models.
- 12.3 - Shows how to set up the matrix version, and calculate the estimates of the coefficients.
- 12.4 - How to calculate the next thing you need: the estimate of σ². Note that the ANOVA method once again only tests one thing: that all of the coefficients except the intercept equal zero. (Or, all slopes zero.)
- 12.5 - Tests of individual coefficients, confidence intervals for the mean at a particular combination of independent variable values, and confidence intervals for individual points at a particular combination of independent variable values.
- We will not go on to fitting models empirically (e.g. stepwise regression).
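For section 12.3, the matrix version of the least-squares estimates is b = (X'X)^(-1) X'y. A minimal numpy sketch of that calculation, plus the σ² estimate from 12.4, might look like the following; the data are made up for illustration and are not from the textbook.

```python
import numpy as np

# Made-up data for illustration: y modeled as b0 + b1*x1 + b2*x2 + error.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.5, 4.0, 3.5, 5.0, 6.5])
y  = np.array([3.1, 4.0, 7.2, 7.9, 10.1, 12.3])

# Design matrix X with a leading column of ones for the intercept.
X = np.column_stack([np.ones_like(x1), x1, x2])

# Least-squares coefficient estimates b = (X'X)^(-1) X'y.
# (lstsq solves the same normal equations but is numerically safer than
# forming the inverse explicitly.)
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals and the estimate of sigma^2 with n - p degrees of freedom.
residuals = y - X @ b
n, p = X.shape
sigma2_hat = residuals @ residuals / (n - p)

print("coefficient estimates:", b)
print("estimate of sigma^2:  ", sigma2_hat)
```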
On October 31, 1517, Martin Luther published his 95 theses against indulgences. This was the origin of the schism in the Church which gave birth to the Reformation. Luther's ideas spread very quickly in Europe and in France.
Article : The Lutheran Reformation

Calvin published "The Institutes of the Christian Religion" in Latin, preceded by an Address to King Francis I. The work, published in Basel, presents the theological and biblical foundations of the Reformation and their consequences. He builds on Luther's theology – justification by faith, salvation through grace – while drawing from it consequences that are often quite different, particularly concerning the organization of the Churches, the liturgy, and the relationship with the world. Other editions in French followed.
Article : Jean Calvin's doctrine

Meeting clandestinely in Paris, Protestants adopted the first Protestant Confession of Faith in France. Largely inspired by Calvin, this confession of faith was slightly modified to become the La Rochelle Confession of Faith (1571), which remains to this day one of the major texts of the French reform movement.

On March 1st, the Duke de Guise's troops massacred some hundred Protestants attending a religious service in a barn located inside the ramparts of the town of Wassy (Champagne), rather than outside them as stipulated by the Edict of January 1562. The massacre is considered the event that triggered the first War of Religion.
Article : The massacre of Wassy (1562)

This is the emblematic event of the French Wars of Religion. On August 24, after the marriage of Henry of Navarre (the future Henry IV) and Marguerite de Valois (daughter of Catherine de Medici and sister of the king), most of the Protestant leaders present in Paris at the time were assassinated by the Duke de Guise's party. The situation degenerated into a widespread massacre, even extending outside the capital.
Article : St. Bartholomew's Day (24th August 1572)

Having become King of France in 1589, after converting to Catholicism, Henry IV put an end to the Wars of Religion on April 3, 1598 by promulgating the Edict of Nantes. This edict established civil equality between Protestants and Catholics. The Edict of Nantes allowed the Protestant community to exist, but within the confines of regulations that, in fact, limited the practice of the reformed religion. It was the major act of Henry IV, and it brought peace to France after thirty-six years of religious wars.
Article : The Edict of Nantes (1598)

After the death of Henry IV, a new dispute arose concerning the religious and political organization of the Béarn, the king's personal property. In 1616, things deteriorated. Three new religious wars broke out and ended in 1629 with the Peace of Alès.
Article : The last religious wars (1621-1629)

This very busy port, which had become largely Protestant, represented a potential threat to the royal power and to Richelieu because of the risk of an English landing. Richelieu laid siege to it in 1627. The city of La Rochelle surrendered in 1628, after a heroic resistance.
Article : The last religious wars (1621-1629)

After three religious wars, the Peace of Alès deprived Protestants of their safe havens, but confirmed their right to practice their religion within the framework of the Edict of Nantes.
Article : The last religious wars (1621-1629)

The Thirty Years War was a political and religious war that devastated the Holy Roman Empire of the German nation in the 17th century.
At first a religious conflict between the Protestant princes and the Catholic house of Hapsburg, it degenerated into a European war with the intervention of foreign powers, Sweden and France. The Treaty of Westphalia put an end to it in 1648, mainly to the benefit of Sweden and France.
Article : The Thirty Year War (1618-1648)

Louvois sent a cavalry regiment to Poitou to go into winter quarters. The King's quartermaster, Marcillac, housed them in Huguenot homes: he allowed them to pillage and ruin their hosts if they did not wish to convert. Recalcitrants were ill-treated and even tortured. A wave of conversions followed. This first dragonnade was the prelude to the general dragonnade of 1685 south of the Loire, which was followed shortly after by the Revocation of the Edict of Nantes.
Article : The "Dragonnades" (1681-1685)

Decided on by Louis XIV, the Revocation of October 22, 1685 led to increased repression of Protestants (death sentences, condemnation to the galleys, forced conversions, etc.). It amplified the emigration of French Protestants to the so-called Refuge countries of Europe (Prussia, England, Switzerland, the Netherlands).

In 1702, the Abbot of Chaila was murdered on July 24 in Pont-de-Montvert. Repression was fierce in Languedoc and in the Cévennes. A desperate armed revolt then broke out. It officially ended in 1704 with the negotiations conducted by the Maréchal de Villars in the name of the king and by Jean Cavalier for the rebels. Sporadic outbreaks continued until the end of the decade.
Article : The war of the Camisards (1702-1710)

Antoine Court and Benjamin Duplan founded the Lausanne Seminary in Switzerland. All Protestant schools had been closed since the Revocation of the Edict of Nantes. This institution therefore aimed to train ministers in theology and practical service, who were then sent to the clandestine Protestant communities of the "Desert" in France.
Article : The Lausanne Theological seminary

Jean Calas, a Protestant merchant from Toulouse, was sentenced by the Toulouse Parliament to be broken on the wheel and was executed on March 10, 1762, on the unsubstantiated accusation of having murdered one of his sons, who was reputed to have converted to Catholicism. Voltaire, informed of the "affair," campaigned to have the conviction overturned, and Calas was exonerated posthumously in 1765. This affair remains the symbol of partisan injustice.
Article : The Calas affair

Two years before the Revolution, Louis XVI re-established the civil rights of Protestants when he promulgated the Edict of Tolerance on November 29, 1787. They could have their births, their marriages and their deaths recorded. But Protestants were still excluded from public office, and there was still no question of practicing their religion openly.

At the start of the French Revolution, in August 1789, the National Assembly ratified the Declaration of the Rights of Man and of the Citizen, whose Article 10 proclaims that "no one shall be harassed for his religious opinions." Religious freedom was recognized.
Article : Religious Freedom

Religious freedom is not synonymous with freedom of worship, which covers collective practice and the public events that might disturb the peace. The 1791 Constitution established freedom of worship, which had to be circumscribed by law.

On September 18, 1801, Napoleon Bonaparte signed the Concordat with the Pope. On April 8, 1802, he promulgated the Organic Articles that organized the life of the Catholic Church and of the Protestant and Jewish religions.
They provided in particular for the remuneration of the clergy by the State, the allocation and funding of places of worship, and the representation of the communities.
Article : The French Concordat

On November 4, the bylaws of the Missionary Society, whose aim was to "spread the Gospel among heathens," were adopted. This was a revival movement, whose founding members were of various nationalities. Their personal wealth allowed them to retain their independence vis-à-vis the consistories of Paris. Very quickly, the Missionary Society initiated activities in France and in Africa, and its influence was considerable.
Article : Missionary Societies

This was the first national synod since the Revocation of the Edict of Nantes (1685). This synod marked a break between the orthodox and liberal currents of Reformed Protestantism in France.
Article : Times of disagreement

For the first time, a Protestant theology school was established in Paris. It was the convergence of two movements: the desire to establish theology instruction in Paris and the arrival of two professors from Strasbourg, who did not want to be under German administration.

An organization focused on evangelizing and social work in working-class areas, founded in England, the Salvation Army was established in France in 1881 by Catherine Booth, sometimes nicknamed "the Marshal," daughter of the Methodist minister William Booth, who founded the organization in London in 1878. Organized according to a "military" model, it currently employs around 3,500 people in France in more than 60 establishments.
Article : The Salvation Army

John Mott founded the World Student Christian Federation in New York City. An assembly of various youth groups, the WSCF defines itself as an ecumenical movement of openness, dialogue and training, raising students' awareness of the problems that they may encounter in their working life.
Article : Protestant women in the Fédé movement

Christian Socialism was created at the initiative of a number of ministers, including Tommy Fallot. The aim was to confront Christian faith with the concrete realities of the social environment. In particular, this involved developing an active and ecumenical solidarity with the disadvantaged. The movement founded a journal, Christian Socialism, which published articles by Elie Gounelle, Wilfred Monod, Charles Gide, and many others concerned about social problems.
Article : Social Christianity

The law of December 9, 1905 concerning the separation of Church and State established and defined secularism in France. It guaranteed freedom of worship in the spirit of the Revolution of 1789, while giving it a legal framework, and it organized the relationship between the secular Republic and the churches of the period. The French Protestant Federation was created, which brought together most of the Protestant churches and associations.
Article : Separation of Church and State

The different Protestant and Anglican missionary societies came together to avoid any competition in the work to evangelize the non-Christian world. A cradle of ecumenism, at first predominantly Anglo-Saxon and later globalized, this movement led to the creation of the World Council of Churches in 1948.

This Protestant youth education movement was founded in 1909-1911, as a version of the scouting movement initiated by Lord Baden-Powell in Great Britain in 1907. The movement now has around 5,600 members in France and is part of the French Scouting and International Scouting movements.
Its success and its influence vary with societal changes.
Article : Scouting and women

On May 31, the ministers of the German Evangelical Church met as a clandestine synod at Barmen, a district of Wuppertal. They declared, in a confession of faith drafted in part by Karl Barth: "…We reject the false doctrine, as though the church could and would have to acknowledge as a source of its proclamation, apart from and besides this one Word of God, still other events and powers, figures and truths, as God's revelation…" They thus showed their opposition to the German Evangelical Church of the Deutsche Christen imposed by Hitler, and most particularly to its Aryan paragraph. This was the beginning of the Confessing Church.
Article : Karl Barth (1886-1968)

On March 26, the minister Marc Boegner, in the name of the National Council of the Reformed Church of France, of which he was president, wrote a letter to the Grand Rabbi of France, Isaïe Schwartz, to express his solidarity following the new anti-Semitic laws promulgated by the Vichy government: "Our Church, which has known suffering and persecution in the past, has an ardent sympathy for your communities which have seen their freedom of worship compromised in certain places and the members of which have been so abruptly struck by misfortune."

The founding assembly of the World Council of Churches (WCC) met in Amsterdam. This was the result of years of work begun during the Missionary Conference of Edinburgh in 1910 and carried out since within the Churches born of the Reformation, in two directions: Faith and Order, on the one hand, and practical Christianity, on the other. The Council federated these groups and welcomed the representatives of the Orthodox Churches. However, the Catholic Church did not participate. Rev. Marc Boegner and Rev. Willem Visser 't Hooft were President and Secretary General respectively. It was decided to establish the seat of the WCC in Geneva.
Article : Protestantism around the world

Albert Schweitzer (1875-1965), born in Strasbourg, was a theologian (professor at the theology school of Strasbourg), a musician (a famous organist), a philosopher (a specialist in Kant and European religions), and also a physician who established and managed Lambaréné Hospital (Gabon). During his Nobel Prize acceptance speech, he took a stand against nuclear armament.
Article : Albert Schweitzer (1875-1965)

The signing in 1973 of the Concord of Leuenberg, a small Swiss locality, was the result of discussions begun in 1960 between ministers and heads of Lutheran and Reformed Churches in Europe. It was recognized that the doctrinal differences of the 16th century between Lutheranism and Calvinism concerning the Last Supper were obsolete and were no longer a valid reason for dividing the two churches. The Leuenberg Concord made the creation of the United Protestant Church of France possible in 2013.
Article : French Reformed Church

The French Ecumenical Translation of the Bible (TOB) was completed. The work, conducted by Catholic, Reformed and Lutheran teams with occasional help from the Orthodox churches, benefited from remarkable circumstances: the advances in biblical exegesis, the Second Vatican Council, and the active assistance of the United Bible Societies and the Editions du Cerf publishing house (managed by the Dominicans).
Article : What is the Bible ?

The Council of Christian Churches in France (CECEF) is made up of delegations from the Catholic, Protestant, Orthodox and Armenian Apostolic Churches.
Its mission is to facilitate reflection and potentially common initiatives in three fields: Christian presence in society, service and testimony. The CECEF is co-chaired by the presidents of the first three delegations. The Lutheran Church of the Augsburg Confession of Alsace and Lorraine (ECAAL) and the Reformed Church of Alsace and Lorraine (ERAL) pooled services while retaining their concordat status and to this end created the Union of Protestant Churches of Alsace and Lorraine (UEPAL). The National Council of Evangelicals of France brings together most of the Unions of Evangelical Churches, including the Pentecostal churches, particularly the Assemblies of God. Some of the Unions of Churches members of the CNEF are also members of the French Protestant Federation. Years of conciliation work between the Reformed Church of France (ERF) and the Lutheran Evangelical Church of France (EELF) resulted in the union made possible by the Leuenberg Concord of 1973. There is now only one Church: the United Protestant Church of France, Lutheran and Reformed communion. Article : French Reformed Church
Your kidneys may be small, but they perform many vital functions that help maintain your overall health, including filtering waste and excess fluids from your blood. Serious kidney disease may lead to complete kidney failure and the need for dialysis treatments or a kidney transplant to stay alive. While effective treatments are available for many kidney diseases, people are sometimes unaware that kidney disease can often be prevented. The following are the major causes of kidney disease.

In the United States the two leading causes of kidney failure, also called end stage kidney disease or ESRD, are diabetes (also called Type 2, or adult-onset, diabetes) and high blood pressure. When these two diseases are controlled by treatment, the associated kidney disease can often be prevented or slowed down. Many effective drugs are available to treat high blood pressure. In addition, healthy lifestyle changes, such as losing weight and regular exercise, often help to control, and may even help to prevent, high blood pressure. Careful control of blood sugar in diabetics helps to prevent such complications as kidney disease, coronary heart disease and stroke. When diabetics have associated high blood pressure, special drugs called angiotensin converting enzyme (ACE) inhibitors may help to protect their kidney function.

The third leading cause of end stage kidney disease in the U.S. is glomerulonephritis, a disease that damages the kidneys' filtering units, called the glomeruli. In many cases, the cause of this disease is not known, but some cases may be inherited and others may be triggered by an infection.

Some of the other diseases that may affect the kidneys include infections, kidney stones and inherited diseases such as polycystic kidney disease. The kidneys can also be damaged by overuse of some over-the-counter pain killers and by taking illegal drugs such as heroin. Some of these diseases can be cured. In other cases, treatments can help to slow the disease and prolong life.

End stage kidney disease occurs when about 90 percent of kidney function has been lost. People with kidney failure may experience nausea, vomiting, weakness, fatigue, confusion, difficulty concentrating and loss of appetite. It can be diagnosed by blood and urine tests.
Welcome to Great American Documents. This website presents the full text of many of the documents and speeches that helped shape American history. A selection of great American speeches is also available. Please enjoy them and be inspired.

Also known as the Gayanashagowa and as The Great Binding Law, the Iroquois Constitution, originally oral, was the founding document of the Iroquois Confederacy and a forerunner of colonial democratic principles. Importantly, it is believed to have influenced concepts in the United States Constitution, and many historians place the Iroquois Constitution alongside the Mayflower Compact and the Fundamental Orders as the most important early New World governing documents. Initially five tribes, the Mohawk, Oneida, Seneca, Cayuga and Onondaga, constituted the confederacy. Under the constitution, each tribe was allowed a certain number of representatives in a body called the Great Council of Sachems. They ceded certain powers to the council while reserving the power to handle issues involving the inner workings of their own tribe. Later the Tuscarora tribe moved into the area controlled by the Iroquois and became subject to the constitution as a non-voting member. Neither the exact date of the Iroquois Constitution nor even the exact century is known, but some historians believe it to be as early as August 31, 1142 A.D., codified on a series of wampum belts, while many others believe the date to be closer to 1451 or as late as 1525. The founder of the Iroquois Confederacy is believed to be Dekanawida, born near the Bay of Quinte in southeastern Ontario, Canada. His spokesman was a Mohawk tribal lord he named Hahyonhwatha (Hiawatha). One legend says Dekanawida had a speech impediment and needed Hiawatha to do his public speaking. Later in the nineteenth century Henry Wadsworth Longfellow used the latter's name in his famous poem, The Song of Hiawatha, but as a completely different character.

The Mayflower Compact is the written covenant of the new settlers arriving at New Plymouth after crossing the Atlantic aboard the Mayflower. It is the first governing document of Plymouth Colony and established the first basis in the New World for written laws. Earlier New World settlements had failed due to a lack of government, and the compact was hashed out by the pilgrims, led by William Bradford, for the sake of their own survival. It was signed aboard ship November 11, 1620 OS (November 21, 1620 NS) by all 41 of the Mayflower's adult male passengers. Many of the settlers were fleeing religious persecution in Europe, desiring the freedom to practice Christianity according to their own determination, while others were simply in search of commercial success. About half the colony failed to survive the first winter, but the remainder lived on and prospered.

Also known as the Fundamental Orders of Connecticut, and consisting of a preamble and 11 orders (laws), the Fundamental Orders created a common government between three towns on the Connecticut River, Windsor, Hartford and Wethersfield, in modern-day Connecticut.
The Fundamental Orders are widely considered to be the first written constitution in the Western tradition and a forerunner of the modern form of representative government the United States has today. However, it should be noted that many historians cite the Iroquois Constitution, which also embraced representative government, as the first constitution in the New World. The Fundamental Orders were agreed to January 14, 1639 by the Connecticut Colony council meeting in Hartford, and were the basic law of the colony until 1662. Thomas Hooker, John Haynes and Roger Ludlow were most influential in framing the document, which was transcribed into the official colony records by secretary Thomas Welles.

Also known as the Duties in American Colonies Act, the Stamp Act was the Parliament of Great Britain's first attempt to impose a direct tax on the American colonies. For the first time the Americans would pay tax not to their own local legislatures, but directly to England. It was imposed to help defray the growing expenses the Crown incurred from the Seven Years War (1756–1763) and from administering and policing its vast new territories acquired in North America. To prove the tax was paid, the Stamp Act required the use of stamped paper for legal documents, diplomas, almanacs, broadsides, newspapers and playing cards. The Stamp Act was agreed to by the British Parliament by a large majority March 22, 1765 and was set to take effect November 1, 1765. However, the tax met great resistance in the colonies, including the passage of the Resolutions of the Stamp Act, and fueled a growing movement that eventually became the American Revolution. The Stamp Act was repealed March 18, 1766 as a matter of expedience, and against the objections of King George III (1760–1820). But that did not deter the resolve of either the British or the colonists.

Also known as the Declaration of Rights and Grievances of the Stamp Act Congress, the Resolutions of the Stamp Act were passed in response to the Parliament of Great Britain's first attempt to impose a direct tax on the American colonies. For the first time Americans would pay tax not to their own local legislatures, but directly to England. To prove the tax was paid, the Stamp Act required the use of stamped paper for legal documents, diplomas, almanacs, broadsides, newspapers and playing cards. Set to take effect November 1, 1765, it met great resistance in the colonies and fueled a growing movement that eventually became the American Revolution. As part of this resistance a Stamp Act Congress was convened October 7, 1765 in New York City with representation from 9 of the 13 colonies, and the Resolutions of the Stamp Act, comprising 14 points, were agreed to October 19, 1765. The Stamp Act Congress met in the building that would become Federal Hall in New York City, which was also the first capitol of the United States and the site where George Washington took the oath of office as the first President.

Also known as the Declaration of the Causes and Necessity of Taking Up Arms, the Declaration of Arms was a statement by the Second Continental Congress, meeting in Philadelphia, setting forth the causes and necessity of their taking up arms; importantly, it did not declare immediate independence. It was agreed to July 6, 1775 following the outbreak of fighting at Lexington and Concord and the battle of Bunker Hill. The Declaration of Arms is primarily a combination of the writing of Thomas Jefferson, John Dickinson and possibly John Rutledge.
America's most cherished symbol of liberty, the Declaration of Independence was drafted by Thomas Jefferson with the assistance of John Adams, Roger Sherman, Benjamin Franklin and Robert R. Livingston. It announced to the world that the 13 American colonies, then at war with Great Britain for more than a year, were no longer part of the British Empire or under the rule of King George III (1760-1820), and provided a formal explanation for their actions. On June 7, 1776, at the Second Continental Congress meeting in Philadelphia, Richard Henry Lee introduced a resolution urging independence which was agreed to July 2, 1776. The Second Continental Congress then approved the Declaration of Independence two days later, July 4, 1776, and it was signed by most delegates August 2, 1776. Five delegates, Elbridge Gerry, Oliver Wolcott, Lewis Morris, Thomas McKean and Matthew Thornton, signed on a later date. Colonel John Nixon gave the first public reading of the Declaration of Independence July 8, 1776, to a crowd at Independence Square in Philadelphia. Also known as the Articles of Confederation and Perpetual Union, the Articles of Confederation was drafted by the same Second Continental Congress that passed the Declaration of Independence, and established a firm league of friendship between and among the 13 American states. Individual states retained sovereignty, freedom and independence, and instead of setting up executive and judicial branches of government, there was a national legislature composed of representatives from each state comprising the Congress of the Confederation (also known as the United States in Congress Assembled). The Congress was responsible for conducting foreign affairs, declaring war and peace, maintaining an army and navy and a variety of other smaller functions, however, the Articles of Confederation denied Congress power to collect taxes, regulate interstate commerce and enforce laws. The Articles of Confederation were submitted July 12, 1776, eight days after the Declaration of Independence, and agreed to November 15, 1777. Comprised of a preamble and 13 articles, they became the ruling document of the new nation after ratification by the last of the 13 American states, Maryland, March 1, 1781. In the end the Articles of Confederation failed because the national government had too little jurisdiction over states and individuals. As noted by George Washington, a government was established that was "little more than the shadow without the substance." Also known as the Paris Peace Treaty, the Treaty of Paris signed by American and British representatives ended the American Revolutionary War, recognized United States independence and granted the new country significant western territory. It was signed at the Hôtel de York September 3, 1783 with John Adams, Benjamin Franklin and John Jay representing the United States and David Hartley, a member of British Parliament, representing King George III (1760–1820). The Treaty of Paris was ratified by the Congress of the Confederation January 14, 1784 and by Great Britain April 9, 1784. Ratified versions were exchanged in Paris May 12, 1784. Also known as An Ordinance for the Government of the Territory of the United States, North-West of the River Ohio, and as the Freedom Ordinance, the Northwest Ordinance is considered one of the most significant achievements under the Articles of Confederation. 
It told the world that the land north of the Ohio River and east of the Mississippi would be settled and eventually become part of the United States. Daniel Webster said about the Northwest Ordinance, "We are accustomed to praise lawgivers of antiquity … but I doubt whether one single law of any lawgiver, ancient or modern, has produced the effects of more distinct, marked, and lasting character than the Ordinance of 1787." It provided for the creation of not less than three nor more than five states and, importantly, prohibited slavery in the new territory as well as guaranteed inhabitants a bill of rights and addressed education. The Northwest Ordinance was agreed to by the Congress of the Confederation July 13, 1787 and later reaffirmed with slight modifications August 7, 1789 under the new United States Constitution. The land area opened up by the Northwest Ordinance was based on lines originally laid out by Thomas Jefferson in his March 1, 1784 Report of Government for Western Lands. He wanted to divide the territory into ten states, two of which would be called Cheronesus and Metropotamia, both located in what is currently Michigan. Also known as the Constitution of the United States of America and commonly abbreviated as U.S. Constitution or US Constitution, the United States Constitution is the supreme law of the nation. It defines the three branches of the federal government, a legislative branch with a bicameral Congress, an executive branch led by the President and a judicial branch headed by the Supreme Court, and carefully outlines the powers and jurisdiction of each. The constitution also reserves numerous rights for the individual states and lays out the basic rights of citizens. A federal convention was convened in Philadelphia May 14, 1787 (known as the Philadelphia Convention, as the Constitutional Convention and as the Grand Convention at Philadelphia) to amend the failing Articles of Confederation, and a quorum of seven states was achieved May 25, 1787. Over the summer the delegates decided to abandon the old Articles and fashion a new government framework. The resulting constitution was agreed to September 17, 1787 and ratified June 21, 1788. Former British Prime Minister William E. Gladstone in 1887 said the United States Constitution was "the most wonderful work ever struck off at a given time by the brain and purpose of man." It had a preamble and seven articles and was later ratified by conventions in each the 13 American states. The United States Constitution is the oldest federal constitution still in existence and has been amended 27 times since ratification, the first ten amendments being known as the Bill of Rights. The first national Thanksgiving Day November 26, 1789 was established by George Washington as a way of "giving thanks" for the United States Constitution. The Bill of Rights is the name given to the first ten amendments of the United States Constitution. They limit the powers of the federal government and protect the rights of all citizens, residents and visitors on United States territory. They were introduced by James Madison to the first United States Congress June 8, 1789, and were agreed to September 25, 1789 as a series of 12 proposed constitutional amendments. After 10 of the 12 amendments were ratified by three-fourths of the state legislatures, with Virginia casting the deciding vote, the Bill of Rights came into effect December 15, 1791. 
The first newspaper appearance of the Bill of Rights as offered to the states for ratification was the October 3, 1789 issue of the Gazette of the U.S.

Also known as the Declaration of Sentiments and Resolutions, the Declaration of Sentiments for women's rights follows the form of the United States Declaration of Independence and calls for equality with men before the law, in education and in employment. It was drafted by Elizabeth Cady Stanton and introduced at the first woman's rights convention held July 19–20, 1848 in Seneca Falls, New York. The Declaration of Sentiments was signed by 68 women and 32 men at the Seneca Falls Convention July 20, 1848. The Declaration of Sentiments is considered by many to be the most important document of the nineteenth-century American woman's movement. A great achievement was made by the women's rights movement August 18, 1920 with the ratification of the 19th Amendment to the United States Constitution, which extended voting rights to women nationwide.

The Emancipation Proclamation issued by President Abraham Lincoln during the American Civil War is considered the most important act of his presidency. It declared freedom for all slaves in any state of the Confederate States of America that did not return to the Union by January 1, 1863. Lincoln announced his plans at a Cabinet meeting July 22, 1862 and issued a preliminary draft September 22, 1862. His warning was ignored by the Confederate states, and Lincoln signed a final draft of the Emancipation Proclamation January 1, 1863, freeing the southern slaves forever. The original copy of the Emancipation Proclamation was sadly destroyed in the Chicago fire of 1871. Photographs of the document show it was primarily written in Lincoln's own hand.

Also spelled out as the Thirteenth Amendment or Amendment XIII, the 13th Amendment to the United States Constitution officially ended slavery and, with limited exceptions, such as those convicted of a crime, prohibits involuntary servitude. The 13th Amendment completed legislation to abolish slavery in America begun with President Abraham Lincoln's Emancipation Proclamation. By the time of the amendment, slavery only existed in five states: Delaware, Kentucky, Missouri, Maryland and New Jersey. The United States Senate voted 38 to 6 in favor April 8, 1864, but the House of Representatives was against adding the amendment to the constitution. However, Lincoln insisted on including support for it in the 1864 Republican Party platform, and the House finally agreed to the 13th Amendment January 31, 1865 on a vote of 119 to 56. It was ratified by the required three-fourths of the states and in force December 6, 1865, with Georgia casting the deciding vote.

Also spelled out as the Fourteenth Amendment or Amendment XIV, the 14th Amendment to the United States Constitution was specifically intended to overrule Dred Scott v. John F. A. Sandford (Supreme Court, March 6, 1857) and guarantee American citizenship, civil liberties, due process and equal protection to former slaves and their descendants. The 14th Amendment was agreed to by the United States Senate June 8, 1866 and by the House of Representatives June 13, 1866. During a controversial ratification process it was rejected by most Southern states, but the amendment received support from the required three-fourths of the states July 9, 1868.
In subsequent years the 14th Amendment, especially the equal protection clause, has been used numerous times by African Americans, women and other groups to advance rights under the law. In a 2005 Supreme Court case, Justice David Souter called it "the most significant structural provision adopted since the original framing [of the constitution]." Also spelled out as the Fifteenth Amendment or Amendment XV, the 15th Amendment to the United States Constitution prohibits states from denying voting rights to citizens based on race, color or previous condition of servitude (meaning slavery). It was specifically intended to guarantee suffrage to former male slaves and their male descendants. The 15th Amendment was agreed to by the United States House of Representatives February 25, 1869 and by the Senate the following day. It was ratified by the required three-fourths of the states and in force February 3, 1870. Also spelled out as the Nineteenth Amendment or Amendment XIX, the 19th Amendment to the United States Constitution prohibits states from denying voting rights to citizens based on gender, and was specifically intended to extend suffrage to women. The 19th Amendment followed a long battle by women to gain these rights dating back at least to the Declaration of Sentiments from the first woman's rights convention in Seneca Falls, New York in 1848. The 19th Amendment was agreed to by the United States Congress June 4, 1919, and ratified by the required three-fourths of the states and in force August 18, 1920. Tennessee cast the deciding vote when Assemblyman Harry T. Burn changed his vote. He said he was following the advice of his mother, Mrs. J. L. Burns of Niota, Tennessee, expressed in a letter he had in his pocket. It said, "Dear Son: Hurrah and vote for suffrage! Don't keep them in doubt! I notice some of the speeches against. They were bitter. I have been watching to see how you stood, but have not noticed anything yet. Don't forget to be a good boy and help Mrs. Catt [Carrie Chapman Catt] put the "rat" in ratification. Your mother." From the Declaration of Independence.
By Kriti Gupta

The world has had its fair share of natural calamities and is well versed in the damage that they cause. However, not many are aware of solar storms, because they are invisible to the naked eye. In 1859, a coronal mass ejection (a giant cloud of solar plasma) hit the Earth's magnetosphere, causing one of the largest geomagnetic solar storms that the planet has ever seen. Famously known as the Carrington Event, it sparked scientists' interest in studying solar storms.

What are solar storms?
Our sun is a huge body of molten gases that is constantly in a state of flux. Solar storms occur when the sun emits huge bursts of energy in the form of solar flares and coronal mass ejections (CMEs). These phenomena send a stream of electrical charges and magnetic fields toward the Earth at a speed of about three million miles per hour. Aurorae, which can be observed near the polar circles, are caused by the interaction of the charged particles from the sun with the atoms in the upper atmosphere of the Earth. Solar storms are as powerful as billions of nuclear bombs and can severely damage our communication systems.

Reduce, Reuse and Recycle: Low-Cost Telescope in Ooty
The GRAPES-3 experiment muon telescope, which is the largest in the world, is located in the Cosmic Ray Laboratory in Ooty. GRAPES-3 is designed to study, among other things, the sun as an accelerator of energetic particles, thereby giving us an insight into the cause and effects of solar storms. This telescope is made up of 40-year-old zinc-coated steel pipes, which were recycled for this purpose, and it is the world's cheapest cosmic ray detector. These pipes were imported from Japan, where they were used for construction purposes. A team of Indian and Japanese scientists had used them to search for neutrinos, which are sub-atomic particles produced by high-energy interactions in the galaxy. The six-metre-long pipes were buried in one of the world's deepest gold mines, the Kolar Gold Fields. Eventually, around 7,500 pipes were transported to the laboratory, which has a radio astronomy centre as well. Soon, the research on high-energy cosmic rays began when scientists started making muon sensors from the discarded pipes.

From defunct iron pipes to muon sensors
The lab assistants open the pipes and clean them with high-pressure water jets. A 100-micron tungsten wire is inserted into the pipe, and it is closed with airtight seals. The next step involves filling the pipe with a combination of methane and argon gas, and an electric field is applied across it to make the sensor effective. These pipes are then assembled in rows, below two metres of concrete which acts as an absorber, to form a muon telescope. To avoid any leakage, the scientists modified a helium spray gun by attaching a seven-cent injection syringe needle to the nozzle of the pipe. "Every day, we make ten such recycled pipes ready for our experiments. The plan was to make very sensitive sensors to detect the weakest of signals. We wanted to measure cosmic rays with higher sensitivity than ever done before," said Atul Jain, a scientist at the facility.

Innovation in space exploration: Learn from India
India seems to have mastered the art of producing efficient technology at low cost. A shining example of this is India's Mars Orbiter Mission, which was carried out on a shoestring budget of only $74 million, less than the $100 million budget of the Hollywood space thriller 'Gravity'.
There are a number of cosmic ray telescopes located around the world, but none is as powerful as the one in Ooty. The computer software programs are locally developed, as is the cooling system. The telescope not only promotes recycling and reusing but can also act as a strong line of defence against the adverse effects of solar storms. Solar explosions interfere with our technology and can damage aircraft autopilots, cause power outages and take us back to the Stone Age. Safeguards against these effects are a must to ensure the continuity of human progress.
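As a back-of-the-envelope check on the numbers quoted earlier: at roughly three million miles per hour, and taking the average Sun-Earth distance to be about 93 million miles (a standard approximation, not a figure from the GRAPES-3 team), a coronal mass ejection needs on the order of a day and a half to reach Earth. The tiny Python sketch below just does that division.

```python
# Rough transit-time estimate for a coronal mass ejection (illustrative only).
SUN_EARTH_DISTANCE_MILES = 93_000_000  # average Sun-Earth distance (approx.)
CME_SPEED_MPH = 3_000_000              # speed quoted in the article

hours = SUN_EARTH_DISTANCE_MILES / CME_SPEED_MPH
print(f"Approximate transit time: {hours:.0f} hours (about {hours / 24:.1f} days)")
```

That day-or-so window is part of why monitoring the sun matters: it offers at least some lead time before a storm arrives.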
Copy of Teaching Practice 16.03.2015
To provide clarification of comparatives and superlatives
To give sts practice in comparing two or more things using the target language
- I will start the lesson by asking the sts this question: "What is a superhero?"
- Then, I will ask the sts to describe a superhero with some adjectives.
- When they tell me the adjectives, I will write them on the board. However, while I am doing this, I will lead the students to the adjectives I want them to tell me, such as rich, strong, helpful, etc. The reason for this is that I will compare the three superheroes by using these adjectives. So, there will be all types of adjectives on the board. Hence, I can use them to show all the rules later.
- After that, I will focus on just two of the superheroes: Batman and Spiderman. I will put their pictures on the board.
- Then, we will compare them by using the adjectives which we have written on the board.
- After writing all the sentences on the board, I will show them how to change each adjective to make comparatives through these sentences.
- Now, I will show them the picture of the third superhero, Ironman, and ask them, "How can we compare these three superheroes now?"
- When they remember superlatives, we will make sentences by using the same adjectives.
- I will write the sentences on the board and show them the rules for changing the adjectives to make superlative sentences through these sentences again.
- As a result, they will be able to understand how to use comparatives and superlatives.
- After giving all the rules and making sure the sts understand the topic, I will give them a handout to check if they really understand or not.
- There are pictures of three means of transport (bike, car and plane) on the paper, and some details are given about them, such as their model, price, etc.
- The sts will work in pairs to write 3 comparative and 3 superlative sentences by using the information on the paper.
- After they finish, we will discuss the sentences together.
Difference Between 32-Bit vs 64-Bit Operating System

In 32-bit computer architecture, integers, memory addresses and data units are 32 bits wide. 64-bit computing makes use of processors whose data path widths, integer sizes, and memory addresses have a width of 64 bits. These widths describe the central processing unit of a computer, and the terms also apply to the drivers and software programs written for a particular architecture. Different software supports each of these architectures, and the choice matters, because a program built for one may not run on the other. The 32-bit hardware and software are often referred to as x86 or x86-32. The 64-bit hardware and software are referred to as x64 or x86-64. Let's have a look at the other differences between the 32-bit and 64-bit operating systems in detail.

What is 32 bit?
In computer systems, 32-bit refers to the number of bits that can be transmitted or processed in parallel. In other words, 32 bits is the number of bits that constitute a data element. A 32-bit register can store 2^32 different values. The range of integer values that can be stored in 32 bits depends on the integer representation used. With the two most popular representations, the range is 0 through 4,294,967,295 (2^32 − 1) for representation as an (unsigned) binary number, and −2,147,483,648 (−2^31) through 2,147,483,647 (2^31 − 1) for representation as two's complement. One significant consequence is that a processor with 32-bit memory addresses can directly access at most 4 GiB of byte-addressable memory. Prominent 32-bit instruction set designs used in general-purpose computing include the IBM System/360 and IBM System/370 (which had 24-bit addressing), as well as the System/370-XA, ESA/370, and ESA/390 (which had 31-bit addressing), the DEC VAX, the NS320xx, the Motorola 68000 family (the initial two models of which had 24-bit addressing), the Intel IA-32 (the 32-bit version of the x86 architecture), and the 32-bit versions of the ARM, SPARC, MIPS, PowerPC and PA-RISC designs. 32-bit instruction set architectures used for embedded computing include the 68000 family and ColdFire, x86, ARM, MIPS, PowerPC, and Infineon TriCore designs. 32-bit usually refers to the way data is saved, read, and processed. When associated with operating systems and processors, this actually indicates how many 1's and 0's are being managed to represent your data. The more bits the system can process, the more data it can manage at once.

What is 64 bit?
64-bit refers to the number of bits that can be processed or transmitted in parallel, or the number of bits used for individual elements in data formats. It also refers to word sizes that describe a particular class of computer architecture, buses, memory, and CPU. In computer design, 64-bit indicates integers, memory addresses, or other data units that are at most 64 bits, or 8 octets, wide. In microprocessors, 64 bits means the width of a register. A 64-bit microprocessor is capable of processing memory addresses and data represented by 64 bits. A 64-bit register stores 2^64 = 18,446,744,073,709,551,616 separate values. The term can also be used to indicate the size of low-level data types, such as 64-bit floating-point numbers.

Head To Head Comparison Between 32-Bit vs 64-Bit Operating System
The top differences between the 32-bit and 64-bit operating systems are summarized below.

Key Differences Between 32-Bit vs 64-Bit Operating System
Both 32-bit and 64-bit operating systems are popular choices in the market.
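Before moving on to the list of differences, the powers of two quoted in the two sections above can be checked directly. The short Python sketch below is purely illustrative; it reproduces the unsigned and signed (two's complement) ranges and the byte-addressable memory limits for 32 and 64 bits.

```python
# Reproduce the register-width figures quoted above (illustrative only).
GIB = 2**30  # bytes in one gibibyte
EIB = 2**60  # bytes in one exbibyte

for bits in (32, 64):
    distinct_values = 2**bits
    unsigned_max = 2**bits - 1
    signed_min, signed_max = -(2**(bits - 1)), 2**(bits - 1) - 1
    address_space_bytes = 2**bits  # assuming byte-addressable memory

    print(f"{bits}-bit register: {distinct_values:,} distinct values")
    print(f"  unsigned range: 0 .. {unsigned_max:,}")
    print(f"  signed range:   {signed_min:,} .. {signed_max:,}")
    if bits == 32:
        print(f"  addressable memory: {address_space_bytes / GIB:.0f} GiB")
    else:
        print(f"  addressable memory: {address_space_bytes / EIB:.0f} EiB")
```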
Let us discuss some of the major differences between a 32-bit system and a 64-bit system:
- To begin the comparison simply, we can say that a 64-bit processor is more capable than a 32-bit one: it can handle more data at once. In addition to this, it can store larger computational values, including memory addresses, which lets it access approximately four billion times the physical memory that a 32-bit processor can access.
- A 32-bit processor can handle only a limited amount of RAM, which can be taken as 4GB. 64-bit systems, on the other hand, can access much more, provided the operating system is designed in a way by which it can access more memory. Basic versions of operating systems place limits on the RAM which can be utilized by applications; 4GB is the maximum which can be utilized by a 32-bit system. The latest 64-bit versions make fuller use of the processor's capabilities. Applications like high-performance video games demand a lot of memory, and this is where 64-bit systems come out to be superior.
- If you are a Windows user you will have noticed two folders in Program Files: one named Program Files and the other Program Files (x86). The 32-bit architecture is old and has been around for a long time, and there are many applications that still target it. New 64-bit systems have the capability of running 32-bit and 64-bit software together; hence they have two different directories for the two. When a 32-bit application is encountered it is placed in the (x86) folder, and in the other folder when a 64-bit application is encountered.
- By using a 64-bit system a lot of multitasking is possible. The user can easily switch between the different applications without any glitches. Games demanding high performance and applications that consume a lot of memory can run easily on a 64-bit processor.
- 32-bit processors are perfectly proficient at managing a limited amount of RAM (in Windows, 4GB or less), while 64-bit processors are capable of using much more.
- The least amount of RAM needed for a 64-bit Windows OS is 2 GB, in contrast to 32-bit Windows, which requires 1 GB of RAM. This is to be expected, because with larger registers and addresses more memory is required.
- A big difference between 32-bit processors and 64-bit processors is the number of calculations per second they can perform, which influences the speed at which they can complete tasks. 64-bit processors come in dual-core, quad-core, six-core, and eight-core versions for home computing. Multiple cores allow for a greater number of calculations per second, which increases the processing power and helps make a computer run faster. Software programs that need many calculations to function smoothly can operate quicker and more efficiently on multi-core 64-bit processors, for the most part.
- One point to note is that 3D graphics programs and games do not benefit much, if at all, from shifting to a 64-bit computer, unless the program is a 64-bit program. A 32-bit processor is adequate for any program written for a 32-bit processor. In the case of computer games, you'll get much more performance by upgrading the video card rather than getting a 64-bit processor.
- In the end, 64-bit processors are becoming more and more common in home computers.
Most manufacturers develop computers with 64-bit processors due to lower costs and because more users are presently using 64-bit operating systems and programs. Computer components retailers are offering fewer and fewer 32-bit processors and soon may not offer any at all.

32-Bit vs 64-Bit Operating System Comparison Table
Below is a comparison between the 32-bit and 64-bit operating systems:

| The Basis of Comparison | 32-Bit Operating System | 64-Bit Operating System |
|---|---|---|
| Architecture | 32-bit architectures used in general computing include the IBM System/360 and IBM System/370, the DEC VAX, the Motorola 68000 family, and the Intel IA-32 (the 32-bit version of the x86 architecture). Architectures used for embedded computing include the 68000 family. | The registers are divided into different groups such as integer, floating-point, control, and often address registers of various uses and names, like address, index or base registers. The amount of addressable memory depends on the size of these address registers. |
| Hardware | A 32-bit system has 32-bit registers. Such a register is capable of storing 2^32, or 4,294,967,296, values. A 32-bit system can address up to 4GB of RAM; the actual usable amount can be thought of as about 3.5 GB, because part of the address space is taken up by other uses alongside the memory addresses. | A 64-bit system has 64-bit registers, each capable of holding 2^64, or 18,446,744,073,709,551,616, values. A 64-bit processor can address around 16 exabytes of memory, so it can clearly access more than 4GB of RAM. If a computer has 16GB of RAM, it is better that the system is a 64-bit system. 64-bit systems remove the memory bottlenecks that are present in 32-bit systems and run more efficiently, with wider data paths and memory blocks already allocated. |
| Software | 32-bit programs are compatible with 64-bit systems, but not vice versa. Software is still built for 32-bit systems, though less often. It is possible to install a 32-bit operating system on a 64-bit system. There are utilities and anti-virus programs which are specifically written for 32-bit systems, so it is advisable to download the ones which correspond to your system. Device drivers are also written for specific operating systems, so a 32-bit system needs its corresponding 32-bit drivers. | 64-bit software cannot run on a 32-bit system, because 64-bit instructions cannot be recognized by a 32-bit processor. All new systems ship with 64-bit versions of Windows and OS X. The 64-bit version allows access to more RAM than 32-bit. |
| Calculations per second | 32-bit systems have dual-core and quad-core versions available. | 64-bit systems come in dual-core, quad-core, six-core, and eight-core versions. Having these multiple cores available increases the number of calculations per second that can be performed. |

If you are installing any operating system, it is very important to know the type of processor your computer has and make sure that you install the right one. Along with this, it is also important that you know the type of operating system your computer is running. Most modern-day systems have 64-bit processors that provide better performance in many respects: they offer better memory utilization and speedier functioning of the system than 32-bit processors. But in some cases there will not be 64-bit drivers, and that is when a 32-bit system can come to your rescue.
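If you want to check what your own machine and interpreter are using, Python's standard library exposes this directly. The snippet below uses only well-known calls (platform.machine, struct.calcsize, sys.maxsize); note that it reports the bitness of the Python build you are running, which can be 32-bit even on a 64-bit OS.

```python
import platform
import struct
import sys

# Pointer size of the running interpreter: 4 bytes -> 32-bit, 8 bytes -> 64-bit.
interpreter_bits = struct.calcsize("P") * 8

print(f"CPU / machine architecture : {platform.machine()}")
print(f"Python interpreter build   : {interpreter_bits}-bit")
print(f"sys.maxsize > 2**32        : {sys.maxsize > 2**32}")
```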
The best option is to buy a 64-bit operating system with 64-bit applications, as this combination provides the best performance. In short, the principal contrast between 32-bit and 64-bit is that a 64-bit system can process much larger amounts of data at once; because of the volume of data a 64-bit system can process and produce, the way such systems are used has changed. Nevertheless, if you have less than 3GB of RAM, an older computer, or a 32-bit processor, a 32-bit system is usually recommended. This has been a guide to the top differences between the 32-bit and 64-bit operating systems. Here we also discussed the key differences with a comparison table.
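As a rough check on the figures quoted in the comparison table above (2^32 and 2^64 register values, the 4GB limit, and the roughly 16-exabyte ceiling), the following small Python sketch recomputes them. It assumes byte-addressable memory and a flat address space, which is a simplification of real hardware.

```python
# Recompute the addressable-memory figures quoted in the comparison table.
GIB = 2 ** 30   # bytes per gibibyte (the "GB" unit used for the RAM limits)
EIB = 2 ** 60   # bytes per exbibyte (the "exabytes" figure quoted for 64-bit)

for bits in (32, 64):
    values = 2 ** bits  # distinct addresses an n-bit register can encode
    print(f"{bits}-bit register: {values:,} values "
          f"= {values // GIB:,} GiB of byte-addressable memory")

print(f"64-bit ceiling expressed in exbibytes: {2 ** 64 // EIB} EiB")
```

The roughly 3.5 GB of usable RAM mentioned for 32-bit systems is lower than the 4 GiB ceiling because part of the address space is reserved for the operating system and memory-mapped hardware, not because the register can hold fewer values.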
The Metropolis is indeed an ancient city, but this alone would not explain why it looks like a city of a million square feet rather than another of the great ancient cities of Greek or Roman antiquity. There are many theories for what caused this; for instance, there was an event known as the fall of Atlantis, placed somewhere between the second century BCE and the fourth century CE. What do we find in all this ancient knowledge? Most Greek and Roman monuments are much less complex and, in many cases, almost impossible to identify. The Metropolis is a monument dated by some accounts to the Middle Ages, yet it is also described as a great stone monument of the Iron Age, an ancient city built in the form of a walled district. The ancient Greek writers were far more detailed about the monuments they described: they discussed not only the nature of the walls but a great many other buildings of this great civilization. I will mention just one other famous monument, the great pyramid said to have been built by the Sumerians. This structure, known as the Great Pyramid, is said to be in Iraq, located on the south side of the mountain known as Mount Sinir.
In early history, fires were built in fire pits in the center of rooms to provide warmth for buildings and homes; the early Romans, however, began using chimneys to deliver heat and to cook indoors. Ventilation improved when bakeries installed tubes in the walls to let smoke escape. The earliest evidence of true chimney use dates to the thirteenth century in Italy, but it was not until the fifteenth and sixteenth centuries that chimneys took on a cosmetic appeal, with residents building large, tall structures to serve bigger homes with multiple flues. During this period the demand for chimney specialists grew. Coal gained popularity as an alternative fuel to wood, and since coal left large, sticky deposits of residue on the walls of the chimney and flue, chimneys and their pipes needed more frequent cleaning. If not cleaned regularly, the buildup would begin to block the flow of air up the smokestack, forcing unsafe and toxic fumes back into buildings and homes. After the Great Fire of London, which began on September 2nd and lasted until September 5th of 1666, new building regulations called for smaller fireplaces, and master chimney sweeps began recruiting small children, some as young as four, to shimmy up the stacks and wipe out the soot. Not only was this poorly paid work, it was also perilous: sweeps could get stuck inside the stack or suffocate in a cloud of ash. A young boy was traditionally purchased from his poverty-stricken parents by a master sweep, who would supposedly "apprentice" the child; in practice the child became, in essence, a slave who had no realistic opportunity to advance in life. Children who worked as sweeps rarely lived past middle age. In February of 1875, a 12-year-old chimney sweep named George Brewster got stuck in a hospital chimney and ultimately perished. Brewster's death became part of a forceful campaign, and a bill was pushed through Parliament in September 1875 that ended the practice of using children as human chimney cleaners in England. Today, chimney sweeps do far more than clean a smokestack: they diagnose and troubleshoot problems, repair a wide range of stacks, and install fireplaces and hearths. Professional chimney sweeps are trained in the codes and science behind stacks and chimneys. The chimney sweep is a well-respected trade that helps homeowners and businesses maintain the safe operation of heating systems, chimneys, stoves, vents, and fireplaces of many kinds.
Effects of Monsoons #1: Geographic Effects (presentation transcript)

Objective
By the time you finish this lesson you should be able to identify and describe the geographic effects of monsoon rains on South Asia. Based upon what you learn here, you should be able to predict similar effects in different regions that also experience monsoon rains, such as Southeast Asia and East Asia.

Why Knowing Effects is Important
South Asia, East Asia, Southeast Asia.

Vocabulary
Monsoon – seasonal winds that produce a yearly pattern of rainfall.
Floodplain – the areas near a river that are covered by water when the river floods.
Erosion – the gradual wearing away of rock, usually by wind and water.
Sediment – particles of sand, silt and clay that are transported downstream by rivers; these are deposited in the river channel and its floodplain.

Disclaimer
This lesson calls these effects "geographic", but many are also social and economic effects because they affect people and the way that people satisfy their wants and needs, so there is a lot of overlap in how we classify them. For example, if a river overflows its banks due to heavy runoff from rain, this is a geographic effect. If those flood waters wash away someone's farm, then it also has a social and economic effect.

Start with What You Know
Describe the water cycle.

Precipitation and What Comes After
Precipitation is "water that falls from the sky" (rain, snow, sleet, hail, etc. – it's all H2O). What happens to the water after it falls out of the sky? For solid water, it just stays where it is until it becomes liquid (melts). If it stays in place long enough and there is enough of it, it is called a glacier. For liquid water, some is absorbed by the soil, some evaporates back into the air, and some runs off into streams and rivers.

Absorption
Precipitation that is absorbed by the soil is used by plants, and a lot will eventually evaporate back into the air. Any excess sinks down through the ground until it reaches a layer of rock that it cannot pass through. When it reaches that layer of rock it either sits there or flows along (very slowly – feet or meters per year); this is called an aquifer. Remember that word – you'll need it next year. If the water reaches the surface again, it is called a spring. Water from springs runs off just like excess rain.

Runoff
Runoff is precipitation that is not absorbed or evaporated. The heavier the rain, the more runoff is produced. The runoff goes into streams and rivers, causing them to fill up. If there is enough runoff, the streams and rivers overflow their banks – this is called "flooding". Rivers in areas that have monsoon rains also have annual floods.

Runoff – It's Not Just Water
Water that runs off also causes erosion – it picks up particles of sand, silt and clay from the ground that it passes over; it also picks up loose plant matter. The particles and plant matter are carried along by the river or stream and will eventually settle to the bottom. When the river floods, the plant matter and particles also settle out onto the floodplain, fertilizing its soil.

Soil Fertility Affects Population (People Live Where the Food Is)
OK – it affects trains, too, but only because trains go where the passengers are, and passengers are people.
Effects - Economic
When the monsoons arrive on time and bring sufficient rain, farmers are able to produce enough food for the population. When the monsoons arrive too early or too late, the plants do not get the water they need at the time they need it, so the harvest is poor. If the monsoons bring too much rain, farms and fields are damaged by flooding; if they do not bring enough rain, the plants do not get the water they need.

Effects - Social
In the modern world, the social effects of poor monsoons are not as bad as they were in earlier times. Poor monsoons used to mean famine (not enough food for everyone). Today, a poor monsoon means higher prices for food, but few people actually starve. A heavy monsoon, even today, causes destructive flooding, contaminated water, and other problems.

Effects - Political
India is a democratic republic; it calls itself the "world's largest democracy". When food and fuel prices rise dramatically due to abnormal monsoons, people expect their government to do something to fix or control the problem. They also expect the government to fix roads, buildings and other infrastructure damaged by destructive flooding. But fixing these kinds of problems requires the government to spend money that it may not have, resulting in higher taxes.
In common law legal systems, precedent is a principle or rule established in a previous legal case that is either binding on or persuasive for a court or other tribunal when deciding subsequent cases with similar issues or facts. Common-law legal systems place great value on deciding cases according to consistent principled rules, so that similar facts will yield similar and predictable outcomes, and observance of precedent is the mechanism by which that goal is attained. The principle by which judges are bound to precedents is known as stare decisis. Common-law precedent is a third kind of law, on equal footing with statutory law (that is, statutes and codes enacted by legislative bodies) and delegated legislation (in U.K. parlance) or regulatory law (in U.S. parlance) (that is, regulations promulgated by executive branch agencies).

Case law, in common-law jurisdictions, is the set of decisions of adjudicatory tribunals or other rulings that can be cited as precedent. In most countries, including most European countries, the term is applied to any set of rulings on law which is guided by previous rulings, for example, previous decisions of a government agency. Essential to the development of case law is the publication and indexing of decisions for use by lawyers, courts, and the general public, in the form of law reports. While all decisions are precedent (though at varying levels of authority as discussed throughout this article), some become "leading cases" or "landmark decisions" that are cited especially often.

Contents
- 1 Principle
- 2 Categories and classifications of precedent, and effect of classification
  - 2.1 Verticality
  - 2.2 Horizontality
  - 2.3 Federalism and parallel state and federal courts
  - 2.4 Binding precedent
  - 2.5 Persuasive precedent
  - 2.6 Nonprecedential decisions: unpublished decisions, non-publication and depublication, noncitation rules
  - 2.7 Res judicata, claim preclusion, collateral estoppel, issue preclusion, law of the case
  - 2.8 Splits, tensions
  - 2.9 Matter of first impression
- 3 Contrasting role of case law in common law, civil law, and mixed systems
- 4 Critical analysis
- 5 Application
- 6 Rules of statutory interpretation
- 7 Practical application

The doctrine of stare decisis holds that:
- A decision made by a superior court, or by the same court in an earlier decision, is binding precedent that the court itself and all its inferior courts must follow.
- A court may overturn its own precedent, but should do so only if a strong reason exists, and even in that case, should be guided by principles from superior, lateral, and inferior courts.

Case law in common-law systems
In the common-law tradition, courts decide the law applicable to a case by interpreting statutes and applying precedent, which records how and why prior cases have been decided. Unlike most civil-law systems, common-law systems follow the doctrine of stare decisis, by which most courts are bound by their own previous decisions in similar cases, and all lower courts should make decisions consistent with previous decisions of higher courts. For example, in England, the High Court and the Court of Appeal are each bound by their own previous decisions, but the Supreme Court of the United Kingdom is able to deviate from its earlier decisions, although in practice it rarely does so.
Generally speaking, higher courts do not have direct oversight over day-to-day proceedings in lower courts, in that they cannot reach out on their own initiative (sua sponte) at any time to reverse or overrule decisions of the lower courts. Normally, the burden rests with litigants to appeal rulings (including those in clear violation of established case law) to the higher courts. If a judge acts against precedent and the case is not appealed, the decision will stand. A lower court may not rule against a binding precedent, even if the lower court feels that the precedent is unjust; the lower court may only express the hope that a higher court or the legislature will reform the rule in question. If the court believes that developments or trends in legal reasoning render the precedent unhelpful, and wishes to evade it and help the law evolve, the court may either hold that the precedent is inconsistent with subsequent authority, or hold that the precedent should be "distinguished" by some material difference between the facts of the cases. If that decision goes to appeal, the appellate court will have the opportunity to review both the precedent and the case under appeal, perhaps overruling the previous case law by setting a new precedent of higher authority. This may happen several times as the case works its way through successive appeals. Lord Denning, first of the High Court of Justice, later of the Court of Appeal, provided a famous example of this evolutionary process in his development of the concept of estoppel starting in the High Trees case: Central London Property Trust Ltd v. High Trees House Ltd [1947] K.B. 130. Judges may refer to various types of persuasive authority to reach a decision in a case. Widely cited nonbinding sources include legal encyclopedias such as Corpus Juris Secundum and Halsbury's Laws of England, or the published work of the Law Commission or the American Law Institute. Some bodies are given statutory powers to issue guidance with persuasive authority or similar statutory effect, such as the Highway Code. In federal or multijurisdictional law systems, conflicts may exist between the various lower appellate courts. Sometimes these differences may not be resolved, and distinguishing how the law is applied in one district, province, division or appellate department may be necessary. Usually, only an appeal accepted by the court of last resort will resolve such differences, and for many reasons, such appeals are often not granted. Any court may seek to distinguish its present case from that of a binding precedent, to reach a different conclusion. The validity of such a distinction may or may not be accepted on appeal. An appellate court may also propound an entirely new and different analysis from that of junior courts, and may or may not be bound by its own previous decisions, or in any case may distinguish the decisions based on significant differences in the facts applicable to each case. Or, a court may view the matter before it as one of "first impression", not governed by any controlling precedent. When various members of a multi-judge court write separate opinions, the reasoning may differ; only the ratio decidendi of the majority becomes binding precedent. For example, if a 12-member court splits 5-2-3-2 across four different opinions on several different issues, only the reasoning that commands seven votes on each specific issue becomes binding, and the seven-judge majorities may differ from issue to issue.
All may be cited as persuasive (though of course opinions that concur in the majority result are more persuasive than dissents). Quite apart from the rules of precedent, the weight actually given to any reported opinion may depend on the reputation of both the court and the judges with respect to the specific issue. For example, in the United States, the Second Circuit (New York and surrounding states) is especially respected in commercial and securities law, the Seventh Circuit (in Chicago), especially Judge Posner, is highly regarded on antitrust, and the District of Columbia Circuit is highly regarded on administrative law.

Categories and classifications of precedent, and effect of classification
Generally, a common law court system has trial courts, intermediate appellate courts and a supreme court. The inferior courts conduct almost all trial proceedings. The inferior courts are bound to obey precedent established by the appellate court for their jurisdiction, and all supreme court precedent. The Supreme Court of California's explanation of this principle is that "[u]nder the doctrine of stare decisis, all tribunals exercising inferior jurisdiction are required to follow decisions of courts exercising superior jurisdiction. Otherwise, the doctrine of stare decisis makes no sense. The decisions of this court are binding upon and must be followed by all the state courts of California. Decisions of every division of the District Courts of Appeal are binding upon all the justice and municipal courts and upon all the superior courts of this state, and this is so whether or not the superior court is acting as a trial or appellate court. Courts exercising inferior jurisdiction must accept the law declared by courts of superior jurisdiction. It is not their function to attempt to overrule decisions of a higher court." An intermediate state appellate court is generally bound to follow the decisions of the highest court of that state. The application of the doctrine of stare decisis from a superior court to an inferior court is sometimes called vertical stare decisis. The idea that a judge is bound by (or at least should respect) decisions of earlier judges of similar or coordinate level is called horizontal stare decisis. In the United States federal court system, the intermediate appellate courts are divided into thirteen "circuits", each covering a territory ranging in size from the District of Columbia alone up to seven states. Each panel of judges on the court of appeals for a circuit is bound to obey the prior appellate decisions of the same circuit. Precedent of a United States court of appeals may be overruled only by the court en banc, that is, a session of all the active appellate judges of the circuit, or by the United States Supreme Court, not simply by a different three-judge panel. When a court binds itself, this application of the doctrine of precedent is sometimes called horizontal stare decisis. The state of New York has a similar appellate structure, as it is divided into four appellate departments supervised by the New York Court of Appeals. Decisions of one appellate department are not binding upon another, and in some cases the departments differ considerably on interpretations of law.

Federalism and parallel state and federal courts
In federal systems the division between federal and state law may result in complex interactions. In the United States, state courts are not considered inferior to federal courts but rather constitute a parallel court system.
- When a federal court rules on an issue of state law, the federal court must follow the precedent of the state courts, under the Erie doctrine. If an issue of state law arises during a case in federal court, and there is no decision on point from the highest court of the state, the federal court must either attempt to predict how the state courts would resolve the issue by looking at decisions from state appellate courts, or, if allowed by the constitution of the relevant state, submit the question to the state's courts.
- On the other hand, when a state court rules on an issue of federal law, the state court is bound only by rulings of the Supreme Court, but not by decisions of federal district or circuit courts of appeals. However, some states have adopted a practice of considering themselves bound by rulings of the court of appeals embracing their states, as a matter of comity rather than constitutional obligation.
In practice, however, judges in one system will almost always choose to follow relevant case law in the other system to prevent divergent results and to minimize forum shopping. Precedent that must be applied or followed is known as binding precedent (also called mandatory or binding authority). Under the doctrine of stare decisis, a lower court must honor findings of law made by a higher court that is within the appeals path of cases the court hears. In state and federal courts in the United States of America, jurisdiction is often divided geographically among local trial courts, several of which fall under the territory of a regional appeals court. All appellate courts fall under a highest court (sometimes but not always called a "supreme court"). By definition, decisions of lower courts are not binding on courts higher in the system, nor are appeals court decisions binding on local courts that fall under a different appeals court. Further, courts must follow their own proclamations of law made earlier on other cases, and honor rulings made by other courts in disputes among the parties before them pertaining to the same pattern of facts or events, unless they have a strong reason to change these rulings (see Law of the case regarding a court's previous holding being binding precedent for that court). In law, a binding precedent (also known as a mandatory precedent or binding authority) is a precedent which must be followed by all lower courts under common law legal systems. In English law it is usually created by the decision of a higher court, such as the Supreme Court of the United Kingdom, which took over the judicial functions of the House of Lords in 2009. In civil law and pluralist systems, precedent is not binding but case law is taken into account by the courts. Binding precedent relies on the legal principle of stare decisis. Stare decisis means "to stand by things decided". It ensures certainty and consistency in the application of law. Existing binding precedents from past cases are applied in principle to new situations by analogy. One law professor has described mandatory precedent as follows:
- Given a determination as to the governing jurisdiction, a court is "bound" to follow a precedent of that jurisdiction only if it is directly in point.
In the strongest sense, "directly in point" means that: (1) the question resolved in the precedent case is the same as the question to be resolved in the pending case; (2) resolution of that question was necessary to the disposition of the precedent case; (3) the significant facts of the precedent case are also presented in the pending case; and (4) no additional facts appear in the pending case that might be treated as significant. In extraordinary circumstances a higher court may overturn or overrule mandatory precedent, but will often attempt to distinguish the precedent before overturning it, thereby limiting the scope of the precedent. Under the U.S. legal system, courts are set up in a hierarchy. At the top of the federal or national system is the Supreme Court, and underneath are lower federal courts. The state court systems have hierarchical structures similar to that of the federal system. The U.S. Supreme Court has final authority on questions about the meaning of federal law, including the U.S. Constitution. For example, when the Supreme Court says that the First Amendment applies in a specific way to suits for slander, then every court is bound by that precedent in its interpretation of the First Amendment as it applies to suits for slander. If a lower court judge disagrees with a higher court precedent on what the First Amendment should mean, the lower court judge must rule according to the binding precedent. Until the higher court changes the ruling (or the law itself is changed), the binding precedent is authoritative on the meaning of the law. Lower courts are bound by the precedent set by higher courts within their region. Thus, a federal district court that falls within the geographic boundaries of the Third Circuit Court of Appeals (the mid-level appeals court that hears appeals from district court decisions from Delaware, New Jersey, Pennsylvania, and the Virgin Islands) is bound by rulings of the Third Circuit Court, but not by rulings in the Ninth Circuit (Alaska, Arizona, California, Guam, Hawaii, Idaho, Montana, Nevada, Northern Mariana Islands, Oregon, and Washington), since the Circuit Courts of Appeals have jurisdiction defined by geography. The Circuit Courts of Appeals can interpret the law as they see fit, so long as there is no binding Supreme Court precedent. One of the common reasons the Supreme Court grants certiorari (that is, agrees to hear a case) is that there is a conflict among the circuit courts as to the meaning of a federal law. There are three elements needed for a precedent to work: the hierarchy of the courts must be accepted; there must be an efficient system of law reporting; and "a balance must be struck between the need on one side for the legal certainty resulting from the binding effect of previous decisions, and on the other side the avoidance of undue restriction on the proper development of the law" (1966 Practice Statement (Judicial Precedent) by Lord Gardiner L.C.).

Binding precedent in English law
Judges are bound by the law of binding precedent in England and Wales and other common law jurisdictions. This is a distinctive feature of the English legal system. In Scotland and many countries throughout the world, particularly in mainland Europe, civil law means that judges take case law into account in a similar way, but are not obliged to do so and are required to consider the precedent in terms of principle. Their fellow judges' decisions may be persuasive but are not binding.
Under the English legal system, judges are not necessarily entitled to make their own decisions about the development or interpretation of the law. They may be bound by a decision reached in a previous case. Two facts are crucial to determining whether a precedent is binding:
- The position in the court hierarchy of the court which decided the precedent, relative to the position of the court trying the current case.
- Whether the facts of the current case come within the scope of the principle of law in previous decisions.
In a conflict of laws situation, jus cogens norms erga omnes and principles of the common law, such as those in the Universal Declaration of Human Rights, are, to a varying degree in different jurisdictions, deemed overriding; this means they are used to "read down" legislation, that is, to give it a particular purposive interpretation, for example by applying the case law of the European Court of Human Rights.

"Super stare decisis"
"Super stare decisis" is a term used for important precedent that is resistant or immune from being overturned, without regard to whether it was correctly decided in the first place. It may be viewed as one extreme in a range of precedential power, or alternatively, as expressing a belief, or a critique of that belief, that some decisions should not be overturned. In 1976, Richard Posner and William Landes coined the term "super-precedent" in an article they wrote about testing theories of precedent by counting citations. Posner and Landes used this term to describe the influential effect of a cited decision. The term "super-precedent" later became associated with a different issue: the difficulty of overturning a decision. In 1992, Rutgers professor Earl Maltz criticized the Supreme Court's decision in Planned Parenthood v. Casey for endorsing the idea that if one side can take control of the Court on an issue of major national importance (as in Roe v. Wade), that side can protect its position from being reversed "by a kind of super-stare decisis". The controversial idea that some decisions are virtually immune from being overturned, regardless of whether they were decided correctly in the first place, is the idea to which the term "super-stare decisis" now usually refers. The concept of super-stare decisis (or "super-precedent") was mentioned during the confirmation hearings of Chief Justice John Roberts and Justice Samuel Alito before the Senate Judiciary Committee. Prior to the commencement of the Roberts hearings, the chair of that committee, Senator Arlen Specter of Pennsylvania, wrote an op-ed in The New York Times referring to Roe as a "super-precedent". He revisited this concept during the hearings, but neither Roberts nor Alito endorsed the term or the concept.

Persuasive precedent (also persuasive authority) is precedent or other legal writing that is not binding precedent but that is useful or relevant and that may guide the judge in making the decision in a current case. Persuasive precedent includes cases decided by lower courts, by peer or higher courts from other geographic jurisdictions, cases made in other parallel systems (for example, military courts, administrative courts, indigenous/tribal courts, state courts versus federal courts in the United States), statements made in dicta, treatises or academic law reviews, and in some exceptional circumstances, cases of other nations, treaties, world judicial bodies, etc.
In a "case of first impression", courts often rely on persuasive precedent from courts in other jurisdictions that have previously dealt with similar issues. Persuasive precedent may become binding through its adoption by a higher court. A lower court's opinion may be considered as persuasive authority if the judge believes they have applied the correct legal principle and reasoning. Higher courts in other circuits A court may consider the ruling of a higher court that is not binding. For example, a district court in the United States First Circuit could consider a ruling made by the United States Court of Appeals for the Ninth Circuit as persuasive authority. Courts may consider rulings made in other courts that are of equivalent authority in the legal system. For example, an appellate court for one district could consider a ruling issued by an appeals court in another district. Statements made in obiter dicta Courts may consider obiter dicta in opinions of higher courts. Dicta of a higher court, though not binding, will often be persuasive to lower courts. The phrase obiter dicta is usually translated as "other things said", but due to the high number of judges and individual concurring opinions, it is often hard to distinguish from the ratio decidendi (reason for the decision). For these reasons, the obiter dicta may often be taken into consideration by a court. A litigant may also consider obiter dicta if a court has previously signaled that a particular legal argument is weak and may even warrant sanctions if repeated. A case decided by a multijudge panel could result in a split decision. While only the majority opinion is considered precedential, an outvoted judge can still publish a dissenting opinion. Common patterns for dissenting opinions include: - an explanation of how the outcome of the case might be different on slightly different facts, in an attempt to limit the holding of the majority - planting seeds for a future overruling of the majority opinion A judge in a subsequent case, particularly in a different jurisdiction, could find the dissenting judge's reasoning persuasive. In the jurisdiction of the original decision, however, a judge should only overturn the holding of a court lower or equivalent in the hierarchy. A district court, for example, could not rely on a Supreme Court dissent as a basis to depart from the reasoning of the majority opinion. However, lower courts occasionally cite dissents, either for a limiting principle on the majority, or for propositions that are not stated in the majority opinion and not inconsistent with that majority, or to explain a disagreement with the majority and to urge reform (while following the majority in the outcome). Treatises, restatements, law review articles Courts may consider the writings of eminent legal scholars in treatises, restatements of the law, and law reviews. The extent to which judges find these types of writings persuasive will vary widely with elements such as the reputation of the author and the relevance of the argument. Persuasive effect of decisions from other jurisdictions The courts of England and Wales are free to consider decisions of other jurisdictions, and give them whatever persuasive weight the English court sees fit, even though these other decisions are not binding precedent. Jurisdictions that are closer to modern English common law are more likely to be given persuasive weight (for example Commonwealth states such as Canada, Australia, or New Zealand). 
Persuasive weight might be given to other common law courts, such as those of the United States, most often where the American courts have been particularly innovative, e.g. in product liability and certain areas of contract law. In the United States, in the late 20th and early 21st centuries, the concept of a U.S. court considering foreign law or precedent has been considered controversial by some parties, and the Supreme Court is split on this issue. This critique is recent: in the early history of the United States, citation of English authority was ubiquitous, and one of the first acts of many of the new state legislatures was to adopt the body of English common law into the law of the state. Citation to English cases was common through the 19th and well into the 20th centuries. Even in the late 20th and early 21st centuries, it is relatively uncontroversial for American state courts to rely on English decisions for matters of pure common (i.e. judge-made) law. Within the federal legal systems of several common-law countries, and most especially the United States, it is relatively common for the distinct lower-level judicial systems (e.g. state courts in the United States and Australia, provincial courts in Canada) to regard the decisions of other jurisdictions within the same country as persuasive precedent. Particularly in the United States, the adoption of a legal doctrine by a large number of other state judiciaries is regarded as highly persuasive evidence that such doctrine is preferred. A good example is the adoption in Tennessee of comparative negligence (replacing contributory negligence as a complete bar to recovery) by the 1992 Tennessee Supreme Court decision McIntyre v. Balentine (by this point all US jurisdictions save Tennessee, five other states, and the District of Columbia had adopted comparative negligence schemes). Moreover, in American law, the Erie doctrine requires federal courts sitting in diversity actions to apply state substantive law, but in a manner consistent with how the court believes the state's highest court would rule in that case. Since such decisions are not binding on state courts, but are often very well-reasoned and useful, state courts cite federal interpretations of state law fairly often as persuasive precedent, although it is also fairly common for a state high court to reject a federal court's interpretation of its jurisprudence.

Nonprecedential decisions: unpublished decisions, non-publication and depublication, noncitation rules
Nonpublished opinions, or unpublished opinions, are those decisions of courts that are not available for citation as precedent because the judges making the opinion deem the case to have less precedential value. Selective publication is the process by which a judge or the justices of a court decide whether a decision is to be published in a reporter. "Unpublished" federal appellate decisions are published in the Federal Appendix. Depublication is the power of a court to make a previously published order or opinion unpublished. Litigation that is settled out of court generates no written decision, and thus has no precedential effect. As one practical effect, the U.S. Department of Justice settles many cases against the federal government simply to avoid creating adverse precedent.
Res judicata, claim preclusion, collateral estoppel, issue preclusion, law of the case
Several rules may cause a decision to apply as narrow "precedent" to preclude future legal positions of the specific parties to a case, even if a decision is non-precedential with respect to all other parties.

Res judicata, claim preclusion
Once a case is decided, the same plaintiff cannot sue the same defendant again on any claim arising out of the same facts. The law requires plaintiffs to put all issues on the table in a single case, not split the case. For example, in a case of an auto accident, the plaintiff cannot sue first for property damage, and then for personal injury in a separate case. This is called res judicata or claim preclusion ("res judicata" is the traditional name going back centuries; the name shifted to "claim preclusion" in the United States over the late 20th century). Claim preclusion applies regardless of whether the plaintiff wins or loses the earlier case, even if the later case raises a different legal theory, and even if the second claim is unknown at the time of the first case. Exceptions are extremely limited, for example if the two claims for relief must necessarily be brought in different courts (for example, one claim might be exclusively federal, and the other exclusively state).

Collateral estoppel, issue preclusion
Once a case is finally decided, any issues decided in the previous case may be binding against the party who lost the issue in later cases, even in cases involving other parties. For example, if a first case decides that a party was negligent, then other plaintiffs may rely on that earlier determination in later cases, and need not reprove the issue of negligence. For another example, if a patent is shown to be invalid in a case against one accused infringer, that same patent is invalid against all other accused infringers; invalidity need not be reproven. Again, limits and exceptions on this principle exist. The principle is called collateral estoppel or issue preclusion.

Law of the case
Within a single case, once there has been a first appeal, both the lower court and the appellate court itself will not further review the same issue, and will not re-review an issue that could have been appealed in the first appeal. Exceptions are limited to three "exceptional circumstances": (1) when substantially different evidence is raised at a subsequent trial, (2) when the law changes after the first appeal, for example by a decision of a higher court, or (3) when a decision is clearly erroneous and would result in a manifest injustice. This principle is called "law of the case".

On many questions, reasonable people may differ. When two of those people are judges, the tension between two lines of precedent may be resolved as follows.

Jurisdictional splits: disagreements among different geographical regions or levels of federalism
If the two courts are in separate, parallel jurisdictions, there is no conflict, and two lines of precedent may persist. Courts in one jurisdiction are influenced by decisions in others, and notably better rules may be adopted over time.

Splits among different areas of law
Courts try to formulate the common law as a "seamless web" so that principles in one area of the law apply to other areas. However, this principle does not apply uniformly. Thus, a word may have different definitions in different areas of the law, or different rules may apply so that a question has different answers in different legal contexts.
Judges try to minimize these conflicts, but they arise from time to time, and under principles of stare decisis may persist for some time.

Matter of first impression
A matter of first impression (also known as an "issue of first impression," "case of first impression," or, in Latin, as primae impressionis) is an issue where the parties disagree on what the applicable law is, and there is no prior binding authority, so that the matter has to be decided for the first time. A first impression case may be a first impression in only a particular jurisdiction. By definition, a case of first impression cannot be decided by precedent. Since there is no precedent for the court to follow, the court uses the plain language and legislative history of any statute that must be interpreted, holdings of other jurisdictions, persuasive authority and analogies from prior rulings by other courts (which may be higher, peer, or lower courts in the hierarchy, or from other jurisdictions), commentaries and articles by legal scholars, and the court's own logic and sense of justice.

Contrasting role of case law in common law, civil law, and mixed systems
The different roles of case law in civil law and common law traditions create differences in the way that courts render decisions. Common law courts generally explain in detail the legal rationale behind their decisions, with citations of both legislation and previous relevant judgments, and often an exegesis of the wider legal principles. This reasoning is called the ratio decidendi and constitutes a precedent binding on other courts; further analyses not strictly necessary to the determination of the current case are called obiter dicta, which have persuasive authority but are not technically binding. By contrast, decisions in civil law jurisdictions are generally very short, referring only to statutes. The reason for this difference is that these civil law jurisdictions apply legislative positivism (a form of extreme legal positivism), which holds that legislation is the only valid source of law because it has been voted on democratically; thus, it is not the judiciary's role to create law, but rather to interpret and apply statute, and therefore their decisions must reflect that.

Civil law systems
Stare decisis is not usually a doctrine used in civil law systems, because it violates the legislative positivist principle that only the legislature may make law. Instead, the civil law system relies on the doctrine of jurisprudence constante, according to which if a court has adjudicated a consistent line of cases that arrive at the same holdings using sound reasoning, then the previous decisions are highly persuasive but not controlling on issues of law. This doctrine is similar to stare decisis insofar as it dictates that a court's decision must produce a cohesive and predictable result. In theory, lower courts are generally not bound by the precedents of higher courts. In practice, the need for predictability means that lower courts generally defer to the precedent of higher courts. As a result, the precedent of courts of last resort, such as the French Court of Cassation and the Council of State, is recognized as being de facto binding on lower courts. The doctrine of jurisprudence constante also influences how court decisions are structured. In general, court decisions of common law jurisdictions give a sufficient ratio decidendi to guide future courts.
The ratio is used to justify a court decision on the basis of previous case law as well as to make it easier to use the decision as a precedent for future cases. By contrast, court decisions in some civil law jurisdictions (most prominently France) tend to be extremely brief, mentioning only the relevant legislation and codal provisions and not going into the ratio decidendi in any great detail. This is the result of the legislative positivist view that the court is only interpreting the legislature's intent and therefore detailed exposition is unnecessary. Because of this, the work of articulating the ratio decidendi is carried out by legal academics (doctrinal writers), who provide the explanations that in common law jurisdictions would be provided by the judges themselves. In other civil law jurisdictions, such as the German-speaking countries, the ratio decidendi tends to be much more developed than in France, and courts will frequently cite previous cases and doctrinal writers. However, some courts (such as German courts) place less emphasis on the particular facts of the case than common law courts, and more emphasis on the discussion of various doctrinal arguments and on finding what the correct interpretation of the law is. The mixed systems of the Nordic countries are sometimes considered a branch of the civil law, but they are sometimes counted as separate from the civil law tradition. In Sweden, for instance, case law arguably plays a more important role than in some of the continental civil law systems. The two highest courts, the Supreme Court (Högsta domstolen) and the Supreme Administrative Court (Högsta förvaltningsdomstolen), have the right to set precedent which has persuasive authority on all future application of the law. Appellate courts, be they judicial (hovrätter) or administrative (kammarrätter), may also issue decisions that act as guides for the application of the law, but these decisions are persuasive, not controlling, and may therefore be overturned by higher courts.

Mixed or bijuridical systems
Some mixed systems, such as Scots law in Scotland, South African law, and the law of Quebec and Louisiana, do not fit into the civil vs. common law dichotomy because they mix portions of both. Such systems may have been heavily influenced by the common law tradition; however, their private law is firmly rooted in the civil law tradition. Because of their position between the two main systems of law, these types of legal systems are sometimes referred to as "mixed" systems of law. Louisiana courts, for instance, operate under both stare decisis and jurisprudence constante. In South Africa, the precedent of higher courts is absolutely or fully binding on lower courts, whereas the precedent of lower courts only has persuasive authority on higher courts; horizontally, precedent is prima facie or presumptively binding between courts.

Role of academics in civil law jurisdictions
Law professors in common law traditions play a much smaller role in developing case law than professors in civil law traditions. Because court decisions in civil law traditions are brief and not amenable to establishing precedent, much of the exposition of the law in civil law traditions is done by academics rather than by judges; this is called doctrine and may be published in treatises or in journals such as Recueil Dalloz in France.
Historically, common law courts relied little on legal scholarship; thus, at the turn of the twentieth century, it was very rare to see an academic writer quoted in a legal decision (except perhaps for the academic writings of prominent judges such as Coke and Blackstone). Today academic writers are often cited in legal argument and decisions as persuasive authority; often, they are cited when judges are attempting to implement reasoning that other courts have not yet adopted, or when the judge believes the academic's restatement of the law is more compelling than can be found in precedent. Thus common law systems are adopting one of the approaches long common in civil law jurisdictions. Justice Louis Brandeis, in a heavily footnoted dissent to Burnet v. Coronado Oil & Gas Co., 285 U.S. 393, 405–411 (1932), explained (citations and quotations omitted): - Stare decisis is not ... a universal, inexorable command. "The rule of stare decisis, though one tending to consistency and uniformity of decision, is not inflexible. Whether it shall be followed or departed from is a question entirely within the discretion of the court, which is again called upon to consider a question once decided." Stare decisis is usually the wise policy, because in most matters it is more important that the applicable rule of law be settled than that it be settled right. This is commonly true even where the error is a matter of serious concern, provided correction can be had by legislation. But in cases involving the Federal Constitution, where correction through legislative action is practically impossible, this Court has often overruled its earlier decisions. The Court bows to the lessons of experience and the force of better reasoning, recognizing that the process of trial and error, so fruitful in the physical sciences, is appropriate also in the judicial function. ... In cases involving the Federal Constitution the position of this Court is unlike that of the highest court of England, where the policy of stare decisis was formulated and is strictly applied to all classes of cases. Parliament is free to correct any judicial error; and the remedy may be promptly invoked. - The reasons why this Court should refuse to follow an earlier constitutional decision which it deems erroneous are particularly strong where the question presented is one of applying, as distinguished from what may accurately be called interpreting, the Constitution. In the cases which now come before us there is seldom any dispute as to the interpretation of any provision. The controversy is usually over the application to existing conditions of some well-recognized constitutional limitation. This is strikingly true of cases under the due process clause when the question is whether a statute is unreasonable, arbitrary or capricious; of cases under the equal protection clause when the question is whether there is any reasonable basis for the classification made by a statute; and of cases under the commerce clause when the question is whether an admitted burden laid by a statute upon interstate commerce is so substantial as to be deemed direct. ... 
The United States Court of Appeals for the Third Circuit has stated:
- A judicial precedent attaches a specific legal consequence to a detailed set of facts in an adjudged case or judicial decision, which is then considered as furnishing the rule for the determination of a subsequent case involving identical or similar material facts and arising in the same court or a lower court in the judicial hierarchy.
The United States Court of Appeals for the Ninth Circuit has stated:
- Stare decisis is the policy of the court to stand by precedent; the term is but an abbreviation of stare decisis et non quieta movere — "to stand by and adhere to decisions and not disturb what is settled". Consider the word "decisis". The word means, literally and legally, the decision. Under the doctrine of stare decisis a case is important only for what it decides — for the "what", not for the "why", and not for the "how". Insofar as precedent is concerned, stare decisis is important only for the decision, for the detailed legal consequence following a detailed set of facts.
- [T]hat is the way of the common law, the judges preferring to go "from case to case, like the ancient Mediterranean mariners, hugging the coast from point to point, and avoiding the dangers of the open sea of system or science."
Precedent viewed against passing time can serve to establish trends, thus indicating the next logical step in evolving interpretations of the law. For instance, if immigration has become more and more restricted under the law, then the next legal decision on that subject may serve to restrict it further still. The existence of submerged precedent (reasoned opinions not made available through conventional legal research sources) has been identified as a potentially distorting force in the evolution of law. Scholars have recently attempted to apply network theory to precedent in order to establish which precedent is most important or authoritative, and how the court's interpretations and priorities have changed over time. Early English common law did not have or require the stare decisis doctrine for a range of legal and technological reasons:
- During the formative period of the common law, the royal courts constituted only one among many fora in which the English could settle their disputes. The royal courts operated alongside and in competition with ecclesiastic, manorial, urban, mercantile, and local courts.
- Royal courts were not organised into a hierarchy; instead, different royal courts (exchequer, common pleas, king's bench, and chancery) were in competition with each other.
- Substantive law on almost all matters was neither legislated nor codified, eliminating the need for courts to interpret legislation.
- Common law's main distinctive features and focus were not substantive law, which was customary law, but procedure.
- The practice of citing previous cases was not to find binding legal rules but to offer evidence of custom.
- Customary law was not a rational and consistent body of rules and did not require a system of binding precedent.
- Before the printing press, the state of the written records of cases rendered the stare decisis doctrine utterly impracticable.
These features changed over time, opening the door to the doctrine of stare decisis: by the end of the eighteenth century, the common law courts had absorbed most of the business of their nonroyal competitors, although there was still internal competition among the different common law courts themselves.
During the nineteenth century, legal reform movements in both England and the United States brought this to an end as well by merging the various common law courts into a unified system of courts with a formal hierarchical structure. This and the advent of reliable private case reporters made adherence to the doctrine of stare decisis practical, and the practice soon evolved of holding judges to be bound by the decisions of courts of superior or equal status in their jurisdiction.

United States legal system
Stare decisis applies to the holding of a case, rather than to obiter dicta ("things said by the way"). As the United States Supreme Court has put it: "dicta may be followed if sufficiently persuasive but are not binding." In the United States Supreme Court, the principle of stare decisis is most flexible in constitutional cases: "Stare decisis is usually the wise policy, because in most matters it is more important that the applicable rule of law be settled than that it be settled right. ... But in cases involving the Federal Constitution, where correction through legislative action is practically impossible, this Court has often overruled its earlier decisions. ... This is strikingly true of cases under the due process clause." (Burnet v. Coronado Oil & Gas Co., 285 U.S. 393, 406–407, 410 (1932) (Brandeis, J., dissenting)). For example, in the years 1946–1992, the U.S. Supreme Court reversed itself in about 130 cases. The U.S. Supreme Court has further explained as follows: "[W]hen convinced of former error, this Court has never felt constrained to follow precedent. In constitutional questions, where correction depends upon amendment, and not upon legislative action, this Court throughout its history has freely exercised its power to reexamine the basis of its constitutional decisions." The United States Supreme Court has stated that where a court gives multiple reasons for a given result, each alternative reason that is "explicitly" labeled by the court as an "independent" ground for the decision is not treated as "simply a dictum".

English legal system
The doctrine of binding precedent or stare decisis is basic to the English legal system. Special features of the English legal system include the following:

The Supreme Court's ability to override its own precedent
The British House of Lords, as the court of last appeal outside Scotland before it was replaced by the UK Supreme Court, was not strictly bound to always follow its own decisions until the case London Street Tramways v London County Council [1898] AC 375. After this case, once the Lords had given a ruling on a point of law, the matter was closed unless and until Parliament made a change by statute. This is the strictest form of the doctrine of stare decisis (one not applied, previously, in common law jurisdictions, where there was somewhat greater flexibility for a court of last resort to review its own precedent). This situation changed, however, after the issuance of the Practice Statement of 1966. It enabled the House of Lords to adapt English law to meet changing social conditions. In R v G & R (2003), the House of Lords overruled its decision in Caldwell (1981), which had allowed the Lords to establish mens rea ("guilty mind") by measuring a defendant's conduct against that of a "reasonable person," regardless of the defendant's actual state of mind. However, the Practice Statement has been seldom applied by the House of Lords, usually only as a last resort.
As of 2005, the House of Lords had rejected its past decisions no more than 20 times. The Lords are reluctant to use the Practice Statement because they fear introducing uncertainty into the law. In particular, the Practice Statement stated that the Lords would be especially reluctant to overrule themselves in criminal cases because of the importance of certainty in that area of law. The first criminal case to be overruled under the Practice Statement was Anderton v Ryan (1985), which was overruled by R v Shivpuri (1986), two decades after the Practice Statement was issued. Remarkably, the precedent overruled had been made only a year before, but it had been criticised by several academic lawyers. As a result, Lord Bridge stated he was "undeterred by the consideration that the decision in Anderton v Ryan was so recent. The Practice Statement is an effective abandonment of our pretension to infallibility. If a serious error embodied in a decision of this House has distorted the law, the sooner it is corrected the better." Still, the House of Lords has remained reluctant to overrule itself in some cases; in R v Kansal (2002), the majority of House members adopted the opinion that R v Lambert had been wrongly decided, but declined to depart from their earlier decision. Distinguishing precedent on legal (rather than fact) grounds A precedent does not bind a court if the earlier decision was made per incuriam ("through lack of care"). For example, if a statutory provision or precedent had not been brought to the previous court's attention before its decision, the precedent would not be binding. Rules of statutory interpretation One of the most important roles of precedent is to resolve ambiguities in other legal texts, such as constitutions, statutes, and regulations. The process involves, first and foremost, consultation of the plain language of the text, as enlightened by the legislative history of enactment, subsequent precedent, and experience with various interpretations of similar texts. Statutory interpretation in the U.K. A judge's normal aids include access to all previous cases in which a precedent has been set, and a good English dictionary. Judges and barristers in the U.K. use three primary rules for interpreting the law. Under the literal rule, the judge should do what the actual legislation states rather than trying to do what the judge thinks it means. The judge should use the plain everyday ordinary meaning of the words, even if this produces an unjust or undesirable outcome. A good example of problems with this method is R v Maginnis (1987), in which several judges in separate opinions found several different dictionary meanings of the word "supply". Another example is Fisher v Bell, where it was held that a shopkeeper who placed an illegal item in a shop window with a price tag did not make an offer to sell it, because of the specific meaning of "offer for sale" in contract law. As a result of this case, Parliament amended the statute concerned to end this discrepancy. The golden rule is used when use of the literal rule would obviously create an absurd result. There are two ways in which the golden rule can be applied: a narrow method, and a broad method. Under the narrow method, when there are apparently two contradictory meanings to the wording of a legislative provision, or the wording is ambiguous, the least absurd is to be preferred. Under the broad method, the court modifies the literal meaning in such a way as to avoid the absurd result. An example of the latter approach is Adler v George (1964). 
Under the Official Secrets Act 1920 it was an offence to obstruct HM Forces "in the vicinity of" a prohibited place. Adler argued that he was not in the vicinity of such a place but was actually in it. The court chose not to read the statutory wording in a literal sense to avoid what would otherwise be an absurd result, and Adler was convicted. The mischief rule is the most flexible of the interpretation methods. Stemming from Heydon's Case (1584), it allows the court to enforce what the statute is intended to remedy rather than what the words actually say. For example, in Corkery v Carpenter (1950), a man was found guilty of being drunk in charge of a carriage, although in fact he only had a bicycle. Statutory Interpretation in the United States In the United States, the courts have stated consistently that the text of the statute is read as it is written, using the ordinary meaning of the words of the statute. - "[I]n interpreting a statute a court should always turn to one cardinal canon before all others. ... [C]ourts must presume that a legislature says in a statute what it means and means in a statute what it says there." Connecticut Nat'l Bank v. Germain, 112 S. Ct. 1146, 1149 (1992). Indeed, "[w]hen the words of a statute are unambiguous, then, this first canon is also the last: 'judicial inquiry is complete.' " - "A fundamental rule of statutory construction requires that every part of a statute be presumed to have some effect, and not be treated as meaningless unless absolutely necessary." Raven Coal Corp. v. Absher, 153 Va. 332, 149 S.E. 541 (1929). - "In assessing statutory language, unless words have acquired a peculiar meaning, by virtue of statutory definition or judicial construction, they are to be construed in accordance with their common usage." Muller v. BP Exploration (Alaska) Inc., 923 P.2d 783, 787–88 (Alaska 1996). However, most legal texts have some lingering ambiguity—inevitably, situations arise in which the words chosen by the legislature do not address the precise facts in issue, or there is some tension among two or more statutes. In such cases, a court must analyze the various available sources, and reach a resolution of the ambiguity. The "Canons of statutory construction" are discussed in a separate article. Once the ambiguity is resolved, that resolution has binding effect as described in the rest of this article. Although inferior courts are bound in theory by superior court precedent, in practice a judge may believe that justice requires an outcome at some variance with precedent, and may distinguish the facts of the individual case on reasoning that does not appear in the binding precedent. On appeal, the appellate court may either adopt the new reasoning, or reverse on the basis of precedent. On the other hand, if the losing party does not appeal (typically because of the cost of the appeal), the lower court decision may remain in effect, at least as to the individual parties. Occasionally, a lower court judge explicitly states personal disagreement with the judgment he or she has rendered, but that he or she is required to do so by binding precedent. Note that inferior courts cannot evade binding precedent of superior courts, but a court can depart from its own prior decisions. In the United States, stare decisis can interact in counterintuitive ways with the federal and state court systems. 
On an issue of federal law, a state court is not bound by an interpretation of federal law at the district or circuit level, but is bound by an interpretation by the United States Supreme Court. On an interpretation of state law, whether common law or statutory law, the federal courts are bound by the interpretation of a state court of last resort, and are required normally to defer to the precedent of intermediate state courts as well. Courts may choose to obey precedent of international jurisdictions, but this is not an application of the doctrine of stare decisis, because foreign decisions are not binding. Rather, a foreign decision that is obeyed on the basis of the soundness of its reasoning will be called persuasive authority — indicating that its effect is limited to the persuasiveness of the reasons it provides. Originalism is an approach to interpretation of a legal text in which controlling weight is given to the intent of the original authors (at least the intent as inferred by a modern judge). In contrast, a non-originalist looks at other cues to meaning, including the current meaning of the words, the pattern and trend of other judicial decisions, changing context and improved scientific understanding, observation of practical outcomes and "what works," contemporary standards of justice, and stare decisis. Both are directed at interpreting the text, not changing it—interpretation is the process of resolving ambiguity and choosing from among possible meanings, not changing the text. The two approaches look at different sets of underlying facts that may or may not point in the same direction--stare decisis gives most weight to the newest understanding of a legal text, while originalism gives most weight to the oldest. While they don't necessarily reach different results in every case, the two approaches are in direct tension. Originalists such as Justice Antonin Scalia argue that "Stare decisis is not usually a doctrine used in civil law systems, because it violates the principle that only the legislature may make law." Justice Scalia argues that America is a civil law nation, not a common law nation. By principle, originalists are generally unwilling to defer to precedent when precedent seems to come into conflict with the originalist's own interpretation of the Constitutional text or inferences of original intent (even in situations where there is no original source statement of that original intent). However, there is still room within an originalist paradigm for stare decisis; whenever the plain meaning of the text has alternative constructions, past precedent is generally considered a valid guide, with the qualifier being that it cannot change what the text actually says. Originalists vary in the degree to which they defer to precedent. In his confirmation hearings, Justice Clarence Thomas answered a question from Senator Strom Thurmond, qualifying his willingness to change precedent in this way: I think overruling a case or reconsidering a case is a very serious matter. Certainly, you would have to be of the view that a case is incorrectly decided, but I think even that is not adequate. There are some cases that you may not agree with that should not be overruled. Stare decisis provides continuity to our system, it provides predictability, and in our process of case-by-case decision-making, I think it is a very important and critical concept. 
A judge that wants to reconsider a case and certainly one who wants to overrule a case has the burden of demonstrating that not only is the case incorrect, but that it would be appropriate, in view of stare decisis, to make that additional step of overruling that case. Possibly he has changed his mind, or there is a very large body of cases which merit "the additional step" of ignoring the doctrine; according to Scalia, "Clarence Thomas doesn't believe in stare decisis, period. If a constitutional line of authority is wrong, he would say, let's get it right." Professor Caleb Nelson, a former clerk for Justice Thomas and law professor at the University of Virginia, has elaborated on the role of stare decisis in originalist jurisprudence: American courts of last resort recognize a rebuttable presumption against overruling their own past decisions. In earlier eras, people often suggested that this presumption did not apply if the past decision, in the view of the court's current members, was demonstrably erroneous. But when the Supreme Court makes similar noises today, it is roundly criticized. At least within the academy, conventional wisdom now maintains that a purported demonstration of error is not enough to justify overruling a past decision. ...[T]he conventional wisdom is wrong to suggest that any coherent doctrine of stare decisis must include a presumption against overruling precedent that the current court deems demonstrably erroneous. The doctrine of stare decisis would indeed be no doctrine at all if courts were free to overrule a past decision simply because they would have reached a different decision as an original matter. But when a court says that a past decision is demonstrably erroneous, it is saying not only that it would have reached a different decision as an original matter, but also that the prior court went beyond the range of indeterminacy created by the relevant source of law. ... Americans from the Founding on believed that court decisions could help "liquidate" or settle the meaning of ambiguous provisions of written law. Later courts generally were supposed to abide by such "liquidations." ... To the extent that the underlying legal provision was determinate, however, courts were not thought to be similarly bound by precedent that misinterpreted it. ... Of the Court's current members, Justices Scalia and Thomas seem to have the most faith in the determinacy of the legal texts that come before the Court. It should come as no surprise that they also seem the most willing to overrule the Court's past decisions. ... Prominent journalists and other commentators suggest that there is some contradiction between these Justices' mantra of "judicial restraint" and any systematic re-examination of precedent. But if one believes in the determinacy of the underlying legal texts, one need not define "judicial restraint" solely in terms of fidelity to precedent; one can also speak of fidelity to the texts themselves. Advantages and disadvantages There are disadvantages and advantages of binding precedent, as noted by scholars and jurists. Criticism of precedent In a 1997 book, attorney Michael Trotter blamed over-reliance by American lawyers on binding and persuasive authority, rather than the merits of the case at hand, as a major factor behind the escalation of legal costs during the 20th century. 
He argued that courts should ban the citation of persuasive precedent from outside their jurisdiction, with two exceptions: - (1) cases where the foreign jurisdiction's law is the subject of the case, or - (2) instances where a litigant intends to ask the highest court of the jurisdiction to overturn binding precedent, and therefore needs to cite persuasive precedent to demonstrate a trend in other jurisdictions. The disadvantages of stare decisis include its rigidity, the complexity of learning the law, the fact that the differences between some cases may be very small and appear illogical, and the slow, incremental pace of change in areas of the law that are in need of major overhaul. Regarding constitutional interpretations, there is concern that over-reliance on the doctrine of stare decisis can be subversive. An erroneous precedent may at first be only slightly inconsistent with the Constitution, and then this error in interpretation can be propagated and increased by further precedent until a result is obtained that is greatly different from the original understanding of the Constitution. Stare decisis is not mandated by the Constitution, and if it causes unconstitutional results then the historical evidence of original understanding can be re-examined. In this opinion, predictable fidelity to the Constitution is more important than fidelity to unconstitutional precedent. See also the living tree doctrine. Agreement with precedent A counter-argument (in favor of the advantages of stare decisis) is that if the legislature wishes to alter the case law (other than constitutional interpretations) by statute, the legislature is empowered to do so. Critics sometimes accuse particular judges of applying the doctrine selectively, invoking it to support precedent that the judge supported anyway, but ignoring it in order to change precedent with which the judge disagreed. There is much discussion about the virtue of using stare decisis. Supporters of the system, such as minimalists, argue that obeying precedent makes decisions "predictable". For example, a business person can be reasonably assured of predicting a decision where the facts of his or her case are sufficiently similar to a case decided previously. This parallels the arguments against retroactive (ex post facto) laws banned by the U.S. Constitution. - Case citation - Case of first impression - Commanding precedent - Custom (law) - First impression - Law of Citations (Roman concept) - Legal opinion - Memorandum opinion - Persuasive precedent - Precedent book - Question of fact - Ratio decidendi - "Precedent". Dictionary.com. Retrieved September 6, 2018. - Black's Law Dictionary, p. 1059 (5th ed. 1979). - Pattinson, Shaun D (2015-03-01). "The Human Rights Act and the doctrine of precedent". Legal Studies. 35 (1): 142–164. doi:10.1111/lest.12049. ISSN 1748-121X. - Adeleye, Gabriel et al. World Dictionary of Foreign Expressions: a Resource for Readers and Writers, page 371 (1999). - Kmiec, Keenan. The Origin and Current Meanings of "Judicial Activism", California Law Review (2004): Some instances of disregarding precedent are almost universally considered inappropriate. For example, in a rare showing of unity in a Supreme Court opinion discussing judicial activism, Justice Stevens wrote that a circuit court "engaged in an indefensible brand of judicial activism" when it "refused to follow" a "controlling precedent" of the Supreme Court. 
The rule that lower courts should abide by controlling precedent, sometimes called "vertical precedent," can safely be called settled law. It appears to be equally well accepted that the act of disregarding vertical precedent qualifies as one kind of judicial activism. "Horizontal precedent," the doctrine requiring a court "to follow its own prior decisions in similar cases," is a more complicated and debatable matter....[A]cademics argue that it is sometimes proper to disregard horizontal precedent. Professor Gary Lawson, for example, has argued that stare decisis itself may be unconstitutional if it requires the Court to adhere to an erroneous reading of the Constitution. "If the Constitution says X and a prior judicial decision says Y, a court has not merely the power, but the obligation, to prefer the Constitution." In the same vein, Professors Ahkil Amar and Vikram Amar have stated, "Our general view is that the Rehnquist Court's articulated theory of stare decisis tends to improperly elevate judicial doctrine over the Constitution itself." It does so, they argue, "by requiring excessive deference to past decisions that themselves may have been misinterpretations of the law of the land. For Lawson, Akhil Amar, and Vikram Amar, dismissing erroneous horizontal precedent would not be judicial activism; instead, it would be appropriate constitutional decisionmaking.— Walton Myers - "Archived copy" (PDF). Archived from the original (PDF) on 2013-05-01. Retrieved 2013-05-01. - Coale & Dyrek, "First Impressions", Appellate Advocate (Winter 2012). - Auto Equity Sales, Inc. v. Superior Court, 57 Cal. 2d 450 (1962). - "Mandatory v. Persuasive". Faculty.law.lsu.edu. Archived from the original on 2012-10-25. Retrieved 2012-11-02. - People v. Leonard, 40 Cal. 4th 1370, 1416 (2007) (Ninth Circuit decisions do not bind Supreme Court of California). - "51 Texas Law Review 1972-1973 Binding Effect of Federal Declaratory Judgments on State Courts Comment". Heinonline.org. Retrieved 2012-11-02. - United States federal courts - Wrabley, Colin E. "Applying Federal Court of Appeals' Precedent: Contrasting Approaches to Applying Court of Appeals' Federal Law Holdings and Erie State Law Predictions, 3 Seton Hall Circuit Rev. 1 (2006)" (PDf). m.reedsmith.com. - Marjorie D. Rombauer, Legal Problem Solving: Analysis, Research and Writing, pp. 22-23 (West Publishing Co., 3d ed. 1978). (Rombauer was a professor of law at the University of Washington.) - Sinclair, Michael. "Precedent, Super-Precedent" Archived 2007-07-04 at the Wayback Machine., George Mason Law Review (14 Geo. Mason L. Rev. 363) (2007) - Landes, William & Posner, Richard. "Legal Precedent: A Theoretical and Empirical Analysis", 19 Journal of Law and Economics 249, 251 (1976). - Hayward, Allison. The Per Curiam Opinion of Steel: Buckley v. Valeo as Superprecedent?, Cato Supreme Court Review 195, 202, (2005-2006). - Maltz, Earl. "Abortion, Precedent, and the Constitution: A Comment on Planned Parenthood of Southeastern Pennsylvania v. Casey", 68 Notre Dame L. Rev. 11 (1992), quoted by Rosen, Jeffrey.So, Do You Believe in 'Superprecedent'?, New York Times (2005-10-30). - Benac, Nancy (2005-09-13). "Roberts Repeatedly Dodges Roe v. Wade". Associated Press. Archived from the original on 2012-08-31. - Coale & Couture, Loud Rules, 34 Pepperdine L. Rev. 3 (2007). - Allegheny General Hospital v. NLRB, 608 F.2d 965, 969-970 (3rd Cir. 1979) (footnote omitted), as quoted in United States Internal Revenue Serv. v. 
Osborne (In re Osborne), 76 F.3d 306, 96-1 U.S. Tax Cas. (CCH) paragr. 50,185 (9th Cir. 1996). - United States Internal Revenue Serv. v. Osborne (In re Osborne), 76 F.3d 306, 96-1 U.S. Tax Cas. (CCH) paragr. 50,185 (9th Cir. 1996). - Elizabeth Y. McCuskey, Clarity and Clarification: Grable Federal Questions in the Eyes of Their Beholders, 91 NEB. L. REV. 387, 427-430 (2012). - James H. Fowler and Sangick Jeon, "The Authority of Supreme Court Precedent," Social Networks (2007), doi:10.1016/j.socnet.2007.05.001 - Hasnas, John. HAYEK, THE COMMON LAW, AND FLUID DRIVE (PDF). 1. NYU Journal of Law & Liberty. pp. 92–93. - Central Green Co. v. United States, 531 U.S. 425 (2001), quoting Humphrey's Executor v. United States, 295 U. S. 602, 627 (1935). - "FindLaw | Cases and Codes". Caselaw.lp.findlaw.com. Retrieved 2012-11-02. - Congressional Research Service, Supreme Court Decisions Overruled by Subsequent Decision Archived 2012-01-13 at the Wayback Machine. (1992). - "FindLaw | Cases and Codes". Caselaw.lp.findlaw.com. Retrieved 2012-11-02. - See O'Gilvie v. United States, 519 U.S. 79, 84 (1996). - Martin, Jacqueline (2005). The English Legal System (4th ed.), p. 25. London: Hodder Arnold. ISBN 0-340-89991-3. - "The Golden Rule". Lawade.com. Retrieved 29 March 2018. - "Part E - The rules of statutory interpretation - The golden rule". Labspace. Retrieved 11 December 2012. - See, e.g., State Oil Co. v. Khan, 93 F.3d 1358 (7th Cir. 1996), in which Judge Richard Posner followed the applicable Supreme Court precedent, while harshly criticizing it, which led the Supreme Court to overrule that precedent in State Oil Co. v. Khan, 522 U.S. 3 (1997); see also the concurring opinion of Chief Judge Walker in National Abortion Federation v. Gonzalez, 437 F. 3d 278 (2d Cir. 2006). - See, e.g., Hilton vs. Carolina Pub. Rys. Comm'n., 502 U.S. 197, 202, 112 S. Ct. 560, 565 (1991)("we will not depart from the doctrine of stare decisis without some compelling justification"). - A Matter of Interpretation. - Thomas, Clarence (1991). [U.S.] Senate Confirmation Hearings. qtd. by Jan Crawford Greenburg on PBS (June 2003) Accessed 8 January 2007 UTC. - Ringel, Jonathan (2004). "Fulton County Daily Report - The Bombshell in the Clarence Thomas Biography". www.dailyreportonline.com. - Nelson, Caleb (2001). "Stare Decisis and Demonstrably Erroneous Precedent" (PDF). Virginia Law Review, 84 Va L. Rev. 1, 2001. Archived from the original (PDF) on 2012-05-22. - Michael H. Trotter, Profit and the Practice of Law: What's Happened to the Legal Profession (Athens, GA: University of Georgia Press, 1997), 161-163. - Berland, David (2011). Note, "Stopping the Pendulum: Why Stare Decisis Should Constrain the Court from Further Modification of the Search Incident to Arrest Exception". University of Illinois Law Review (2011 U. Ill. L. Rev. 695).
Definitions (Being a nerd is actually good. This is useful stuff) Defining words used in law is a crucial part of understanding what the law means for everyone, most specifically the government. It commands us with the intention of upholding our liberties. Defining words used in law brings clarity to everyone and with that clarity, civility will sustain itself within society. So let’s have a look at our roots… Ultra Vires-Intra Vires Ultra vires [Lat, “beyond the powers”] is used in Constitutional Law by the courts who must decide the respective competences of Parliament and provincial legislatures. If one or the other, in enacting a law, goes beyond the jurisdiction allotted to it by the constitution, the court will declare that measure ultra vires. If not, the court will declare it intra vires [Lat, “within the powers”]. These 2 expressions also apply to Administrative Law, the law of local collectivities, corporate law, etc. Many bodies, eg, municipalities, school boards and corporations, have powers delegated to them by Parliament or provincial legislatures. These delegated bodies may, within their established limits, adopt regulations which, to be valid, must not exceed the limits prescribed by law. Under constitutional law, particularly in Canada and the United States, constitutions give federal and provincial or state governments various powers. To go outside those powers would be ultra vires; for example, although the court did not use the term in striking down a federal law in United States v. Lopez on the grounds that it exceeded the Constitutional authority of Congress, the Supreme Court still declared the law to be ultra vires. According to Article 15.2 of the Irish constitution, the Oireachtas (parliament) is the sole lawmaking body in the Republic of Ireland. In the case of CityView Press v AnCo, however, the Irish Supreme Court held that the Oireachtas may delegate certain powers to subordinate bodies through primary legislation, so long as these delegated powers allow the delegatee only to further the principles and policies laid down by the Oireachtas in primary legislation and not craft new principles or policies themselves. Any piece of primary legislation that grants the power to make public policy to a body other than the Oireachtas is unconstitutional; however, as there is a presumption in Irish constitutional law that the Oireachtas acts within the confines of the Constitution, any legislation passed by the Oireachtas must be interpreted in such a way as to be constitutionally valid where possible. Thus, in a number of cases where bodies other than the Oireachtas were found to have used powers granted to them by primary legislation to make public policy, the impugned primary legislation was read in such a way that it would not have the effect of allowing a subordinate body to make public policy. In these cases, the primary legislation was held to be constitutional, but the subordinate or secondary legislation, which amounted to creation of public policy, was held to be ultra vires the primary legislation and was struck down. In UK constitutional law, ultra vires describes patents, ordinances and the like enacted under the prerogative powers of the Crown that contradict statutes enacted by the King-in-Parliament. Almost unheard of in modern times, ultra vires acts by the Crown or its servants were previously a major threat to the rule of law. 
Boddington v British Transport Police is an example of an appeal heard by House of Lords that contested that a bylaw was beyond the powers conferred to it under section 67 of the Transport Act 1962. in right of - (law) a power held as a consequence of another power, or held as a consequence of a relationship.In right of her being president of the Board, she is also the chair of Board meetings. The husband held title to the land in right of his wife (see also jure uxoris). - (law, government) jurisdiction of a person who is head of state of more than one state.The Queen of Canada in Right of Quebec is suing the Queen of Canada in Right of Newfoundland. De Jure and De Facto de jure: “of right”; in accordance with law de facto: exercising power as if legally constituted or lawfully authorized Maxims of Law tell the story… Canada “is” founded upon common law principles that recognize the supremacy of God and the rule of law. In essence, these principles are the rule of law which our courts must follow as they have over many centuries before Canada, and even America for that matter, even existed. These principles were originally written in Latin and are known as Maxims of Law. They are seen as “axioms” in law – indisputable truths. Let us share some of these principles with you: In regards to the supremacy of God and your natural God-given rights: Lex spectat naturae ordinem means, “The law regards the order of nature”. And Jura naturae sunt immutabilia means, “The laws of nature are unchangeable”. The words nature and natural are used in regards to a man having natural rights – those rights that cannot be taken away by anyone or thing. This is Natural Law – the Canadian Bill of Rights and the Canadian Charter of Rights and Freedoms state this (as stated above). Also see unalienable rights below. Maxims related to “consent” and/or “contracting”: Id quod nostrum est sine facto nostro ad alium transferri non potest means, “That which is ours cannot be transferred to another without our act (consent)”. And Nil consensui tam contrarium est quam vis atque metus means, “Nothing is so opposed to consent as force and fear”. Maxims related to freedom and slavery: Omnes homines aut liberi sunt aut servi means, “All men are free men or slaves”. Clearly, this states that there is “no in between” here, you are either free or you are a slave. And Libertas omnibus rebus favorabilior est which means, “Liberty is favored over all things”. This indisputable rule of law is one that every “lawful” court must follow to ensure our liberties remain intact. We have de facto (unlawful) courts that will not operate in this fashion – this needs to change. Maxims relating to freeing one’s self from a tyrannical system: Nihil tam naturale est, quam eo genere quidque dissolvere, quo colligatum est; ideo verborum obligatio verbis tollitur; nudi consensus obligatio contrario consensus dissolvitur means, “Nothing is so natural as to dissolve anything in the way in which it was bound together; therefore the obligation of words is taken away by words; the obligation of mere consent dissolved by the contrary consent”. Or more to the point, Non refert verbis an factis fit revocation which means, “It does not matter whether a revocation is made by words or by acts”. This means you can lawfully remove yourself from an unlawful system. 
If the people operating within that system do not recognize your position in common law and force you to do things against your will – ignoring the exercising of your God-given Natural Rights – then you are a slave!! With the people having little knowledge of the origins of their “legal name” – the ALL CAPITAL name derived from the Birth Certificate (ask yourself what this “legal” document really is?) – they have unwittingly waived their natural God-given rights with its use. This “language” is another language outside of our “inherent” common law language. This is called DOG_LATIN and Black’s Law Dictionary Revised 4th Edition defines it as: “The Latin of illiterate persons.” Capitus Diminutio Maxima means, “The highest or most comprensive loss of status. This occurred when a man’s condition was changed from one of freedom to one of bondage, when he became a slave. It swept away with it all rights of citizenship and all family rights”. Finally, Misera est servitus, ubi jus est vagum aut incertum means, “It is a wretched state of slavery which subsists where the law is vague or uncertain”. Folks, what we have discussed here should be mandatory education in our schools. So it goes well beyond vague or uncertainty as the system we have breeds ignorance. Ask yourself, “How can we protect and preserve our freedoms if we don’t know the roots (above) of our freedoms and the basic court procedures (the tool) to preserve them?” It’s time for change people! “Ask not what your country can do for you, ask what you can do for your country.” ~ John F. Kennedy (See our quotes page for further motivation) Note: These maxims can be found in Common Law dictionaries such as Black’s and Bouvier’s. Further reading on the subjects of Maxims and Natural Law can be viewed with the following links: Maxims of Common Law and The Search for Natural Law. These 2 documents were written by a successful lawyer in the United States who sells a law course called Jurisdictionary. Unalienable: 1)The state of a thing or right which cannot be sold. 2)Things which are not in commerce, as public roads, are in their nature unalienable. Some things are unalienable, in consequence of particular provisions in the law forbidding their sale or transfer, as pensions granted by the government. The natural rights of life and liberty are unalienable. ~Bouviers Law Dictionary, 1856 Edition Unalienable: incapable of being alienated, that is, sold and transferred. ~Black’s Law Dictionary, Sixth Edition You can not surrender, sell or transfer unalienable rights, they are a gift from the Creator to the individual and can not under any circumstances be surrendered or taken. All individual’s have unalienable rights. Inalienable rights: Rights which are not capable of being surrendered or transferred without the consent of the one possessing such rights. ~Morrison v. State, Mo. App., 252 S.W.2d 97, 101 You can surrender, sell or transfer inalienable rights if you consent either actually or constructively. Inalienable rights are not inherent in man and can be alienated by government. Persons have inalienable rights. Most state constitutions recognize only inalienable rights. We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness. That to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed. 
That whenever any form of government becomes destructive to these ends, it is the right of the people to alter or to abolish it, and to institute new government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their safety and happiness. ~Declaration Of Independence, 1776 "Persons are of two kinds, natural and artificial. A natural person is a human being. Artificial persons include a collection or succession of natural persons forming a corporation; a collection of property to which the law attributes the capacity of having rights and duties. The latter class of artificial persons is recognized only to a limited extent in our law." ~Black's Law Dictionary Revised 4th Edition "Husband and wife are considered one person in law" (Vir et uxor censentur in lege una persona) ~Black's Law Dictionary 7th Edition INDIVIDUAL "Under the Canadian Bill of Rights, the right of the individual extends to natural persons only, and not to corporations. ~R. v. Colgate-Palmolive Ltd. (1972), 8 C.C.C. (2d) 40 (Ont.Co.Ct.)…" "Every person is a human being, but not every human being a person" — Omnis persona est homo, sed non vicissim. "Man" (homo) is a term of nature; "person" (persona), a term of civil law — Homo vocabulum est naturae; persona juris civilis. Void ab initio: Void from the beginning; from the first fact. Void on its face, a nullity, without force and effect. Actus legis nemini facit injuriam: An act of law does injury to no man. Patriate: verb (transitive) 1. to bring under the authority of an autonomous country, for example as in the transfer of the Canadian constitution from UK to Canadian responsibility for the first time.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2006 July 26 Explanation: Spectacular explosions keep occurring in the binary star system named RS Ophiuchi. Every 20 years or so, the red giant star dumps enough hydrogen gas onto its companion white dwarf star to set off a brilliant thermonuclear explosion on the white dwarf's surface. At about 2,000 light years distant, the resulting nova explosions cause the RS Oph system to brighten up by a huge factor and become visible to the unaided eye. The red giant star is depicted on the right of the above drawing, while the white dwarf is at the center of the bright accretion disk on the left. As the stars orbit each other, a stream of gas moves from the giant star to the white dwarf. Astronomers speculate that at some time in the next 100,000 years, enough matter will have accumulated on the white dwarf to push it over the Chandrasekhar Limit, causing a much more powerful and final explosion known as a supernova.
Difference Between Tidal Wave and Tsunami Tidal Wave vs Tsunami Most people assume that there is no difference between a tidal wave and a tsunami, and often use the words interchangeably. This is inaccurate, and while both kinds of wave carry the power of destruction, the greatest difference is how each is born. A tidal wave is directly influenced by the atmosphere. The gravitational interplay between the sun, the moon, and the Earth causes a disturbance in the sea, and a 'shallow water wave' is formed. Shallow water waves imply that a tidal wave develops much closer to the shoreline of the land mass that will ultimately be in its path. However, because of the shallow depth at which it originates, it is possible for a tidal wave to 'burn itself out' before it reaches the land. The origin of the tsunami is much deeper. It is caused by a deep disturbance along the ocean floor. This disturbance usually comes from an underwater earthquake, or even an underwater landslide. The deeper origin of the tsunami creates a more powerful wave. It will often carry itself across hundreds, or even thousands, of miles of ocean before making landfall. The tidal wave has what we would call regional preferences. It is unlikely that a tidal wave would make landfall in areas of temperate climates, or northern countries. The various elements that cause its development come together mainly in lower latitudes, creating a higher possibility of landfall in places like the West Indies, for example. The tidal wave follows the currents, and therefore is only able to strike areas within the current flow. The tsunami has the potential to develop anywhere. The placement of the earthquake or landslide, or even the unique event of an underwater eruption, compels the start of the wave. Just like the tidal wave, the tsunami also follows the currents. Yet, since the underwater event can happen within a current flow heading toward the US, Canada, or Great Britain, a tsunami can hit one of these usually unaffected countries. Most people who do understand the difference between the two waves are inclined to believe that the tsunami is more destructive than the tidal wave. While this is a correct assumption in many cases, it is not true as a blanket statement. The size of the waves is determined by many varying factors, including the wind's direction and speed.
policy experts (see White, 1957; United Nations, 1970). Although many local and state interests supported federal dam, lock, levee, and canal construction, efforts to create independent, executive authorities to develop river basins generally were resisted by both Congress and the states. Origins of the river basin planning concept in the United States date back to the observations and ideas of John Wesley Powell and his studies in the western United States, as well as President Theodore Roosevelt, who, when transmitting the Inland Waterways Commission preliminary report of 1908, stated, "Each river system from its headwaters in the forest to its mouth on the coast, is a unit and should be treated as such" (White, 1957). The concept of integrating water development plans and projects across a river system was brought into focus at the basin scale for rivers such as the Allegheny and Monongahela, the Columbia, and the Missouri. The basin program that commanded the most attention in this era was for the Tennessee River. Development of the Tennessee Valley region, via the Tennessee Valley Authority established in 1933, was promoted as a model of unified river basin development, both domestically and abroad. President Roosevelt planned to apply the concept in the Missouri River basin, but the states and Congress blocked efforts to create a similar federal authority for the Missouri in the 1944 Flood Control Act and the "Pick Sloan" legislation (Ferrell, 1993; NRC, 2002). Following the New Deal era, federal support for large dam construction began to wane in the 1950s. The Eisenhower Administration (1953-1961) followed a "no new starts" policy and stressed increased local responsibilities for smaller projects. A new era of dam building was initiated by the Kennedy administration, and new Corps dams were built in the 1960s in the southeastern and midwestern United States. The Johnson Administration placed a high priority on river basin planning, and the Water Resources Planning Act of 1965 created seven river basin commissions coordinated by a federal Water Resources Council (WRC). However, because Congress was funding fewer dams, levees, and canals, these commissions had no clearly defined role, as noted by the National Water Commission (NWC), which operated between 1968 and 1973 (NWC, 1973). The NWC also looked ahead to the changing roles of the Corps of Engineers. The NWC 1973 report identified many of the problems the Corps was facing in trying to adapt a new-project construction model to changing water demands. For example, the commission noted that "The Corps . . . is not likely to exist as an agency specializing in the construction of great engi-
Most programming languages offer buffered I/O features by default, since buffering makes generating output much more efficient. These buffered I/O facilities typically "Just Work" out of the box. But sometimes they don't. When we say they "don't work" what we mean is that excess buffering occurs, causing data not to be printed in a timely manner. This is typically fixed by explicitly putting a "flush" call in the code, e.g. with something like sys.stdout.flush() in Python, fflush(3) in C, or std::flush in C++. Frequently when people are confused about the rules of buffering their code becomes littered with unnecessary flush statements, an example of cargo-cult programming. In this post I'll explain the buffering rules for stdout, so you'll never be confused again.

Why Buffering Exists

As already discussed, the problem with buffering is that it can cause output to be delayed. So why does it exist at all? At the underlying system call level, data is written to file descriptors using write(2). This system call takes a file descriptor and a byte buffer, and writes the data in the byte buffer to the file descriptor. Most languages have very fast function calls. The overhead for a function call in a compiled language like C or C++ is just a few CPU cycles. In these languages it's common to think of function call overhead as negligible, and only in extreme cases are functions marked as inline. However, a system call is much more expensive. A system call on Linux takes closer to a thousand CPU cycles and implies a context switch. Thus system calls are significantly more expensive than regular userspace function calls. The main reason why buffering exists is to amortize the cost of these system calls. This is primarily important when the program is doing a lot of these write calls, as the amortization is only effective when the system call overhead is a significant percentage of the program's time. Let's consider what happens when you use grep to search for a pattern in an input file (or stdin). Suppose you're grepping nginx logs for a pattern—say lines from a particular IP address. A typical line length in these nginx logs might be 100 characters. That means that if buffering wasn't used, for each matching line in the input file that grep needs to print, it would invoke the write(2) system call. This would happen over and over again, and each time the average buffer size would be 100 bytes. If, instead, a 4096-byte buffer size is used then data won't be flushed until the 4096-byte buffer fills up. This means that in this mode the grep command would wait until it had about 40 matching lines before the byte buffer filled up. Then it would flush the buffer by invoking write(2) with a pointer to the 4096-byte buffer. This effectively transforms forty system calls into one, yielding a 40x decrease in system call overhead. Not bad! If the grep command is sending a lot of data to stdout you won't even notice the buffering delay. And a grep command matching a simple pattern can easily spend more time trying to print data than actually filtering the input data. But suppose instead the grep pattern occurs very infrequently. Suppose it's so uncommon that a matching input line is only found once every 10 seconds. Then we'd have to wait about 400 seconds (more than six minutes!) before seeing any output, even though grep actually found data within the first ten seconds. This buffering can be especially insidious in certain shell pipelines. For instance, suppose we want to print the first matching line in a log file. 
The invocation might be:

# BAD: grep will buffer output before sending it to head
grep RAREPATTERN /var/log/mylog.txt | head -n 1

Going with the previous example, we would like this command to complete within ten seconds, since that's the average amount of time it will take grep to find the input pattern in this file. But if buffering is enabled then the pipeline will instead take many minutes to run. In other words, in this example buffering makes the program strictly slower, not faster! Even in cases where the output isn't being limited by a command like head, if output is very infrequent then buffering can be extremely annoying and provide essentially zero performance improvement.

When Programs Buffer, And When They Don't

There are typically three modes for buffering:
- If a file descriptor is unbuffered then no buffering occurs whatsoever, and function calls that read or write data occur immediately (and will block).
- If a file descriptor is fully-buffered then a fixed-size buffer is used, and read or write calls simply read or write from the buffer. The buffer isn't flushed until it fills up.
- If a file descriptor is line-buffered then the buffering waits until it sees a newline character. So data will buffer and buffer until a \n is seen, and then all of the data that buffered is flushed at that point in time. In reality there's typically a maximum size on the buffer (just as in the fully-buffered case), so the rule is actually more like "buffer until a newline character is seen or 4096 bytes of data are encountered, whichever occurs first".

GNU libc (glibc) uses the following rules for buffering:

| Stream | Type | Behavior |
| stdout (a TTY) | output | line-buffered |
| stdout (not a TTY) | output | fully-buffered |
| stderr | output | unbuffered |

As you can see, the behavior for stdout is a bit unusual: the exact behavior for stdout depends on whether or not it appears to be a TTY. The rationale here is that when stdout is a TTY it means a user is likely watching the command run and waiting for output, and therefore printing data in a timely manner is most important. On the other hand, if the output isn't a TTY the assumption is that the data is being processed or saved for later use, and therefore efficiency is more important. Most other programming languages have exactly the same rules: either because they implement their output routines as calls to buffered libc functions (such as printf(3)), or because they actually implement the same logic.

More Grep Examples

Grep is a special case for buffering because a grep command can turn a large amount of input data into a slow and small stream of output data. Therefore grep is particularly susceptible to buffering frustration. Knowing when grep will buffer data is easy: it follows the glibc buffering rules described above. If the output of grep is a TTY then it will be line-buffered. If the output of grep is sent to a file or a pipe, it will be fully-buffered, as the output destination is not a TTY. This grep command will be line-buffered, since stdout is a TTY:

# line-buffered
grep RAREPATTERN /var/log/mylog.txt

If stdout is redirected to a file then stdout is no longer a TTY, and output will be fully-buffered. This is usually fine:

# fully-buffered
grep RAREPATTERN /var/log/mylog.txt >output.txt

One situation where the previous example isn't ideal is if you have another terminal session that is trying to tail -f the output file. Suppose we want to search the file backwards by piping tac(1) to grep. 
This will be line-buffered, as grep is still the last command in the pipeline and thus stdout is still a TTY:

# line-buffered
tac /var/log/mylog.txt | grep RAREPATTERN

But what if we want to filter the output of grep? If we use a shell pipeline this will cause the grep output to become buffered. For instance, consider the following:

# fully-buffered
grep RAREPATTERN /var/log/mylog.txt | cut -f1

The issue here is that when we put a pipe after the grep command, grep's stdout is now the file descriptor for a pipe. Pipes are not TTYs, and thus grep will go into fully-buffered mode. For the grep command the solution is to use the --line-buffered option to grep:

# forced line-buffering
grep --line-buffered RAREPATTERN /var/log/mylog.txt | cut -f1

As noted earlier, you may also want to use this when redirecting grep output to a file and then consuming the file in another session using tail -f. If you're writing your own C code, you can control the buffering for streams using setbuf(3) and setvbuf(3). Using this you can force behavior such as always line-buffering stdout. You can also use this for disk-backed files, so you can do things like write a file to disk and have fprintf(3) be automatically line-buffered. GNU coreutils comes with a program called stdbuf(1) that allows you to change the default buffering behavior of programs you don't control. There are a few caveats for target programs: the programs must use C FILE* streams, and the programs can't use the explicit buffer control routines such as setbuf(3) themselves. There's one further gotcha that typically pops up in C++ programs. Many C++ programmers are accustomed to using std::endl for newlines. For instance,

// Two ways to print output with a newline ending.
std::cout << "Hello, world!\n";
std::cout << "Hello, world!" << std::endl;

These are not the same. The difference is that when std::endl is used it automatically forces the output stream to be flushed, regardless of the output mode of the stream. For instance,

// Subject to normal buffering rules.
std::cout << "Hello, world!\n";

// These are equivalent and are *always* flushed.
std::cout << "Hello, world!\n" << std::flush;
std::cout << "Hello, world!" << std::endl;

Thus if you're using std::endl a lot then the usual buffering rules don't apply: std::endl is effectively forcing line-buffering! This can be important in certain performance-sensitive programs, since using std::endl can inadvertently disable buffering. My suggestion is: only use std::endl when you actually want to flush the output stream. If you don't know if the stream should be forcibly flushed then stick to using a regular \n sequence in your code.
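To make the setbuf(3)/setvbuf(3) idea concrete, here is a minimal C sketch; the "output.log" path and the use of BUFSIZ are arbitrary illustrative choices, not anything glibc requires. It forces line buffering on stdout and on a disk-backed stream, so anything ending in a newline is flushed immediately even when stdout is redirected to a file or a pipe:

#include <stdio.h>

int main(void) {
    /* Request line buffering on stdout before any output is written.
     * Passing NULL lets glibc allocate the buffer; _IOLBF means line-buffered. */
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);

    /* The same call works for an ordinary disk-backed FILE*. */
    FILE *log = fopen("output.log", "w");   /* hypothetical example path */
    if (log != NULL) {
        setvbuf(log, NULL, _IOLBF, BUFSIZ);
        fprintf(log, "flushed as soon as the newline is written\n");
        fclose(log);
    }

    printf("line-buffered even when piped\n");  /* flushed at the newline, not at exit */
    return 0;
}

At the shell level, the rough equivalent for programs you don't control is coreutils' stdbuf, e.g. stdbuf -oL somecommand | cut -f1, subject to the FILE*-stream caveats mentioned above (and unnecessary for grep itself, which already provides --line-buffered).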
A dry climate is defined by yearly precipitation that is less than the loss of water through evaporation. Dryness is not only related to annual rainfall total but is also a function of evaporation, which is closely dependent on temperature. In a dry climate, the rainfall ranges between 30 inches and 50 inches per year. The perfect example of a dry climate is a tropical savanna. These areas have two seasons in a year: namely dry and wet seasons. This kind of climate is found in Africa, India, Malaysia, Australia and northern parts of South America with average temperatures at or above 64 °F.
Combat Bullying with PBS LearningMedia Resources Help your students to recognize bullying and respond to conflict thoughtfully by integrating these PBS LearningMedia resources into your lesson plans. Register today for additional content about cyber-bullying, communication, and cultural diversity. Conflict Resolution: Thinking it Through Grades 7-13+ | Video | Communication Skills In this video clip, a group of high school students participate in a conflict resolution workshop led by an experienced facilitator and social worker. After each student describes a recent conflict with a friend or family member, students explore ways they commonly handle conflicts. Use this resource as a conversation starter in your own class. Grades 7-12 | Video | Types of Bullying Cyber-bullying occurs when one or more children target another through technology. Learn how to stop cyber-bullying in a variety of ways. Show your students how bullies use text messages and the Internet to threaten others, and discuss the steps that they can take to address this form of harassment. The Teenage Brain Grades 6-8 | Video | Biology & Behavior Why do teenagers act the way they do? This video segment from FRONTLINE: “Inside the Teenage Brain” explores the work scientists are doing to explain some of the mysteries of teenage behavior. Ask students to consider how natural changes in the brain play a role in the way teens relate to one another. Succeeding in School Grades 5-12 | Video | Story of Success Inspire your students with the personal story of Omarina, a student who faced crippling odds but was able to succeed with support from her school community. Use this resource to illustrate the impact of a supportive learning environment. Martha Speaks: Martha Walks the Dog Grades K-1, 4-5 | Video In this video from MARTHA SPEAKS, there’s a new dog in town and he is loud and mean! Even though nothing seems to calm him down, Martha is determined to make friends. Use this resource to show your young students that first impressions can be deceiving. Dinosaur Train: Including Friends PreK-1 | Video Watch a “Dinosaur Train” video clip in which Tiny the Pteranodon feels left out from the Theropod club. Talk about feeling like an outsider with your class. Students can recognize what actions a friend can take to help someone feel included and valued.
AMONG the most pernicious forms of pollution threatening the health of humans and wildlife everywhere are a group of chemicals that have far-reaching effects on the environment. These include such things as dioxins and various agricultural chemicals like DDT. Collectively they are known as the “dirty dozen”. At a United Nations meeting in Stockholm on May 22nd-23rd more than 100 countries adopted a plan either to immediately ban or to phase out these twelve pollutants. Although it would make the world a much safer place, the clean-up will be hard to achieve. The dozen chemicals, known officially as persistent organic pollutants (POPs), are highly toxic and cause disease, birth defects and possibly tens of thousands of deaths every year. Many are cancer-causing agents and some disrupt the nervous system. What makes the pollutants particularly dangerous is that it can take decades for them to break down. Nor do the dozen chemicals respect borders. Once in the environment, POPs can circulate around the world. Through a process called bioaccumulation they are then absorbed into the tissue of animals, where they can reach damaging, even lethal, concentrations as they work their way up the food chain to humans. Some of the chemicals, such as dioxins, are by-products of incineration and other industrial processes. The countries that met in Stockholm plan to continually reduce these emissions. Other chemicals will be banned immediately because safer versions are already available, although because these alternatives usually cost more the bans will be hard to enforce in poor countries. Life giver and taker The elimination of other toxic pollutants will have to wait until safer alternatives can be found or widely applied by countries. DDT, for instance, is already largely banned in rich countries but is still widely used in some tropical regions to control mosquitoes which carry malaria, still a big killer. So, while DDT may, on balance, save lives in hot countries, its residue, along with that of other POPs, can build up in colder climates where its effects are wholly damaging. In the Arctic, for instance, there is a lack of soil and vegetation to help absorb pollutants. This means toxins readily find their way into fish, sea mammals and bird eggs, which make up a large part of the diet of the native Inuit people. As a result, the concentration of POPs in Inuit can reach many thousand times that found in people elsewhere. The chemicals are also passed on by Inuit mothers to their children. A similar exemption will be made for polychlorinated biphenyls, or PCBs, which are widely used in electrical transformers and other equipment. PCBs are no longer produced, but because such a vast amount of equipment containing the chemicals is still in operation, governments will be able to carry on using such equipment until 2025, providing they maintain it to prevent leaks. Despite the exemptions and the difficulty of enforcing bans, some environmental groups think the Stockholm treaty is at least a step in the right direction. The plan is that eventually other toxic chemicals will be added to the list. But first it will be up to the governments of individual countries to ratify the treaty. Such things can take years, even decades, although some UN officials are confident that the Stockholm agreement may come into force in about four years. This happens once 50 nations have ratified it. 
Such confidence may seem misplaced given the current gloom about the prospects of international co-operation on environmental issues, the direct result of George Bush's decision in March to reject the Kyoto protocol on global warming. But the Stockholm agreement may be different. Mr Bush has already said that the United States will ratify the treaty because the chemicals “can harm Americans even when released abroad.” Cynics suggest his enthusiasm for the POPs treaty is mostly to help boost his tattered environmental credentials. The treaty, after all, will have little immediate effect on America. It has already banned or greatly restricted most of the dirty dozen. Nevertheless the very fact that the Bush administration is supporting the treaty should be welcomed, both on its own merits, and because it should make it more difficult for the administration to act unilaterally on other environmental issues without facing accusations of self-serving inconsistency.
Problem *3.111 [Difficulty: 4] Open-Ended Problem Statement: Consider a conical funnel held upside down and submerged slowly in a container of water. Discuss the force needed to submerge the funnel if the spout is open to the atmosphere. Compare with the force needed to submerge the funnel when the spout opening is blocked by a rubber stopper. Discussion: Let the weight of the funnel in air be W_a. Assume the funnel is held with its spout vertical and the conical section down. Then W_a will also act vertically. Two possible cases are with the funnel spout open to the atmosphere or with the funnel spout sealed. With the funnel spout open to the atmosphere, the pressures inside and outside the funnel are equal, so no net pressure force acts on the funnel. The force needed to support the funnel will remain constant until it first contacts the water. Then a buoyancy force will act vertically upward on every element of volume located beneath the water surface. The first contact of the funnel with the water will be at the widest part of the conical section.
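To make the two cases concrete, here is a rough numerical sketch. It is not part of the original problem or its solution: the funnel dimensions, shell volume, and weight below are invented for illustration, and the trapped air is treated as incompressible, so the numbers only show the qualitative difference between the open and the plugged spout.

```python
# Illustrative estimate only -- all dimensions, the shell volume, and the
# funnel weight are assumed values, and air compression is neglected.
import math

RHO_WATER = 1000.0   # kg/m^3, density of water
G = 9.81             # m/s^2, gravitational acceleration

R = 0.08             # m, cone mouth radius (assumed)
H = 0.12             # m, cone height (assumed)
V_shell = 3.0e-5     # m^3, volume of the funnel material itself (assumed)
W_air = 2.0          # N, weight of the funnel in air, W_a (assumed)

V_cone = math.pi * R**2 * H / 3.0   # interior volume enclosed by the cone

# Case 1: spout open -- water floods the interior, so only the shell material
# displaces water and the buoyancy force is small.
F_buoy_open = RHO_WATER * G * V_shell
F_hold_open = W_air - F_buoy_open        # force still needed to hold it up

# Case 2: spout plugged -- trapped air keeps water out of the cone, so the
# whole enclosed volume is displaced once the funnel is fully submerged.
F_buoy_closed = RHO_WATER * G * (V_shell + V_cone)
F_push_closed = F_buoy_closed - W_air    # downward push needed to submerge it

print(f"open spout : hold up about  {F_hold_open:5.2f} N")
print(f"plugged    : push down about {F_push_closed:5.2f} N")
```

With these made-up numbers the plugged funnel needs a downward push several times larger than its own weight, while the open funnel still needs to be held up, which is the qualitative contrast the problem statement asks for.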
How to become an effective Leader Within psychology the topic of leadership has been extensively studied over the last hundred years. Leadership is defined as the process by which an individual influences a group of individuals to achieve a common goal. Leaders have two main responsibilities: to ensure the demands of the organization are satisfied and to ensure the needs of the group are satisfied. The first person to study the topic of leadership was Thomas Carlyle, who developed the great man theory of leadership during the 19th century. This theory argued that great leaders are born and not made. It links in with trait theories, which argue that leaders are born with specific characteristics which make them who they are. In the 1920s the trait theory was considered to be one of the most influential. After the Second World War the trait theory lost favor and researchers started to recognize how important the environment was for developing leaders. Recently researchers have adopted an interactional approach to studying leadership which looks at the relationship between the environment and the leader's characteristics. Researchers believe that great leaders have common personality traits that are appropriate to specific leadership roles. However, it can be argued that no one set of characteristics ensures successful leadership. Therefore effective leadership styles must fit a specific situation. To understand leadership we shall look at two main theories: the Multidimensional Model of Leadership (MML) and transformational leadership. The MML is an interactional model which has been developed specifically for sport and physical activity. The model argues that leadership style will vary depending on the characteristics of the athletes and the constraints of the situation. In the model, leader characteristics (age, gender, experience) compose the personal factors, while situational and member characteristics (age, gender, ability) are the situational factors. The model argues that a positive outcome is more likely if the three aspects of leader behaviour agree. If the leader behaves appropriately for the situation and these behaviours match the preferences of the group members, the group members will achieve successful performance. This model takes into account the different leadership styles which a person can adopt and how the leader will change his or her style depending on the situation. The required leader behavior refers to how a person is expected to act in a particular situation. For example, a PE teacher is expected to act in a certain way in front of his pupils. Preferred leader behavior is dependent on the group members: age, ability and gender will all influence a member’s preference for coaching. For example, older athletes might prefer a coach who is more autocratic whereas younger athletes might prefer a coach who is more democratic. Finally, the actual leader behavior is simply the behavior that the leader exhibits. When trying to understand the topic of leadership we must realise that a leader's behaviour is very much dependent on the situation and the members' characteristics. Research has shown differences in gender and age in leadership-style preferences, and therefore we must take this into account when adopting a leadership style. Research has shown that women prefer leaders who adopt a democratic style and involve members of the team, while men prefer leaders who adopt an autocratic style (Horn, 2002).
As well as this, it has been found that as people get older they prefer more autocratic leaders who are willing to lead people in the right direction. However, research has shown that 10 to 13 year olds and 14 to 17 year olds do not differ in their preferences for leadership style. It was found that both age groups preferred leaders who gave positive feedback, technical and tactical instructions, and social support. When coaching we must acknowledge these differences and be sure to understand the different environments in which we work. In relation to an exercise setting it can be argued that the MML model of leadership lacks research. Chemers (2001) developed an MML model of leadership for an exercise setting that consisted of 3 components: image maintenance, relationship development and resource deployment. Image maintenance refers to how the exercise leader uses his image to arouse feelings of trust in their followers. Relationship development refers to the relationships that the leader will develop with followers to help them achieve their goals. Finally, resource deployment refers to how the leader uses his knowledge and skills to help the group achieve their goals. A study conducted by Estabrooks et al (2004) looked at leadership style in an exercise setting and found that older participants preferred leaders who were qualified, had a good bond with followers and used their knowledge to get the best out of the group. Therefore if you are a personal trainer who works in large groups, be sure to take these 3 components into account. Make sure you try to create a good bond among your followers so that you can help get the best out of the exercise group. Recently there has been a lot of research focusing on transformational leadership. Transformational leadership is based on developing and selling a vision for what is possible. Transformational leaders initiate change by challenging the organizational status quo. This leadership style is important in times of change, growth and crisis and is most successful within organizations that thrive on change and innovation. “There are four components of a transformational leader: (a) idealized influence or charisma reflected in the ability to inspire others through personality and vision, prompting followers to exert extra effort, persistence, and determination to achieve extraordinary results; (b) inspirational motivation or the ability to clearly articulate shared goals and a vision for the organization, providing inspiration and motivation to followers; (c) intellectual stimulation or the ability to encourage innovation and creativity from followers and (d) individual consideration or the creation of a supportive work environment that recognizes individual differences” (Vidic & Burton, 2011, p. 280). Transformational leadership has been studied across many settings, with results consistently showing the effectiveness of this leadership in effecting change. Research has found that transformational leaders increase well-being, self-efficacy and group cohesion. From this research it can be seen that there are many factors that affect a person's leadership style. We must realize that one size does not fit all and that the environment, member characteristics and leader characteristics play an important part in leadership. By reading this article I hope that I have given you the opportunity to reflect on the leadership style which you adopt. As a coach, manager or personal trainer, try and use this article in a positive way to improve your leadership style.
Try to look at aspects of the Transformational leader which you can use to develop as a leader.
1. In 1996, NASA researchers reported that a meteorite contained evidence that life once existed on Mars. But others argued that the evidence was most likely caused by inorganic processes that could be recreated artificially. A second group of NASA researchers (containing some scientists from the first study) has reexamined the 1996 findings using a new analysis technique called ion beam milling, and they again claim that living organisms are most likely responsible for the materials found in the meteorite. The new study not only reexamined the contents of the meteorite itself, named ALH84001, but tested the alternative, non-biological hypothesis. "In this study, we interpret our results to suggest that the in situ inorganic hypotheses are inconsistent with the data, and thus infer that the biogenic hypothesis is still a viable explanation," says Kathie Thomas-Keprta, a senior scientist for Barrios Technology at Johnson Space Center in Houston. A 47-page PDF of the new analysis of the Mars "fossilized life" rock is available. “The evidence supporting the possibility of past life on Mars has been slowly building up during the past decade,” said McKay, NASA chief scientist for exploration and astrobiology, JSC. “This evidence includes signs of past surface water including remains of rivers, lakes and possibly oceans, signs of current water near or at the surface, water-derived deposits of clay minerals and carbonates in old terrain, and the recent release of methane into the Martian atmosphere, a finding that may have several explanations, including the presence of microbial life, the main source of methane on Earth." 2. "By being stuck at Troy [crater on Mars], Spirit [Mars Robotic Rover] has been able to teach us about the modern water cycle on Mars." Indeed, Spirit's saga at Troy has given scientists material evidence of past water on Mars on two time scales: ancient volcanic times, and cycles ongoing to the present day.
Corns and calluses are areas of hard, thickened skin that develop when the skin is exposed to excessive pressure or friction. They commonly occur on the feet and can cause pain and discomfort when you walk. Corns are small circles of thick skin that usually develop on the tops and sides of toes or on the sole of the foot. However, they can occur anywhere. Corns are often caused by: - wearing shoes that fit poorly - shoes that are too loose can allow your foot to slide and rub - certain shoe designs that place excessive pressure on an area of the foot - for example, high-heeled shoes can squeeze the toes Corns often occur on bony feet as there's a lack of natural cushioning. They can also develop as a symptom of another foot problem, such as: - a bunion - where the joint of the big toe sticks outwards as the big toe begins to point towards the other toes on the same foot - hammer toe - where the toe is bent at the middle joint Calluses are hard, rough areas of skin that are often yellowish in colour. They can develop on the: - feet - usually around the heel area or on the skin under the ball of the foot - palms of the hands Calluses are larger than corns and don't have such a well-defined edge. As callused skin is thick, it's often less sensitive to touch than the surrounding skin. Calluses develop when the skin rubs against something, such as a bone, a shoe or the ground. They often form over the ball of your foot because this area takes most of your weight when you walk. Activities that put repeated pressure on the foot, such as running or walking barefoot, can cause calluses to form. Athletes are particularly susceptible to them. Other possible causes of calluses include: - dry skin - reduced fatty padding - elderly people have less fatty tissue in their skin - regularly holding objects such as a hammer or racquet Treating corns and calluses If you have a corn on your foot, you should see a podiatrist, also known as a chiropodist, who can advise you about treatment. Your GP may be able to refer you on the NHS, but each case is decided by your local CCG. If your condition is unlikely to affect your health or mobility, you may not be eligible for NHS treatment. Corns on feet won't get better unless the cause of the pressure is removed. If the cause isn't removed, the skin could become thicker and more painful over time. A corn is a symptom of an underlying problem. You should only treat it yourself if you know the cause and you've spoken to a specialist about the best way to manage it. Over-the-counter treatments for corns, such as corn plasters, are available from pharmacists. However, they don't treat the cause of the corn and may affect the normal, thinner skin surrounding the corn. Corn plasters may not be suitable for certain people, such as those with diabetes, circulation problems, or fragile skin. As with corns, you should only treat calluses yourself after a podiatrist has identified the cause and advised you about treatment. The podiatrist may be able to treat corns or badly callused areas using a sharp blade to remove the thickened area of skin. This is painless and should help reduce pain and discomfort. They can also provide advice on self-care and prescribe special insoles. Read more about treating corns and calluses. Preventing corns and calluses You can also help prevent corns and calluses by looking after your feet and choosing the right shoes to wear. 
Follow the advice below to help stop any hard dry skin developing: - Dry your feet thoroughly after washing them and apply a special moisturising foot cream (not body lotion). - Use a pumice stone or foot file regularly to gently remove hard skin. If you use a pumice stone, make sure it dries completely between uses and doesn't harbour bacteria. - Wear comfortable footwear that fits properly. Always shop for shoes in the afternoon, because your feet swell as the day goes on. This means shoes that fit in the afternoon will be comfortable. You should be able to move your toes inside the shoe with a small gap between the front of the shoe and your longest toe. If possible, avoid wearing heels as they increase the pressure on the front of your feet. - Don't put up with foot pain as if it's normal. Either see a podiatrist directly or go to your GP, who may refer you to a podiatrist. They'll be able to investigate the underlying cause of your foot pain. Treating corns and calluses Treating painful corns and calluses involves removing the cause of the pressure or friction and getting rid of the thickened skin. You may be advised to wear comfortable flat shoes instead of high-heeled shoes. If calluses develop on the hands, wearing protective gloves during repetitive tasks will give the affected area time to heal. If you're not sure what's causing a corn or callus, see your GP. They may refer you to a podiatrist (also called a chiropodist). Podiatrists specialise in diagnosing and treating foot problems. They'll examine the affected area and recommend appropriate treatment. See below for more information about podiatry and how to access it on the NHS. Hard skin removal A podiatrist may cut away some of the thickened skin using a sharp blade called a scalpel. This helps to relieve pressure on the tissue underneath. Don't try to cut the corn or callus yourself. You could make it more painful and it might become infected. You can use a pumice stone or foot file to rub down skin that's getting thick. Read more about preventing corns and calluses. Foot care products Pharmacies sell a range of products that allow thick, hard skin to heal and excessive pressure to be redistributed. Ask your GP, podiatrist or pharmacist to recommend the right product for you. Examples of products that can be used to treat corns and calluses include: - special rehydration creams for thickened skin - protective corn plasters - customised soft padding or foam insoles - small foam wedges that are placed between the toes to help relieve soft corns - special silicone wedges that change the position of your toes or redistribute pressure Some over-the-counter products used to treat corns and calluses may contain salicylic acid. Salicylic acid is used to help soften the top layer of dead skin so it can be easily removed. The products are mild and shouldn't cause any pain. Salicylic acid products are available for direct application (such as a liquid or gel) or in medicated pads or plasters. It's important to avoid products containing salicylic acid if you have: - a condition that causes problems with circulation - such as diabetes, peripheral arterial disease or peripheral neuropathy - cracked or broken skin on or around the corn or callus - fragile skin This is because there's an increased risk of damage to your skin, nerves and tendons. Salicylic acid can sometimes damage the skin surrounding a corn or callus. You can use petroleum jelly or a plaster to cover the skin around the corn or callus.
Always read the instructions carefully before applying the product. Speak to your GP, podiatrist or pharmacist first if you're not sure which treatment is suitable. Podiatry is available free of charge on the NHS in most areas of the UK. However, availability may vary depending on where you live. Your case will be assessed individually, which may affect how long you'll need to wait to be seen. For example, people with severe diabetes are often given priority because the condition can cause serious foot problems to develop. If free NHS treatment isn't available in your area, your GP can still refer you to a local clinic for private treatment, but you'll have to pay. If you decide to contact a podiatrist yourself, make sure they're fully qualified and registered with the Health and Care Professions Council (HCPC) and an accredited member of one of the following organisations:
The reason the Moon's orbital distance from Earth is gradually increasing is due to the influence of the tides. Tides on Earth are created by the gravitational influence of both the Moon and the Sun that tug on the planet and cause it to bulge outward slightly at the equator. The majority of this equatorial bulging is actually due to the rotation of the Earth about its axis. This rotation raises the equator about 23 kilometers, or 0.4% of the Earth's radius, higher than it would be if the Earth did not rotate. The planet's shape is further deformed by the gravitational pull of the Moon. While the solid surface of the Earth is distorted by only a few centimeters, the primary effect is on the oceans that rise by a few meters. One might expect that the tidal bulge created by the Moon's gravitational attraction would point directly at the Moon. This would be the case if the Earth rotated about its axis at the same rate as the Moon orbits around the Earth. We know that this is not the case since the Earth makes a complete rotation about its axis in only 24 hours while it takes 27.3 days for the Moon to make a complete rotation around the Earth. Because of this difference in rotation speeds, the tidal bulge created by the Moon actually rotates ahead of the Moon in its orbit by about 3 degrees. Since the bulge leads the Moon in its orbit, its gravitational attraction on the Moon pulls the Moon forward. This effect increases the Moon's energy to resist the gravitational attraction of the Earth. The increase in energy allows the Moon to pull further away from Earth and increase its orbital distance. As its distance increases, the Moon's orbital speed decreases according to the law of conservation of angular momentum. Scientists have estimated the rate at which the Moon's orbit is increasing by studying fossil deposits left by the tides over long periods of time. A more accurate measurement is also available thanks to Apollo astronauts who placed corner cube reflectors on the lunar surface in the 1970s. By bouncing laser beams off these reflectors, scientists can obtain a very precise distance to the Moon. Both methods have shown that the Moon's orbit is increasing by 3.8 centimeters (1.5 inches) per year. The fossil records further indicate that this rate has remained nearly unchanged for the past 900 million years. Within 500 million to one billion years, however, the Moon will have moved far enough from Earth that total eclipses of the Sun will no longer be possible for observers here on Earth. By this time, the Moon will have moved about 5% further from Earth than it is today and will be too small to completely cover the disk of the Sun. Any solar eclipse that occurs after this time will be only a partial eclipse. The tidal effect is not only causing the Moon to move further away from Earth but is also slowing the Earth down. While the tidal bulge leads the Moon and pulls it forward, the Moon also exerts a gravitational attraction on the tidal bulge that pulls it backward. As the ocean waters are pulled across the ocean floor, friction is created that slows the Earth's rate of rotation about its axis. This effect slows the Earth's rotation by 0.0018 seconds, or about two milliseconds, per century. This process began billions of years ago once the Earth and Moon had formed. The rate at which the Moon's orbit increased was faster then given the Moon's closer location to Earth. 
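The recession figures quoted above can be checked with a quick calculation. The sketch below assumes a round present-day mean Earth-Moon distance of about 384,400 km and simply extrapolates the measured 3.8 cm/yr rate, ignoring the fact that the rate itself changes slowly over time.

```python
# Quick arithmetic check of the figures quoted in the text: how long does it
# take the Moon to move 5% further out at a constant 3.8 cm per year?
MEAN_DISTANCE_KM = 384_400        # approximate present mean Earth-Moon distance
RECESSION_KM_PER_YR = 3.8e-5      # 3.8 cm per year expressed in kilometres

extra_km = 0.05 * MEAN_DISTANCE_KM            # a 5% increase in orbital radius
years = extra_km / RECESSION_KM_PER_YR

print(f"extra distance needed: {extra_km:,.0f} km")
print(f"time at constant rate: {years / 1e6:,.0f} million years")
# Roughly 19,000 km at 3.8 cm/yr works out to about 500 million years, which is
# consistent with the 500-million-to-one-billion-year window given above.
```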
As the tidal effect caused the Moon to spiral outward, its own rotation rate was decreased until the Moon always showed the same face to Earth. The Moon was slowed much more rapidly given its much smaller size compared to Earth. It is estimated that the Moon's rotation stopped within about 50 million years of its formation. This same phenomenon continues today as the tides slow Earth's rate of rotation. If allowed to continue to its conclusion, the length of a day on Earth and the time it takes for the Moon to orbit the Earth will equalize to about 55 days. Once that occurs, the Earth will always present the same face to the Moon just as we on Earth can only see one side of the Moon. The tidal bulge that now leads the Moon will instead point directly at the Moon and the Moon will stop moving further away from Earth. This process is known as tidal locking and is a common occurrence between other bodies in the solar system. Probably the closest parallel to the Earth-Moon system is Pluto and its moon Charon. Both bodies are much smaller than the Earth and Moon, but Charon is relatively much larger in comparison to Pluto since it is about 50% the size of the planet it orbits. Given their much closer relative sizes, Pluto and Charon became tidally locked in much less time than it will ultimately take the Earth and Moon to do so. - answer by Justine Whitman, 19 February 2006
The following is some information on South Africa: The San (Bushmen) are among the oldest indigenous peoples of South Africa. In 1488, a Portuguese navigator became the first European to round the Cape of Good Hope. Although European vessels frequently passed by South Africa on their way to E Africa and India, and sometimes stopped for provisions or rest, no permanent European settlement was made until 1652, when Jan van Riebeeck and about 90 other persons set up a provisioning station for the Dutch East India Company at Table Bay on the Cape of Good Hope. By 1662, about 250 Europeans were living near the Cape and gradually they moved inland. In 1689 about 200 Huguenot refugees (escaping religious persecution) arrived from Europe. By 1707 there were about 1,780 freeholders of European descent in South Africa, and they owned about 1,100 slaves. During the French Revolutionary and Napoleonic wars, the British replaced the Dutch at the Cape from 1795 to 1803 and again from 1806 to 1814. In 1833 slavery was abolished in the British Empire, an act that angered South African slave owners. To escape the restrictions of British rule as well as to obtain new land, about 12,000 Boers left the Cape between 1835 and 1843 in what is known as the Great Trek. Some remained in the highveld of the interior, forming isolated communities and small states. A large group traveled eastward into what became Natal. The first indentured laborers from India arrived in Natal to work on the sugar plantations, and by 1900 they outnumbered the whites there. Diamonds were discovered in 1867 and in 1870 at what became Kimberley; in 1886 gold was discovered. These discoveries (especially that of gold) spurred great economic development in S Africa during 1870-1900; foreign trade increased dramatically, rail expanded from 70 mi (110 km) in 1870 to 3,600 mi (5,790 km) in 1895, and the number of whites rose from about 300,000 in 1870 to about 1 million in 1900. In 1961, South Africa left the Commonwealth of Nations and became a republic. In 1984, a new constitution was adopted. The new Parliament included the House of Representatives, comprised of Coloreds; the House of Delegates, comprised of Indians; and the House of Assembly, comprised of whites. This arrangement left the whites with more seats in the Parliament than the Indians and Coloreds combined. Blacks violently protested being shut out of the system. In 1989, President Botha fell ill and was succeeded, first as party leader, then as president, by F. W. de Klerk. De Klerk's government began relaxing apartheid restrictions and in 1990, Nelson Mandela was freed after 27 years of imprisonment and became a leader of the recently legalized ANC. Despite obstacles and delays, an interim constitution was completed in 1993, ending nearly three centuries of white rule in South Africa and marking the end of white-minority rule on the African continent. In April 1994, the first multiracial election was held. The ANC won an overwhelming victory, and Nelson Mandela became president. In 1994 and 1995 the last vestiges of apartheid were dismantled, and a new national constitution was approved and adopted in May 1996. The population of South Africa is 75% black (African) and 13% white (European), with about 9% people of mixed white, Malay, and black descent (formerly called "Colored"), and 3% of Asian (mostly Indian) background. South Africa has 11 official languages, nine of which are indigenous: Zulu, Xhosa, Tswana, Sotho, Swazi, Venda, Ndebele, Pedi, and Tsonga.
Many blacks also speak Afrikaans (the first language of about 60% of the whites and the majority of those of mixed race) or English (the first language of most of the rest of the nonblacks). 68% of the population is Christian; major groups include the Dutch Reformed, Anglican, Methodist, Roman Catholic, and Zionist churches. Over 28% of the population follows traditional African religions, and there are small minorities of Muslims, Hindus and Jews. Republic of South Africa, republic (1995 est. pop. 45,095,000), land area 471,442 sq mi (1,221,037 sq km), S Africa. It borders on the Atlantic Ocean in the west, on Namibia in the northwest, on Botswana and Zimbabwe in the north, on Mozambique and Swaziland in the northeast, and on the Indian Ocean in the east and south. The largest city is Johannesburg. Cape Town is the legislative capital, Pretoria the administrative capital, and Bloemfontein the judicial capital. Until about 1870 the economy of the region was almost entirely based on agriculture. With the discovery of diamonds and gold in the late 19th century, mining became the foundation for rapid economic development. Whites largely control the economy, but nonwhites make up more than 75% of the workforce. South Africa is a world leader in the production of gold, diamonds, aluminosilicates, chromium, manganese, vanadium, and platinum. Other leading minerals extracted are copper ore, coal, asbestos, iron ore, silver, and titanium. Uranium is also mined. THE RICHES ARE THE PEOPLE OF AFRICA!
World War II and the Postwar Period The United States entered World War II in December 1941. During the war, immigration decreased. There was fighting in Europe, transportation was interrupted, and the American consulates weren't open. Fewer than 10 percent of the immigration quotas from Europe were used from 1942 to 1945. In many ways, the country was still fearful of the influence of foreign-born people. The United States was fighting Germany, Italy, and Japan (also known as the Axis Powers), and the U.S. government decided it would detain certain resident aliens of those countries. (Resident aliens are people who are living permanently in the United States but are not citizens.) Oftentimes, there was no reason for these people to be detained, other than fear and racism. Beginning in 1942, the government even detained American citizens who were ethnically Japanese. The government did this despite the 14th Amendment of the Constitution, which says "nor shall any State deprive any person of life, liberty or property, without due process of law." Also because of the war, the Chinese Exclusion Act was repealed in 1943. China had quickly become an important ally of the United States against Japan; therefore, the U.S. government did away with the offensive law. Chinese immigrants could once again legally enter the country, although they did so only in small numbers for the next couple of decades. After World War II, the economy began to improve in the United States. Many people wanted to leave war-torn Europe and come to America. President Harry S. Truman urged the government to help the "appalling dislocation" of hundreds of thousands of Europeans. In 1945, Truman said, "everything possible should be done at once to facilitate the entrance of some of these displaced persons and refugees into the United States." On January 7, 1948, Truman urged Congress to "pass suitable legislation at once so that this Nation may do its share in caring for homeless and suffering refugees of all faiths. I believe that the admission of these persons will add to the strength and energy of the Nation." Congress passed the Displaced Persons Act of 1948. It allowed for refugees to come to the United States who otherwise wouldn't have been allowed to enter under existing immigration law. The Act marked the beginning of a period of refugee immigration.
The children have been enjoying the sunny weather this week, using the garden area of our outside area every day. We have carried on with the Spring Theme, as we have been learning about lifecycles – primarily of frogs and butterflies. The vocabulary that the children learnt (and should be able to tell you) was: Lifecycle – how an animal or plant develops and changes over time Chrysalis/cocoon – a hard shell spun by a caterpillar to protect it during metamorphosis Metamorphosis – when an animal changes how it looks after it’s been born – not just growing bigger Also, ask them what happened in the stories of The Hungry Caterpillar and Tadpole’s Promise. We are still doing our scientific experiment with growing beans in different conditions and are now looking after caterpillars and tadpoles too! In maths we re-visited two things this week – number bonds to ten (two numbers that added together equal 10) and making repeating patterns. See if your child can work out all the number bonds either using their fingers or ten items such as buttons. Also, if you give them a pile of resources, ask them to make you a pattern. They will need a few of several different items e.g. some pencils, toy cars and socks. Last week very few of the children read and, whilst the weather has been lovely, we do ask that you try to listen to your child read for a few minutes several times a week. This is a well-tested way to build confidence and fluency in their reading. Please support your child by doing this. The Reception Team
A refractive error is the inability of light rays to be focused sharply on the surface of the retina. Light that is not focused sharply results in blurry vision. Types of refractive errors include nearsightedness (aka myopia), farsightedness (aka hyperopia) and astigmatism. - Myopia = nearsightedness - Hyperopia = farsightedness - Astigmatism – a common secondary optical error Your refractive error is the measurement of the optical power of lenses needed to focus light sharply on the retina. Focusing System of the Eye In a normal healthy eye, light passes through the cornea, pupil, lens and vitreous before it is focused on the retina. The cornea focuses about 75% of the light coming into your eye. More specifically, it is really the surface of the cornea which is responsible for focusing light. Slight changes to the surface of the cornea can result in dramatic changes in the focusing power of the eye. Your natural lens is responsible for the balance of the focusing power of the eye. The pupil reduces the amount of light entering your eye, but has no direct impact on the focusing power of the eye. The vitreous also does nothing to focus light. The Retina Receives Light The retina is the light-sensitive tissue which lines the inside of the eye. Light gets focused and captured by the retina. The retina then sends electrical impulses through the optic nerve, which stimulate certain regions of the brain, giving us “vision.” Measuring Your Refractive Error Using a phoropter, your eye doctor can determine the power of the lenses needed to correct your refractive error and can write you a prescription to obtain glasses or contact lenses so that you may see clearly. Many offices now have an auto-refractor which can automatically determine your lens prescription. While contact lenses and/or glasses are a common method of correcting your refractive error, laser vision correction (LASIK) can be an alternative and provide a long-term solution as well. What’s your refractive error? We look forward to seeing you. Please call us (609.877.2800) for an appointment.
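As a rough illustration of what the prescription numbers mean, lens power is expressed in diopters, the reciprocal of focal length in meters. The short sketch below is not taken from this practice's materials; the 0.5 m far point is a made-up example, and real prescriptions also account for astigmatism and vertex distance.

```python
# Minimal sketch of the diopter relationship; the far-point value is a
# hypothetical example, not a real prescription.
def lens_power_diopters(focal_length_m: float) -> float:
    """Optical power in diopters is the reciprocal of focal length in meters."""
    return 1.0 / focal_length_m

# A myopic eye focuses distant objects in front of the retina. If its far
# point (the farthest distance it can still see clearly) is 0.5 m, a diverging
# lens of roughly -1/0.5 = -2.00 D moves distant images back onto the retina.
far_point_m = 0.5                                  # assumed example value
correction = -lens_power_diopters(far_point_m)
print(f"approximate spectacle correction: {correction:+.2f} D")
```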
Which part of the brain is associated with feelings of empathy? A. Frontal lobes B. Inferior parietal lobe C. Left hemisphere D. Right hemisphere
Which of the following is NOT a characteristic of frontal lobe damage? A. Difficulty in logical reasoning C. Tendency toward apathy D. Loss of motor function
Studies found evidence of sex differences in all EXCEPT: A. antisocial tendencies. C. conduct disorders.
Proponents of learning theory are best described by what term?
According to learning theorists: A. behavior arises from moral beliefs. B. moral behavior arises through reason. C. moral beliefs arise from a habit of moral behavior that is the product of reinforcement. D. negative reinforcement is the most effective way to teach moral belief.
Which of the following is NOT associated with learning theory? A. Moral reasoning B. Cognitive dissonance
Shermer believes that morality is a product of: D. negative reinforcement.
Who of the following is NOT associated with developmental theories?
According to Kohlberg, which is the first stage of moral development? A. Punishment and obedience orientation B. Instrument and relativity orientation C. Social contract orientation D. Law and order orientation
According to Kohlberg, there are __________ stages of moral development.
Which ethical system is most consistent with a Marxist theory of distributive justice? A. Ethics of Virtue B. Ethical Formalism C. Ethics of Care
Which type of justice is concerned with business dealings?
Which type of justice is most closely associated with discussions of criminal law?
Which theory of distributive justice would be most opposed to government involvement in the distribution of goods?
Which of the following is NOT an aspect of Rawls’s theory of justice? A. Utilitarian principles B. Concern for rights C. Freedom from government interference D. Concern for the least advantaged members of society
Rawls’s veil of ignorance states that: A. welfare should be given to the least advantaged or ignorant in our society. B. one must be ignorant of his or her own position in society in order to make just decisions. C. ignorance results in unfairness. D. because we are ignorant of God’s plan, equal distribution of goods is the most just.
Which of the following is NOT an aspect of Sterba’s distribution system? A. Principle of Minimal Contribution B. Principle of Saving C. Principle of Need D. Principle of Transaction
Retributive justice is best described by what term?
Deterrence is the central theme of what theory of corrective justice?
The mediator between people’s essential selfishness and generosity is:
The human body consists of more than 200 different types of cells, but all cells contain the same DNA code. How is it possible that the same code tells the various types of cells to be different? The information for life is not only coded in the DNA, but also on the DNA. Chemical attachments, which are called DNA methylation, can “turn off” parts of the DNA code that are not needed in a specific cell. DNA methylation can also be influenced by our surroundings. Sometimes, changes in DNA methylation might lead to diseases. Understanding how our surroundings can influence DNA methylation might also help us to better understand the mechanisms causing some diseases and hopefully get better at curing and preventing them. This article describes ADHD as one example. Same Code, Different Cells Our bodies consist of different types of cells. Our heart cells look different from our bone cells, and they do different things. But all these cells contain exactly the same DNA code: the same long molecules, functioning as the “manual” of the cell. How, then, do all these cells look different from each other and have their own functions, when their manuals do not differ? Our cells are like tiny factories that produce the proteins necessary for the cell to function. All cells might get the same manual, but they use different pages from the manual, called genes, to produce their specific proteins. For example, a heart cell needs to contribute to building the heart and help it pump blood through the body. But bone cells need to make strong and rigid bones. Heart cells and bone cells need different proteins, so they must use different genes. How Does the Factory Know What It Needs to Make? The information for life is not only coded in the DNA, but also by chemicals added onto the DNA. One way the cell “turns off” certain genes is via a process called DNA methylation: a small chemical group called a methyl group is attached to the DNA. This alteration of the gene’s structure makes the code less accessible, so the cell produces less of the protein that is coded for by that gene (Figure 1). You can compare this to the levers in the cake factory in Figure 2A, in which one lever has been switched to a different mode: make no more candles for the cakes! Surroundings Affect DNA Methylation DNA methylation does not only determine whether a cell is going to be a heart cell or a bone cell. The amounts of proteins cells produce affect how the body functions and can also influence how we behave. Our surroundings, including our living environments and the people we interact with, can influence the methylation of the DNA and the production of certain proteins. This is very convenient, because sometimes we need to adapt to changing surroundings, for example, if we move to a new country with a different climate and different food. You could compare this to the cake factory example: citizens of the city in which our cake factory is situated love to eat strawberry-coated cakes, so the factory makes fewer chocolate-coated cakes. But when strawberry season is over, the factory decides to make more chocolate-coated cakes. It does so by switching the lever that controls the production of chocolate (Figure 2B). The lever can also control the speed of chocolate production—it can be set to the fastest speed possible, completely switched off, or be anywhere in between these two extremes. Which Surroundings Influence DNA Methylation? Your surroundings include anything that influences your body from the outside. 
One example of the effect of surroundings on DNA methylation is smoking. Researchers have found that people who smoke have different DNA methylation across the whole DNA code than non-smokers do. When a mother smokes during pregnancy, even the baby’s DNA methylation will be altered by the mother’s smoking, and many of these alterations last until at least the age of 7 years. In another example of the effects of surroundings even before birth, if a mother experiences really high levels of stress during pregnancy, her child will show altered DNA methylation of stress-related genes. These alterations in DNA methylation prepare the child for a harsh environment. However, such changes may also make the child more vulnerable to some diseases and disorders, like allergies or depression. DNA Methylation and ADHD? So far, we have described how DNA methylation can determine which proteins a cell produces, and therefore its function. We also described that DNA methylation can happen in response to a person’s surroundings. As researchers, we are interested in DNA methylation and attention-deficit/hyperactivity disorder (ADHD). One in twenty kids experiences hyperactivity, impulsivity, and/or inattention, which are typical symptoms of ADHD. Some children with ADHD no longer show such problems when they are grown up. We call their type of ADHD remittent ADHD. However, over half of the children with ADHD still experience symptoms during adulthood. They have what we call persistent ADHD. It is important to know what the differences are between people who have remittent ADHD and those who have persistent ADHD. Understanding the differences might allow us to help people overcome ADHD, however, we do not yet understand what these differences are. So far, researchers have only looked at genes themselves to find the differences. We are now starting to investigate whether DNA methylation might be playing a role in ADHD. How Do We Study DNA Methylation in ADHD? We asked more than 50 12-year-old children with ADHD to come to our laboratory. Five and 9 years later, when the children were age 17 and 21, we invited them again and checked whether they still had ADHD. In this way, we could determine whether their ADHD was remittent or persistent. At the third visit, we also drew blood from the participants. We isolated the DNA from the participants’ blood, to see where each individual’s DNA was methylated (Figure 3). Then we compared the DNA methylation of all participants with persistent ADHD to the DNA methylation of all participants with remittent ADHD. We found more methylation on a specific gene in participants with persistent ADHD. We think that, as a result, people with persistent ADHD make less of a protein called APOB. APOB’s job is to carry cholesterol throughout the body. Perhaps you have heard of cholesterol as a “bad” substance, found in some unhealthy foods. But cholesterol is also important for the development and maintenance of brain cells, so the body needs it in small amounts. We think that lower production of APOB and the resulting effects on cholesterol might help to explain why some individuals have persistent ADHD. Importantly, if we had looked at the DNA code instead of at DNA methylation, we would not have found any differences in the APOB gene between the two groups of study participants. Research like ours shows that DNA methylation is very important for the development and functioning of the body—ADHD is only one example.
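For readers curious what comparing DNA methylation between two groups can look like in practice, here is a toy sketch. It is not the analysis pipeline used in the study; the methylation values, group sizes, and the simple per-site t-test are all illustrative assumptions.

```python
# Toy illustration of a per-site group comparison of methylation levels.
# All values are simulated -- this is not the study's actual data or pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pretend "beta values" (0 = unmethylated, 1 = fully methylated) at one CpG
# site near a gene of interest, for two hypothetical groups of participants.
persistent = rng.normal(loc=0.65, scale=0.05, size=30).clip(0, 1)
remittent = rng.normal(loc=0.55, scale=0.05, size=25).clip(0, 1)

# Compare the group means at this one site with a Welch t-test.
t_stat, p_value = stats.ttest_ind(persistent, remittent, equal_var=False)
print(f"mean persistent = {persistent.mean():.3f}, mean remittent = {remittent.mean():.3f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")

# A real epigenome-wide analysis repeats a comparison like this at hundreds of
# thousands of CpG sites and then corrects the p-values for multiple testing.
```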
Our study helped us to unravel one piece of the puzzle of why some individuals have persistent ADHD and others do not. These are exciting times for researchers, because we are learning more and more about the role of DNA methylation in health and disease, and about aspects of our surroundings that influence DNA methylation. There is also still a lot to learn and discover about DNA methylation, and hopefully what we learn will help us to improve the lives of people with methylation-related disorders. DNA: ↑ Deoxyribonucleic acid, a molecule that contains all the information a cell needs to know to stay alive and perform its functions. The DNA is subdivided into genes. Gene: ↑ A section of DNA containing information on how to make a certain protein. Cells in the body need many proteins to function properly. DNA Methylation: ↑ Addition of a chemical called a methyl group to DNA, resulting in a change to the DNA’s structure that makes it more difficult to make proteins from the methylated gene. ADHD: ↑ Attention-deficit/hyperactivity disorder, a disorder that causes people to experience difficulties concentrating and/or hyperactivity and impulsivity. Remittent ADHD: ↑ Some people have ADHD when they are kids, but during adulthood they do not show any symptoms any more. Persistent ADHD: ↑ Some people with ADHD show symptoms when they are kids. When they are adults they still have the disorder and show the symptoms. Conflict of Interest BF received educational speaking fees from Medice. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Original Source Article ↑ Meijer, M., Klein, M., Hannon, E., van der Meer, D., Hartman, C., Oosterlaan, J., et al. 2020. Genome-wide DNA methylation patterns in persistent attention-deficit/hyperactivity disorder and in association with impulsive and callous traits. Front. Genet. 11:16. doi: 10.3389/fgene.2020.00016 ↑ Zeilinger, S., Kuhnel, B., Klopp, N., Baurecht, H., Kleinschmidt, A., Gieger, C., et al. 2013. Tobacco smoking leads to extensive genome-wide changes in DNA methylation. PLoS ONE 8:e63812. doi: 10.1371/journal.pone.0063812 ↑ Joubert, B. R., Felix, J. F., Yousefi, P., Bakulski, K. M., Just, A. C., Breton, C., et al. 2016. DNA methylation in newborns and maternal smoking in pregnancy: genome-wide consortium meta-analysis. Am. J. Hum. Genet. 98:680–96. doi: 10.1016/j.ajhg.2016.02.019 ↑ Mitchell, C., Schneper, L. M., and Notterman, D. A. 2016. DNA methylation, early life environment, and health outcomes. Pediatr. Res. 79:212–9. doi: 10.1038/pr.2015.193 ↑ von Rhein, D., Mennes, M., van Ewijk, H., Groenman, A. P., Zwiers, M. P., Oosterlaan, J., et al. 2015. The NeuroIMAGE study: a prospective phenotypic, cognitive, genetic and MRI study in children with attention-deficit/hyperactivity disorder. Design and descriptives. Eur. Child Adolesc. Psychiatry 24:265–81. doi: 10.1007/s00787-014-0573-4 ↑ Meijer, M., Klein, M., Hannon, E., van der Meer, D., Hartman, C., Oosterlaan, J., et al. 2020. Genome-wide DNA methylation patterns in persistent attention-deficit/hyperactivity disorder and in association with impulsive and callous traits. Front. Genet. 11:16. doi: 10.3389/fgene.2020.00016
Autism Spectrum Disorder (ASD) is a complex, lifelong neurological disorder with no medical treatment, so receiving an ASD diagnosis for your child can be quite overwhelming. However, therapeutic interventions have shown promising results in helping children with ASD and their families cope with the challenges presented by this disorder. One of the most powerful interventions in this context is Applied Behavior Analysis (ABA) therapy. First introduced in the early 1960s, ABA has evolved to represent a group of techniques that use positive reinforcement and rewards for treating individuals with Autism. In fact, various evidence-based studies have indicated that early diagnosis and intensive behavioral interventions can help your child overcome their ASD diagnosis. The expert ABA practitioners at AB Spectrum believe that techniques and programs based on the ABA principles can potentially alter the course of your child’s life. Read on to learn what makes ABA therapy effective. As a pervasive developmental disorder, ASD can severely impact your child’s social interactions and communications. Most children also tend to show high rates of non-functional interests or interfering behaviors. While neurotypical children may learn from the environment through play or conversation, children with ASD have limited skills and inclination to learn in this manner. Moreover, their inability to communicate effectively could lead to frustration, tantrums, and other problem behaviors. This is where Applied Behavior Analysis (ABA) therapy comes in. ABA aims to reduce challenging behaviors and promote your child’s function and independence. Involving individualized treatment plans, ABA therapy helps in: Developing your child’s motor skills, social skills, communication, and verbal skills Improving their cognitive abilities, emotional maturity, and school readiness skills Enhancing their self-care abilities, such as grooming, getting dressed, sleeping through the night, toileting, and so on. Teaching the child to recognize, monitor, and control stereotypical or challenging behaviors. Most Autism treatment experts use three different ABA-based techniques or interventions. Discrete Trial Training (DTT): This is about breaking down every task or activity into its simplest parts and teaching the child to carry out each step. It involves positive reinforcement for every attempt and repetition of the steps until the child achieves the desired behaviors or outcomes. DTT techniques are effective in increasing the child’s motivation and abilities to learn because: Each trial or session in DTT is short and does not add unnecessary strain on the child’s learning abilities or attention span DTT features a very precise format with a distinct beginning and end, which clarifies the teaching situation for the child DTT is versatile and flexible, allowing the therapist to customize the sessions to the child’s unique developmental goals Receptive Language and Expressive Language: Many children with ASD may have little or no ability to respond to words or language. Both receptive and expressive language programs address this critical aspect, which can potentially impact the child’s function and independence. Receptive language is about performing an action in response to a verbal cue. For example, picking up a doll when the therapist says “doll”. Expressive language is about giving a verbal response to a visual cue. For example, if the therapist holds up a doll, the child says “doll”.
The use of receptive and expressive language interventions can drastically improve the child’s communication, language, and social skills, which in turn can reduce frustrations or negative behaviors and boost their self-confidence. Pivotal Response Training (PRT): Featuring a variety of behavioral interventions in a naturalistic teaching format, PRT helps in developing the child’s language, communication, sociability, or academic skills. PRT is highly beneficial to children with ASD because: Instead of individual behaviors, PRT techniques target the ‘pivotal’ areas of the child’s development, such as motivation, self-management, response to multiple cues, and the initiation of social interactions. Since the child learns the new skills in natural environments, it helps them generalize better and apply these skills in real life. While each intervention within ABA has its own advantages, here are some of the reasons why ABA therapy as a whole is an effective treatment for children with Autism: Customizable Techniques: ABA therapy plans are geared towards every child’s unique needs. Based on your child’s preferred learning style, motivators, and triggers, the therapist can use different techniques for teaching different skills. Visible Results: Experts recommend having a long-term approach to ABA therapy. Starting at an early age and undergoing intensive and structured therapy for an average of 30-40 hours per week is likely to produce the most effective results among children with Autism. No matter how long your child is in therapy, ABA sessions and techniques allow you to see the progression or regression in their abilities and behaviors almost immediately. Continuous Learning: With active participation, inputs, and observation, parents, teachers, and caregivers of children with ASD can learn and use the principles of ABA in real-life situations, beyond formal therapy. This creates an environment of continuous and repetitive learning through individualized techniques that are best suited to your child. Essentially, practicing ABA therapy at home or in day-to-day living can help in minimizing interfering behaviors and reinforcing desirable behaviors and skills in your child with Autism. AB Spectrum is the world’s first ReggioABA™ clinic, which features a child-led curriculum and a “Therapy through Play” approach. We specialize in the use of the principles of ABA and follow the philosophy of Natural Environment Training (NET) to improve the quality of life of children with Autism. Choose from our clinic-based group ABA therapy options at St. Charles or Chesterfield, Missouri, or in-home ABA sessions in and around the St. Louis area of Missouri. Wherever required, we can design hybrid plans that include center-based as well as in-home therapy. Our team of licensed and experienced ABA therapists believes that ABA-based techniques and interventions offer some of the most effective results in children with Autism. We have several success stories where children have graduated from our ABA therapy programs and joined their neurotypical peers in school and other activities. For the best ABA therapy options in the St. Louis area of Missouri, count on the experts at AB Spectrum. Call 314.648.2687 to learn more about our ABA programs or book a no-fee consultation at one of our Autism treatment centers near you.
On 22 August, an astonishing discovery was reported. A sliver of bone from a cave in Russia turned out to belong to a hominin, one that was utterly unprecedented. Denny, as she is known to the scientists who are studying her, was a first-generation hybrid. Her mother was a Neanderthal and her father a Denisovan. She was a child of two species. The findings were reported in Nature. I covered this discovery, as did many others, and within hours the question arrived: "I thought the definition of a species was that they couldn't interbreed?" Many people seem to believe that animals belonging to different species cannot breed together, and that this is what defines a species. I suspect many of us acquire the idea in childhood when we learn about mules. The offspring of a horse and a donkey, a mule is a useful working animal but is entirely sterile and incapable of breeding. We all seem to generalise from this and assume that no interspecies pairings can produce fertile offspring. This is not just a piece of folk science. The biologist Ernst Mayr proposed in 1942 that a species is a population of organisms that can all interbreed with each other, and which either cannot or do not interbreed with anything else. This idea became known as the Biological Species Concept, and evidently many of us learn it as fact. The thing is, Mayr's idea is not accepted as the be-all-and-end-all by other biologists. Instead, the problem of how to define a species is still being argued about today, 76 years after Mayr published his definition. Let's come back to mules. They are not a terribly good example of what happens when two species interbreed. Horses have 64 chromosomes and donkeys 62, so when the two breed their mule offspring ends up with 63. Because this is an odd number, it's impossible for them to divide evenly into two. That means the mule cannot produce sperm and egg cells that carry exactly half the animal's chromosomes, as should happen. When these defective sex cells are fused with those of another mule, the resulting embryo is likely to have crucial chunks of its DNA missing, and will not be viable. However, many distinct species have the same numbers of chromosomes. For instance, all great apes (apart from humans) have a total of 48 chromosomes, arranged in 24 pairs. All else being equal, that means it ought to be easier for them to interbreed than it is for horses and donkeys. So it has proved. Chimpanzees and bonobos have interbred several times since their populations split a few million years ago, and the bonobo genome also carries DNA that seems to have come from a third, unidentified species. Other ape pairings don't seem to have happened, but that might be partly because they live in separate habitats and don't meet: orangutans are confined to Borneo and Sumatra, and are unlikely to encounter gorillas and chimpanzees from Africa. But the idea captivates people: there are long-standing (unsubstantiated) rumours of a chimpanzee-gorilla hybrid called the koolakamba or kooloo-kamba. Similarly, human evolution was rife with interspecies sex. Modern humans have interbred with both Neanderthals and Denisovans, Neanderthals and Denisovans interbred, and Denisovans interbred with an unidentified hominin. There is reason to suspect that the first-generation hybrids had some health issues, such as reduced fertility, but they were evidently able to get by well enough to leave descendants. Today many people carry some Neanderthal and/or Denisovan DNA. 
This illustrates the problem with Mayr's species concept: where do you draw the line? If two animals can produce offspring, but that offspring's fertility is reduced by 10 per cent, are the parents members of different species? What about a 20 per cent drop in fertility - or a 10 per cent drop in fertility combined with a 20 per cent reduction in average lifespan? We could insist that the offspring be 100 per cent infertile, but that would mean collapsing a lot of species that we currently think of as distinct, beginning with chimpanzees and bonobos. Insisting that no offspring are produced at all would destroy even more distinctions. Species are often separated, not by reproductive anatomy or courtship habit, but by geography - and those separations are reversible. In the lakes of the European Alps, pollution has caused oxygen levels to crash in the deeper waters, forcing the species that once lived there to move closer to the surface. There they have begun hybridising with longstanding surface-dwellers. These species had been separated for millions of years, but they weren't distinct enough to be unable to breed. In fact it has been estimated that 88 per cent of all fish species could hybridise with at least one other, given the opportunity. The same may be true of 55 per cent of all mammals. This hybridisation has a mixed environmental legacy. On the one hand, extinct species are not quite gone, because their DNA lives on. This is true of Neanderthals, and on Monday it emerged that it is also true of cave bears, whose DNA lives on in brown bears whose ancestors mated with the cave bears. Many of us would see that preservation as being somehow good. But on the flip side hybridisation can also destroy species if two distinct groups breed so much that they blur together. This is what has happened to many of the fish in the Alpine lakes, and it may be the fate of polar bears if they are driven south by melting ice and begin interbreeding with other bears in a big way. The lesson is that we should not become too wedded to concepts that we ourselves created. The idea of a "species" is a human construct, and while it's useful it doesn't map neatly onto nature. In this respect it is like the concept of "life", which most of us intuitively understand but would struggle to define. Or consider this philosophical passage from science fiction writer H. G. Wells: "Take the word chair. When one says chair, one thinks vaguely of an average chair. But collect individual instances, think of armchairs and reading chairs, and dining-room chairs and kitchen chairs, chairs that pass into benches, chairs that cross the boundary and become settees, dentists’ chairs, thrones, opera stalls, seats of all sorts, those miraculous fungoid growths that cumber the floor of the Arts and Crafts Exhibition, and you will perceive what a lax bundle in fact is this simple straightforward term. In co-operation with an intelligent joiner I would undertake to defeat any definition of chair or chairishness that you gave me." Other human concepts can be more tightly defined and delineated, but they're normally found in physics, not biology. There is no blurry dividing line between an up quark and a down quark, but there really is a halfway house between a horse and a donkey. Finally, here is a truly exasperating fact. Once in a blue moon, mules do reproduce.
Taking a confident stand on the limitations of a person’s capacity to fully know their own existence is a defining step in human maturity; being self-aware of those limitations gives a person grounds to improve not only their understanding but also their perspective. Knowledge begins when enquiry begins; asking leads to finding answers, and one uses methodologies – ways of finding answers to one’s questions – to approach problems systematically. These methodologies are often Socratic and scientific in nature. The methods by which one finds answers grow one’s understanding and push those limitations further away, moving the boundary of ignorance back from one’s position of knowledge. Though ignorance may still loom just beyond one’s vision, the individual’s knowledge is then best placed to counter it. However, to expand the boundaries of one’s knowledge, one needs to understand what opposes it: one needs to take into account what ignorance is. Ignorance means a “lack of knowledge” or the “absence of knowledge”. Knowledge can be defined as the expansion of enlightenment, achieved through the process of learning and discovering, which seeks to enable or increase one’s capacity for thought and understanding. Long-winded as that definition is, it still helps one grasp the nature of thought and of enquiry. Socrates, among the earliest of the Greek philosophers, is best known for developing the ‘ask and seek’ methodology now called the Socratic Method. He employed it in response to the sophist leaders of his day – men, often in high positions in the Greek government, trained in rhetorical tactics such as persuasion and oration. The sophists were the ‘go-to guys’, the individuals regarded as the arbiters of knowledge; they were the people Greek society trusted for leadership, knowledge and power. The Socratic Method was developed in response to these sophists, who withheld knowledge from the general public. Socrates started with the acknowledgement of his own un-knowingness – the acceptance of his own ignorance. He loved wisdom, and he sought to ‘question everything’, including authority. The Socratic Method became popular in the 4th century B.C.E. among the youth of Athens, who at that time had their lives largely set out for them – young men became soldiers or scholars, while women became wives and were treated as the chattel of the day. Socrates developed a following, which initially consisted of his pupils Plato and Xenophon but grew over the following two years; these followers became renowned philosophers in their own right, who would go on to build upon Socrates’ teachings. However, when the sophists in the high courts of the Greek government got wind of Socrates’ following, their response was swift and brutal. The sophist leaders outlawed meetings and arrested Socrates. In 399 B.C.E., in the custody of the Athenian authorities, Socrates was placed on trial; it was to become the most famous trial in ancient Greek history. Surrounded by his pupils and standing before the Greek court, Socrates faced a barrage of accusations. In those days Greek courts were performance halls, where spectators would vote on the guilt of the accused. Meletus, one of the accusers, had laid the charge of impiety and the corruption of the youth against Socrates.
Socrates, who had only sought to promote free thinking, scepticism and wisdom, fought valiantly against the charges laid against him. He had merely urged the people of Athens to think critically and to question their authority figures, whom he believed were leading the city down the path of destruction. Plato records the trial in his Apology. Though Socrates defended himself valiantly, the result was that 56% of the jury voted against him. Socrates was thus given two options: the first was to renounce his teachings and go into hiding; the second was the death penalty. Socrates chose the latter. The Greek soldiers took Socrates before the sophists; in a grand, almost imperial manner, the guards thrust him onto the courtroom floor (the Phaedo is the Platonic work outlining all that occurred). The sophists wished for Socrates to renounce his teachings before the court and go into exile on the outskirts of Athens in disgrace. Socrates would not allow them the final word, and instead delivered a long speech before the court on the importance of learning and the future of the state. He said that he would rather die than give up what he believed to be right and just. So the sophists allowed him the option of taking his own life…and he did. Socrates drank a preparation of Conium – hemlock – a plant that causes death upon ingestion. Before the court and his followers, he gave up his life: strength in its truest sense. After Socrates had given his life, the Greek authorities began to outlaw his teachings, passing laws against gatherings and the like. The story of Socrates tells us the value of standing up against authority; it tells us what it means to die for what one knows to be right and just. Socrates’ example showed the sheer passion he had for everyone, as he wanted to demonstrate the value of wisdom, enquiry, and justice: true justice. He taught that when one expresses one’s ideas, one should be prepared to defend them; for true character is shown in those who can defend their ideas despite the opposition they face. Socrates embodied this to its bitter conclusion; he chose death when he could have had life. Socrates’ death is a testament to the power of human determination and to the power of the human condition. It is also a warning not to be complacent in our knowledge, but to seek out new knowledge through enquiry, all for the betterment of ourselves and the world we choose to live in. Knowledge is power. This letter I write to you now. Written by: Anthony Avice Du Buisson (23/02/2014)
For each type of source in this guide, both the general form and an example will be provided. The following format will be used: In-Text Citation (Paraphrase) - entry that appears in the body of your paper when you express the ideas of a researcher or author using your own words. For more tips on paraphrasing check out The OWL at Purdue. In-Text Citation (Quotation) - entry that appears in the body of your paper after a direct quote. References - entry that appears at the end of your paper. Information on citing and several of the examples were drawn from the Publication Manual of the American Psychological Association (7th ed.). The general format below refers to a book with two authors. If you are dealing with two editors instead of two authors, you would simply insert the names of the editors into the place where the authors' names are now, followed by "(Eds.)" without the quotation marks (see the Example). The rest of the format would remain the same.
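For instance, a reference entry for a book with two editors generally follows a pattern along these lines (an illustrative sketch only, not one of this guide's official examples; details such as italics and edition statements depend on the specific source): Editor, A. A., & Editor, B. B. (Eds.). (Year). Title of work: Capital letter also for subtitle. Publisher. The matching in-text citations would then look like (Editor & Editor, Year) for a paraphrase and (Editor & Editor, Year, p. xx) for a direct quotation.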
The etymological origin of the term colossal, which we are now going to unravel, is found in Greek – more exactly in the word kolossos, formed from the prefix kolos, meaning “big”, and the word ossos, which can be translated as “eyes”. Literally, then, the concept meant “great in sight”. The term was used in ancient times by the Greeks to refer to the gigantic sculptures made by the Egyptians, and they gave the same name to a large sculpture they decided to build on Rhodes. The word colossal is an adjective and describes what belongs to or is linked to a colossus (a work of art whose proportions exceed the ordinary). The term, therefore, is often used to refer to something extraordinary or enormous. For example: “The government presented a colossal statue of the leader who died a hundred years ago”, “You are in a colossal mess: I recommend you be very careful”, “The young Uruguayan achieved a colossal triumph over the number one in the world.” The famous Colossus of Rhodes is possibly the most celebrated colossal statue in history. It was a striking representation of the deity Helios, created in 292 BC on the Greek island of Rhodes by Chares of Lindos. The work was destroyed by an earthquake in 226 BC. The information that has reached the present day comes from writers such as Pliny the Elder and the chronicler Michael the Syrian. It is said that the Colossus of Rhodes measured about 32 meters and stood on a base of 15 meters. The statue was built from bronze plates over an iron frame, while the base was made of marble. Among the colossal statues that can still be seen, the Colossi of Memnon, located on the banks of the Nile River, stand out. They are two stone statues dedicated to Pharaoh Amenhotep III. According to DigoPaul, the adjective colossal can refer both to physical things and to symbolic or abstract matters. A colossal building is a very large building, while a colossal achievement is a notable success that transcends borders. Beyond these meanings, the term colossal also works in the colloquial sphere to describe any person, object or situation considered to be very good, optimal or favorable. Likewise, the concept is also part of an expression used in the field of science: colossal magnetoresistance, which refers to the capacity of some materials, such as certain oxides, to change their electrical resistance significantly when subjected to a specific magnetic field. This property was discovered in 1993 by the scientist von Helmolt.
Rocks, water and hot alkaline fluid rich in hydrogen gas spewing out of deep-sea vents: this recipe for life has been championed for years by a small group of scientists. Now two of them have fleshed out the detail on how the first cells might have evolved in these vents, and escaped their deep sea lair. Nick Lane at University College London and Bill Martin at the University of Düsseldorf in Germany think the answer to how life emerged lies in the origin of cellular ion pumps, proteins that regulate the flow of ions across the cell's membrane, the barrier that separates it from the outside world. Their hypothesis is published today in Cell. Life in the rocks In all cells today, an enzyme called ATP synthase uses the energy from the flow of ions across membranes to produce the universal energy-storage molecule ATP. This essential process depends in turn on ion-pumping proteins that generate these gradients. But this creates a chicken-and-egg problem: cells store energy by means of proteins that make ion gradients, but it takes energy to make the proteins in the first place. Lane and Martin argue1 that hydrogen-saturated alkaline water meeting acidic oceanic water at underwater vents would produce a natural proton gradient across thin mineral 'walls' in rocks that are rich in catalytic iron–sulphur minerals. This set-up could create the right conditions for converting carbon dioxide and hydrogen into organic carbon-containing molecules, which can then react with each other to form the building blocks of life such as nucleotides and amino acids. The rocks of deep-sea thermal vents contain labyrinths of these tiny thin-walled pores, which could have acted as 'proto-cells', both producing a proton gradient and concentrating the simple organic molecules formed, thus enabling them eventually to generate complex proteins and the nucleic acid RNA. These proto-cells were the first life-forms, claim Lane and Martin. It is assumed that the rocky proto-cells would initially be lined with leaky organic membranes. If the cells were to escape the vents and become free-living in the ocean, these membranes would have to be sealed. But sealing the membrane would cut off natural proton gradients, because although an ATP synthase would let protons into the cell, there would be nothing to pump them out, and the concentration of protons on each side of the membrane would rapidly equalize. Without an ion gradient “they would lose power,” says Lane. Proteins that pump protons out of the cell would solve the problem, but there would have been no pressure for such proteins to evolve until after the membranes were closed. In which case, “They would have had to evolve a proton pumping system in no time, which is impossible,” says Lane. Lane and Martin think that proto-cells escaped this dilemma because they evolved a sodium-proton antiporter — a simple protein that uses the influx of protons to pump sodium ions out of the cell. As the proto-cell membranes started closing up, they became impermeable to the large sodium ions before the smaller protons. This would have provided advantages to cells that evolved a sodium-pumping protein, while they could still rely on the vents’ natural proton gradients to generate energy. The antiporters created sodium gradients as well, and when the membrane closed up completely, the cells could run on the sodium gradient, and be free to leave the vent. Lane and Martin drew inspiration for their hypothesis from bacteria and archaea that live in these extreme environments today. 
“Their biochemistry seems to emerge seamlessly from the conditions in vents,” says Lane. These microbes use iron–sulphur-containing proteins to convert hydrogen and carbon dioxide into organic molecules. They rely on sodium–proton antiporters to generate ion gradients, and their membrane proteins, such as the ATP synthase, are compatible with gradients of sodium ions or protons. Wolfgang Nitschke, a biochemist at the French National Center for Scientific Research in Marseille, praises the duo for using knowledge of modern microbes to produce detailed scenarios for the origin of life. “In stark contrast to basically all other origin-of-life hypotheses, research in the framework of the alkaline-vent scenario is empirical,” he says. “It is an outstanding paper.”
Students will develop strategies for subtracting a decimal from a whole number by using the Number Line application. Before the Activity: Download the attached PDF and read through the activity. You may wish to distribute hard copies of the PDF for student use. During the Activity: Discuss the material from the activity pages as needed. After the Activity: Encourage students to summarize what they have learned from completing the activity.
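As an aside not found in the original activity, one counting-up strategy that students often discover on a number line can be sketched in a few lines of Python; the problem 6 − 2.7 used here is a made-up example and the helper name is purely illustrative:

```python
def count_up_difference(whole: int, decimal: float) -> float:
    """Subtract a decimal from a whole number by 'counting up' on a number line:
    hop from the decimal to the next whole number, then hop on to the target,
    and add the two hop lengths together."""
    next_whole = int(decimal) + 1                  # e.g. 2.7 -> 3
    first_hop = round(next_whole - decimal, 10)    # e.g. 3 - 2.7 = 0.3
    second_hop = whole - next_whole                # e.g. 6 - 3 = 3
    return round(first_hop + second_hop, 10)

# Example: 6 - 2.7 = 0.3 + 3 = 3.3
print(count_up_difference(6, 2.7))
```

The same two-hop reasoning can of course be done mentally or on a drawn number line; the code simply mirrors the steps students would record.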
NASA has developed a partnership with the U.S. Geological Survey, National Park Service, U.S. Fish and Wildlife Service, and Smithsonian Institution to begin new research efforts to bring the overall view of our climate from space satellites down to Earth to benefit our wildlife and key ecosystems. Observations of our planet’s climate from NASA’s Earth-observing satellites will help us better understand how different species and ecosystems respond to climate changes. These observations will also allow us to further develop tools to manage wildlife and natural resources. Link to the NES module, Earth Climate Course (requires log-in) Link to NASA Now: A-Train: Monitoring the Earth System (requires log-in) Link to NASA’s new partnership announcement. Link to the NES Virtual Campus home page.
Aside from being hailed as the most populated country and one of the largest countries in the world, China is also well known as a land of many languages and dialects. As listed by Ethnologue, China has almost 300 living languages and local dialects. The Chinese languages belong to the Sino-Tibetan language family and are believed to descend from "Proto-Sino-Tibetan". From 221 B.C., when China grew into a nation state, until 1912, at the end of its last imperial dynasty, China never had a common national language. Before Mandarin came into being, China already encompassed a wide variety of dialects. Evidence of this variation includes the rhyme books written during the Southern and Northern Dynasties, Confucius's use of yǎyán (雅言), or "elegant speech", rather than colloquial regional dialects, and the Han Dynasty reference to a tōngyǔ (通語), or "common language". Pronunciation was probably also distinctive even among the most educated people. The only thing that was common to, and unified, the Chinese language of old was its written form. The present Chinese language varieties developed out of the different ways in which dialects of Old Chinese and Middle Chinese evolved. "Old Chinese" was spoken during the Zhou Dynasty (1122 BC – 256 BC), while "Middle Chinese" belongs to the Sui, Tang and Song Dynasties (6th – 10th centuries). From 1368 to 1912, officials of the two famous dynasties, the Ming Dynasty (1368–1644) and the Qing Dynasty (1644–1912), varied widely in their pronunciation. The reign of these dynasties also saw the beginning of Guānhuà (官話), or "official speech", which refers to the speech used at the courts. Historically, and properly speaking, the term "Mandarin" (官話) denotes the language spoken in the 19th century by the upper classes of Beijing as well as by the higher civil servants and military officers of the imperial regime serving in Beijing or in the provinces. "Mandarin" is an English term borrowed from Portuguese, meaning an official of the Chinese empire. As their home dialects were varied and often mutually unintelligible, these officials communicated using a koiné based on various northern dialects. When Jesuit missionaries learned this standard language in the 16th century, they called it Mandarin, from its Chinese name Guānhuà (官话/官話), or "language of the officials". When Beijing replaced Nanjing as the capital of China in the latter part of the Ming Dynasty, and on through the Qing Dynasty, the Nanjing dialect remained the basis for the standard. Into the 20th century, the position of Nanjing Mandarin was still considered higher than that of Beijing by some, and the Chinese Postal Map Romanization standards set in 1906 included spellings with elements of Nanjing pronunciation. Yet, by 1909, the fading Qing Dynasty had recognized the Beijing dialect as Guóyǔ (国语/國語), or the "national language". Guóyǔ is another name for Mandarin. The Republic of China was established in 1912, and its founding spurred a more successful campaign for a common national language. In February 1913, the Republic of China (中華民國) convened a "Commission on the Unification of Pronunciation" (讀音統一會) in Beijing in order to develop a phonetic system and national language for China. After years of widespread research and debate, the Commission adopted the Zhuyin alphabet as China's official phonetic alphabet in 1918. The Commission then turned to the task of standardizing the language that the new Zhuyin alphabet would represent.
In 1920, the Commission published a Dictionary of National Pronunciation (國音字典) that adopted a modification of Beijing's phonology. Mandarin was not modelled on the actual speech of the majority of real Beijing residents, but rather on the way a hypothetical educated Beijing person would speak, as imagined by Mandarin's creators. After much heated discussion between proponents of northern and southern dialects, and an abortive attempt at an artificial pronunciation, the National Language Unification Commission finally settled on the Beijing dialect in 1932. The government of the People's Republic of China, established in 1949, continued the effort. In 1955, the name Guóyǔ was changed to Pǔtōnghuà (普通話), or "common speech". In 1982, the People's Republic of China (中華人民共和國) amended its constitution, making Mandarin the official language of China. When Mandarin was officially established in 1932, its proponents' aim was to make it the bond that would unify the language of the Chinese people in the century to come. At present, approximately 70% of Chinese people speak Mandarin fluently.
Ischiopagi comes from the Greek ischio-, meaning hip (ilium), and -pagus, meaning fixed or united. It is the medical term used for conjoined twins (Class V) who are united at the pelvis. The twins are classically joined with the vertebral axis at 180°. In the most frequent cases, however, the twins have two separate spines forming a lateral angle smaller than 90°. The conjoined twins usually have four arms; two, three or four legs; and typically one external genitalia and anus. The condition is most often confused with pygopagus, in which the twins are joined ventrally at the buttocks facing away from each other, whereas ischiopagus twins are joined dorsally at the sacrum and coccyx. Parapagus is also similar to ischiopagus; however, parapagus twins are joined side by side, whereas ischiopagus twins typically have spines connected at a 180° angle, facing away from one another. Ischiopagus Dipus: This is the rarest variety; the twins share two legs and have no lower extremities on one side. Ischiopagus Tripus: This is the most common variety. These twins share three legs; the third leg is often two fused legs, or is non-functioning. The twins also usually share only one set of external genitalia. Ischiopagus Tetrapus/Quadripus: In this variety the twins lie along a symmetrical, continuous longitudinal axis, with their area of union not broken anteriorly. The axes extend in a straight line but in opposite directions. The lower extremities are oriented at right angles to the axes of the thorax, and the limbs adjacent to the union of the ischium belong to the opposite twin. During embryonic development, twins can form from the splitting of a single embryo (monozygotic), which produces identical twins, or they can arise from separate oocytes in the same menstrual cycle (dizygotic), which produces fraternal twins. Although the latter is more frequent, monozygotic twinning is the route by which conjoined twins can develop. In monozygotic twinning leading to conjoined twins such as ischiopagi, the twins form by the splitting of a bilaminar embryonic disc after the formation of the inner cell masses, so the twins occupy the same amnion, which can lead to conjoining when the twins do not separate properly during the twinning process. Separation occurring between the seventh and thirteenth days should result in monochorionic, monoamniotic identical twins sharing a yolk sac. If separation of the twins occurs at later stages of development, prior to the appearance of the primitive streak and axial orientation, then it can be predicted that conjoined twins will develop. Exactly what goes wrong to produce ischiopagus or any conjoined twins is thought to result from either incomplete fission or double overlapping inducing centers on the same germ disc. Various studies suggest that mechanical disturbances, such as shaking of the blastomeres, exposure of the embryo to cold or insufficient oxygen during the early process of cleavage, grafting an organizer onto a gastrula or joining half-gastrulae together, or constricting the blastula or early gastrula, can cause the incomplete separation of monozygotic twins. However, studies have shown that these disturbances must happen at critical times in the pregnancy for conjoined twins to develop. Conjoined twins are at high risk of being stillborn or dying shortly after birth. In some cases, a healthy twin and a parasitic twin are born. The parasitic twin has no hope of survival; it dies and is then surgically separated from its twin.
Depending upon how the twins are attached and what is shared between them, complications can arise from surgically separating the live twin from the dead twin. In ischiopagus cases, the children share a pelvic region along with the gastrointestinal tract and genital region. Most ischiopagus twins will need reconstructive surgery of the genitals and the gastrointestinal tract so that each twin will be able to perform normal bodily functions. For ischiopagus twins who both survive birth, complications and risks also arise. Usually, if both twins survive labor, one twin will be healthy and strong while the other is malnourished and weak. Surgery therefore has to be planned well in advance to determine the best option and how to keep both children alive during surgery as well as afterwards. Separation is the only treatment for ischiopagus. The rarity of the condition, as well as the challenge it presents in separating the twins, has made it difficult to understand. In recent years, with advancing medical technology, physicians have been able to successfully separate ischiopagus twins. However, success depends on the organs shared, how closely joined the twins are, and what risks could arise from separating the twins during surgery. Since ischiopagus twins usually share a gastrointestinal tract and other organs in the pelvic region, it takes months of planning to decide whether or not separation of the twins outweighs the complications and risks associated with surgery and reconstruction of organs. Surgery to separate conjoined twins has allowed surgeons to study the mechanisms of embryogenesis as well as the physiological consequences of parabiosis. Depending upon which organs are shared and whether the twins are surgically separable, often only one of the twins survives the surgery, or both die due to complications either before or during the operation. Now that successful surgeries have been reported and more findings are becoming available through research and pre-surgery evaluation, better surgical techniques and procedures should become available to help increase the survival rate of ischiopagus twins as well as other conjoined twins. Ischiopagus is a rare anomaly, occurring in about 1 in every 100,000 live births and in 1 out of 10 conjoined twin births. Most ischiopagus cases occur in India and Africa. Of the varieties of ischiopagus twins, Ischiopagus Tetrapus is the most prevalent, accounting for 68.75% of all ischiopagus cases. Ischiopagus Tripus occurs in 31.25% of cases, while Ischiopagus Dipus occurs in only 6.25% of all ischiopagus cases.
There's more than one Independence Day in the U.S. On June 19, 1865, General Gordon Granger rode into Galveston, Texas, and announced that slaves were now free. Since then, June 19 has been celebrated as Juneteenth across the nation. Here's what you should know about the historic event and celebration. 1. SLAVES HAD ALREADY BEEN EMANCIPATED—THEY JUST DIDN'T KNOW IT. The June 19 announcement came more than two and a half years after Abraham Lincoln issued the Emancipation Proclamation on January 1, 1863, so technically, from the Union's perspective, the 250,000 slaves in Texas were already free—but none of them were aware of it, and no one was in a rush to inform them. 2. THERE ARE MANY THEORIES AS TO WHY THE LAW WASN'T ENFORCED IN TEXAS. News traveled slowly back in those days—it took Confederate soldiers in western Texas more than two months to hear that Robert E. Lee had surrendered at Appomattox. Still, some have struggled to explain the 30-month gap between the proclamation and freedom, leading some to suspect that Texan slave owners purposely suppressed the announcement. Other theories include that the original messenger was murdered to prevent the information from being relayed or that the Federal government purposely delayed the announcement to Texas in order to get one more cotton harvest out of the slaves. But the real reason is probably that Lincoln's proclamation simply wasn't enforceable in the rebel states before the end of the war. 3. THE ANNOUNCEMENT ACTUALLY URGED FREED SLAVES TO STAY WITH THEIR FORMER OWNERS. General Order No. 3, as read by General Granger, said: "The people of Texas are informed that, in accordance with a proclamation from the Executive of the United States, all slaves are free. This involves an absolute equality of personal rights and rights of property between former masters and slaves, and the connection heretofore existing between them becomes that between employer and hired labor. The freedmen are advised to remain quietly at their present homes and work for wages. They are informed that they will not be allowed to collect at military posts and that they will not be supported in idleness either there or elsewhere." 4. WHAT FOLLOWED WAS KNOWN AS "THE SCATTER." Obviously, most former slaves weren't terribly interested in staying with the people who had enslaved them, even if pay was involved. In fact, some were leaving before Granger had finished making the announcement. What followed was called "the scatter," when droves of former slaves left the state to find family members or more welcoming accommodations in northern regions. 5. NOT ALL SLAVES WERE FREED INSTANTLY. Texas is a large state, and General Granger's order (and troops to enforce it) were slow to spread. According to historian James Smallwood, many enslavers deliberately suppressed the information until after the harvest, and some beyond that. In July 1867 there were two separate reports of slaves being freed, and one report of a Texas horse thief named Alex Simpson whose slaves were only freed after his hanging in 1868. 6. FREEDOM CREATED OTHER PROBLEMS. Despite the announcement, Texas slave owners weren't too eager to part with what they felt was their property. When legally freed slaves tried to leave, many of them were beaten, lynched, or murdered. "They would catch [freed slaves] swimming across [the] Sabine River and shoot them," a former slave named Susan Merritt recalled. 7. THERE WERE LIMITED OPTIONS FOR CELEBRATING. 
When freed slaves tried to celebrate the first anniversary of the announcement a year later, they were faced with a problem: Segregation laws were expanding rapidly, and there were no public places or parks they were permitted to use. So, in the 1870s, former slaves pooled together $800 and purchased 10 acres of land, which they deemed "Emancipation Park." It was the only public park and swimming pool in the Houston area that was open to African Americans until the 1950s. 8. JUNETEENTH CELEBRATIONS WANED FOR SEVERAL DECADES. It wasn't because people no longer wanted to celebrate freedom—but, as Slate so eloquently put it, "it's difficult to celebrate freedom when your life is defined by oppression on all sides." Juneteenth celebrations waned during the era of Jim Crow laws until the civil rights movement of the 1960s, when the Poor People's March planned by Martin Luther King Jr. was purposely scheduled to coincide with the date. The march brought Juneteenth back to the forefront, and when march participants took the celebrations back to their home states, the holiday was reborn. 9. TEXAS WAS THE FIRST STATE TO DECLARE JUNETEENTH A STATE HOLIDAY. Texas deemed the holiday worthy of statewide recognition in 1980, the first state to do so. 10. JUNETEENTH IS STILL NOT A FEDERAL HOLIDAY. Though most states now officially recognize Juneteenth, it's still not a national holiday. As a senator, Barack Obama co-sponsored legislation to make Juneteenth a national holiday, though it didn't pass then or while he was president. One supporter of the idea is 91-year-old Opal Lee—since 2016, Lee has been walking from state to state to draw attention to the cause. 11. THE JUNETEENTH FLAG IS FULL OF SYMBOLISM. Juneteenth flag designer L.J. Graf packed lots of meaning into her design. The colors red, white, and blue echo the American flag to symbolize that the slaves and their descendants were Americans. The star in the middle pays homage to Texas, while the bursting "new star" on the "horizon" of the red and blue fields represents a new freedom and a new people. 12. JUNETEENTH TRADITIONS VARY ACROSS THE U.S. As the tradition of Juneteenth spread across the U.S., different localities put different spins on celebrations. In southern states, the holiday is traditionally celebrated with oral histories and readings, "red soda water" or strawberry soda, and barbecues. Some states serve up Marcus Garvey salad with red, green, and black beans, in honor of the black nationalist. Rodeos have become part of the tradition in the southwest, while contests, concerts, and parades are a common theme across the country.
The Andes Meltdown With the collaboration and support of AirClim, the Environment and Natural Resources Organization (FARN, Argentina) has recently published a comprehensive brief report summarizing the main findings on the impacts of climate change in the Andean cryosphere and the subsequent consequences for societies and ecosystems. As 2019 marks the close of the hottest decade ever recorded and global emissions from fossil fuels hit yet another record high, climate change is affecting mountain regions at a faster rate than other terrestrial habitats. Worldwide, mountains are losing their ice and snow and the Andes are far from being the exception, representing one of the areas where this is happening at one of the most terrifying rates. The cryosphere – the frozen-water portion of the Earth system – provides a plethora of services for humanity and our planet’s natural ecosystems. Ice and snow play a crucial role in feedback and regulation of the Earth’s weather and climate while storing and supplying freshwater essential for people’s survival, healthy ecosystems, agriculture, hydropower, and economic activities. With an estimated number of 18,800 glaciers1, the Andes contain the largest glacierized area in the Southern hemisphere outside of Antarctica. Glacier runoff and seasonal snowmelt play a key role in freshwater supply for more than 85 million people living in the region, representing a critical contribution to Andean communities’ socioeconomic activities and their sustainability. Moreover, glacier melt acts as an important buffer during periods of drought, providing water to an extensive portion of the Tropical and Dry Andes. Sadly, the future of the cryosphere in these mountains is at stake: Climate change has positioned Andean glaciers among the fastest-retreating and largest contributors to sea level rise on Earth. Over recent decades, Andean glaciers have shrunk by up to 50 percent, a trend that is expected to accelerate. According to recent studies, between 2000 and 2018 the average ice mass loss rate in the Andes was 22.9 Gt per year2, which translates to an average loss of water equivalent to a four-storey building in 18 years. Low-altitude glaciers in the Tropical Andes are particularly sensitive to warming because of their small size, and many will likely disappear in the coming decades, affecting the water supply of millions of people. The Andean communities are already experiencing changes in hydrological regimes and water scarcity. Combined with a growing population, the climate emergency is putting unprecedented pressure on the existing water supply in metropolitan and rural areas of the Andes. Melting snow and glaciers are also increasingly exposing mountain communities to hazardous events such as glacial lake outburst floods (GLOFs) and landslides. Biodiversity will also be affected as glacier retreat could seriously affect unique Andean ecosystems such as the northern tropical páramos and high-altitude wetlands, where meltwater depletion is likely to cause them to shrink. The report stresses that the more heat-trapping gases we keep releasing into the atmosphere, the more severe the impacts from a melting cryosphere will become. Very few tropical glaciers will survive today’s 1.1°C of warming, and a great deal of the Southern Andes glaciers could resist 1.5°C. But most of them would disappear almost completely at 2°C. As the IPCC points out, every fraction of a degree matters and all choices we make now are critical for the future of the cryosphere. 
Therefore, it becomes a human imperative to deeply cut greenhouse gas (GHG) emissions in the next few years if we want to preserve the vital services ice and snow provide in high-mountain areas and downstream. Unfortunately – due to the GHGs that are already present in the atmosphere – the Andes are locked into increased warming. Andean countries thus face a serious need for more effective adaptation strategies, which should be planned and implemented incorporating both scientific and indigenous knowledge while engaging local communities. 1. Pfeffer, W. T. et al. The Randolph Glacier Inventory: a globally complete inventory of glaciers. J. Glaciol. 60, 537–552 (2014) 2. Dussaillant, I. et al. Two decades of glacier mass loss along the Andes. Nature Geoscience (2019). doi:10.1038/ s41561-019-0432-5 The full report is available both in Spanish and
All radiation from the sun, whether it is PAR light, UV or infrared, has an effect on the growth and production of crops. You can selectively manage it with coatings that increase the positive effects and inhibit the negative ones. Less than 15% of the sun’s radiation is visible to us. Coincidentally, this largely corresponds with the region of the spectrum that plants use for photosynthesis. For a long time, we focused exclusively on managing the visible part of light. However, over the past ten years, our understanding of the whole spectrum has increased considerably. Consequently, we can now produce coatings that manage a much larger area of the spectrum, yielding better results in terms of production and quality. Visible light (wavelength 400 – 700 nm) virtually corresponds with the region of the spectrum that plants use for photosynthesis. This is why it is called photosynthetically active radiation. In short: PAR. Not all colours result in equal levels of photosynthesis. Red is the most efficient colour, with efficiency decreasing as we move towards green, and peaking again with blue. This applies to individual leaves. Contrary to popular belief, at crop level, green light is actually just as efficient as blue. The colour of the light also controls the shape and development of the crop. This often concerns the ratio between the colours. If there is more red compared to far red, the crop will grow more compact. This is also the case if there is more blue light compared to green. In high doses, UV radiation (wavelength 280 – 400 nm) can suppress photosynthesis and cause visible damage to the crop. This is a very real danger in plastic greenhouses. UV-B (280-320 nm) is particularly responsible for this. However, UV can also have positive effects. It improves the colouring of ornamental crops. And, very importantly, it improves their resistance to diseases. All radiation above a wavelength of 700 nm is called infrared. The first part is the short wavelength radiation, also known as heat radiation or near infrared (NIR, up to 2500 nm). The transition between light and infrared is formed by far red, a colour that our eyes can just about see. Far red is very important to plants, due to among other reasons its effect in plant stretching. Although the plant does not use infrared for photosynthesis, it provides warmth to the plant. This can be highly desirable, but too much is never good. If the temperature of the plant increases too much, everything will go wrong: first it will stop photosynthesis, then irreparable damage will occur. Nowadays we can selectively control all parts of the solar spectrum. Optifuse allows a better spread and penetration into the crop and therefore a better use of light. Eclipse reduces the level of radiation across the board. This may be necessary where there is high radiation, although it is often better to reflect heat radiation with Transpar, which allows you to retain the useful PAR light while preventing excess heating.
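To make the wavelength bands described above more concrete, here is a small illustrative Python sketch (not from the source): it takes a hypothetical measured spectrum and reports what share of the sampled energy falls in the UV, PAR and far-red/near-infrared bands, plus a red:far-red ratio of the kind growers watch when judging stretching. The band edges follow the figures quoted above (UV 280–400 nm, PAR 400–700 nm, infrared above 700 nm); the 600–700 nm and 700–780 nm windows used for the red:far-red ratio, and all of the sample irradiance values, are assumptions made for the sake of the example.

```python
# Hypothetical spectral samples: wavelength (nm) -> irradiance for that sample (arbitrary units)
spectrum = {300: 0.2, 350: 0.5, 450: 1.8, 550: 2.0, 650: 2.1,
            730: 1.5, 900: 3.0, 1500: 1.2, 2400: 0.4}

BANDS = {
    "UV (280-400 nm)": (280, 400),
    "PAR (400-700 nm)": (400, 700),
    "Far red / NIR (700-2500 nm)": (700, 2500),
}

def band_share(samples: dict, lo: float, hi: float) -> float:
    """Fraction of the total sampled energy between lo (inclusive) and hi (exclusive)."""
    total = sum(samples.values())
    in_band = sum(v for wl, v in samples.items() if lo <= wl < hi)
    return in_band / total

for name, (lo, hi) in BANDS.items():
    print(f"{name}: {band_share(spectrum, lo, hi):.0%}")

# Red : far-red ratio - a common proxy for how compact the crop will grow
red = sum(v for wl, v in spectrum.items() if 600 <= wl < 700)
far_red = sum(v for wl, v in spectrum.items() if 700 <= wl < 780)
print(f"Red:far-red ratio = {red / far_red:.2f}")
```

A real light meter would deliver far finer wavelength steps, but the bookkeeping is the same: sum the energy per band and compare the bands that drive photosynthesis, stretching and heating.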
In a new study on ocean circulation, researchers have shown that icebergs and meltwater from the North American ice sheet regularly reached South Carolina and even southern Florida during the last ice age, about 21,000 years ago. The researchers studied “iceberg scours,” the grooves and pits that were made by icebergs as they scraped the bottom of the seafloor along the southeastern U.S. Oceanographer Alan Condron of the University of Massachusetts Amherst said: “Our study is the first to show that when the large ice sheet over North America known as the Laurentide ice sheet began to melt, icebergs calved into the sea around Hudson Bay and would have periodically drifted along the east coast of the United States as far south as Miami and the Bahamas in the Caribbean, a distance of more than 3,100 miles, about 5,000 kilometers.” The researchers analyzed high-resolution images of the sea floor from Cape Hatteras to Florida and identified about 400 scour marks on the seabed that were formed by enormous icebergs plowing through mud on the sea floor. These characteristic grooves and pits were formed as icebergs moved into shallower water and their keels bumped and scraped along the ocean floor. Condron said: “The depth of the scours tells us that icebergs drifting to southern Florida were at least 1,000 feet, or 300 meters thick. This is enormous. Such icebergs are only found off the coast of Greenland today.” To investigate how icebergs might have drifted as far south as Florida, the scientists simulated the release of a series of glacial meltwater floods in a high-resolution ocean circulation model at four different levels for two locations, Hudson Bay and the Gulf of St. Lawrence. Condron added: “In order for icebergs to drift to Florida, our glacial ocean circulation model tells us that enormous volumes of meltwater, similar to a catastrophic glacial lake outburst flood, must have been discharging into the ocean from the Laurentide ice sheet, from either Hudson Bay or the Gulf of St. Lawrence.” Condron’s work, conducted with Jenna Hill of Coastal Carolina University, is described in the current advance online issue of Nature Geoscience. Bottom line: A new study on ocean circulation suggests that icebergs and meltwater from the North American ice sheet would have regularly reached South Carolina and even southern Florida during the last ice age, about 21,000 years ago. Eleanor Imster has helped write and edit EarthSky since 1995. She was an integral part of the award-winning EarthSky radio series almost since it began until it ended in 2013. Today, as Lead Editor at EarthSky.org, she helps present the science and nature stories and photos you enjoy. She also serves as one of the voices of EarthSky on social media platforms including Facebook, Twitter and G+. She and her husband live in Tennessee and have two grown sons.
Crime Scene Documentation The goal of crime-scene documentation is to create a visual record that will allow the forensics lab and the prosecuting attorney to easily recreate an accurate view of the scene. The CSI uses digital and film cameras, different types of film, various lenses, flashes, filters, a tripod, a sketchpad, graph paper, pens and pencils, measuring tape, rulers and a notepad at this stage of the investigation. He may also use a camcorder and a camera boom. Scene documentation occurs during a second walk-through of the scene (following the same path as the initial walk-through). If there is more than one CSI on the scene (Mr. Clayton has been the sole CSI on a scene; he has also been one of dozens), one CSI will take photos, one will create sketches, one will take detailed notes and another might perform a video walk-through. If there is only one CSI, all of these jobs are his. Note-taking at a crime scene is not as straightforward as it may seem. A CSI's training includes the art of scientific observation. Whereas a layperson may see a large, brownish-red stain on the carpet, spreading outward from the corpse, and write down "blood spreading outward from underside of corpse," a CSI would write down "large, brownish-red fluid spreading outward from underside of corpse." This fluid might be blood; it might also be decomposition fluid, which resembles blood at a certain stage. Mr. Clayton explains that in crime scene investigation, opinions don't matter and assumptions are harmful. When describing a crime scene, a CSI makes factual observations without drawing any conclusions. CSIs take pictures of everything before touching or moving a single piece of evidence. The medical examiner will not touch the corpse until the CSI is done photographing it and the surrounding area. There are three types of photographs a CSI takes to document the crime scene: overviews, mid-views, and close-ups. Overview shots are the widest possible views of the entire scene. If the scene is indoors, this includes: - views of all rooms (not just the room where the crime seems to have occurred), with photos taken from each corner and, if a boom is present, overhead - views of the outside of the building where the crime happened, including photos of all entrances and exits - views of the building showing its relation to surrounding structures - photos of any spectators at the scene These last shots might identity a possible witness or even a suspect. Sometimes, criminals do actually return to the scene of the crime (this is particularly true in arson cases). Mid-range photos come next. These shots show key pieces of evidence in context, so the photo includes not only the evidence but also its location in a room and its distance from other pieces of evidence. Finally, the CSI takes close-ups of individual pieces of evidence, showing any serial numbers or other identifying characteristics. For these pictures, the CSI uses a tripod and professional lighting techniques to achieve the best possible detail and clarity -- these photos in particular will provide the forensics lab with views to assist in analyzing the evidence. The CSI also takes a second set of close-up shots that includes a ruler for scale. Every photo the CSI takes makes it into the photo log. 
This log documents the details of every photo, including the photograph number, a description of the object or scene in the photograph, the location of the object or scene, the time and date the photograph was taken and any other descriptive details that might be relevant. Without a good photo log, the pictures of the scene lose a lot of their value. In the investigation of John F. Kennedy's assassination, the FBI photographers who attended the autopsy didn't create descriptions of the pictures they were taking, and investigators were later unable to distinguish between entrance and exit wounds in the photos. In addition to creating a photographic record of the scene, CSIs also create sketches to depict both the entire scene, which is easier to do in a sketch than in a photograph because a sketch can span several rooms, and particular aspects of the scene that will benefit from exact measurements. The goal is to show locations of evidence and how each piece of evidence relates to rest of scene. The sketch artist may indicate details like the height of a door frame, the exact size of the room, the distance from the window to the door and the diameter of the hole in the wall above the victim's body. Scene documentation may also include a video walk-through, especially in major cases involving serial killers or multiple homicides. A video recording can offer a better feel for the layout of the crime scene -- how long it takes to get from one room to another and how many turns are involved, for instance. Also, once the investigation is further along, it may reveal something that was overlooked at the scene because the investigators didn't know to look for it. During a video walk-through, the CSI captures the entire crime scene and surrounding areas from every angle and provides a constant audio narrative. After the CSI has created a full record of the crime scene exactly as it was when he arrived, it's time to collect the evidence. Now he starts touching things.
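Purely as an illustration (the article does not prescribe any particular software), the photo log described above can be thought of as a simple table with one row per exposure; the field names in this Python sketch are assumptions based on the details the log is said to record:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PhotoLogEntry:
    """One row of a crime-scene photo log, mirroring the fields described above."""
    photo_number: int
    description: str      # what object or scene the photo shows
    location: str         # where the object or scene is within the crime scene
    taken_at: datetime    # time and date the photograph was taken
    notes: str = ""       # any other descriptive details that might be relevant

photo_log = [
    PhotoLogEntry(1, "Overview of living room from northeast corner",
                  "123 Example St, living room", datetime(2024, 5, 4, 14, 32)),
    PhotoLogEntry(2, "Mid-range: stain on carpet in relation to sofa",
                  "123 Example St, living room", datetime(2024, 5, 4, 14, 41),
                  notes="Close-ups with scale ruler to follow"),
]

for entry in photo_log:
    print(f"#{entry.photo_number}: {entry.description} ({entry.taken_at:%Y-%m-%d %H:%M})")
```

Whether kept on paper or digitally, the point is the same: every photograph gets a numbered entry with enough context that the lab and the prosecutor can reconstruct the scene later.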
What is a common scientific tool for forecasting weather? Answer: Doppler radar. Doppler radar is a common tool used to detect precipitation in clouds and is used by nearly every meteorologist to prepare a forecast. The machine emits a signal and listens for it to bounce back. Once it does, computers analyze the pulse's strength and calculate how far away the storm is.
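As a rough illustration of that distance calculation (not something the source spells out), the radar pulse travels at the speed of light, so the range to the storm is the speed of light multiplied by the round-trip echo delay, divided by two. A minimal Python sketch with a made-up echo delay:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second

def range_to_echo_km(echo_delay_s: float) -> float:
    """Estimate the distance to a radar echo from its round-trip delay.

    The pulse travels out to the storm and back, so the one-way
    distance is half the total path length."""
    return SPEED_OF_LIGHT_M_PER_S * echo_delay_s / 2 / 1000  # metres -> kilometres

# Hypothetical example: an echo that returns after 0.4 milliseconds
print(f"{range_to_echo_km(0.4e-3):.0f} km")  # roughly 60 km away
```

(Doppler radar additionally measures the frequency shift of the returning pulse to estimate how the precipitation is moving, which is beyond this sketch.)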
Newswise — A new Boise State University study published today in the Proceedings of the National Academy of Sciences shows that the temperatures associated with the earth’s subduction zones have been historically miscalculated, which has major implications for our understanding of how the planet’s deadliest earthquakes and volcanic arcs are generated. “Our research shows there is a disparity between rocks examined today and our models of how they formed,” explained Matt Kohn, a Boise State University distinguished professor in the Department of Geosciences who specializes in understanding the temperature structure of subduction zones. Subduction zones are where one of Earth’s tectonic plates dives down beneath another. This creates a large proportion of Earth’s deadliest earthquakes and volcanoes. “With a few exceptions, most major earthquakes and volcanic eruptions are associated with a subduction zone,” Kohn explained. By reinterpreting heat loss from Earth’s surface, and creating new models of subduction zones, Kohn and Boise State graduate student Buchanan Kerswell; César R. Ranero, a research professor with the Spanish National Research Council; Adrian E. Castro and Frank S. Spear, researchers with the Department of Earth and Environmental Sciences at the Rensselaer Polytechnic Institute were able to prove that past, commonly referenced thermal-mechanical models of subduction zones are two times too cold. Rather, pressure-temperature conditions determined from exhumed metamorphic rocks are better indicators of subduction zone temperatures. Accurately inferring subduction zone temperatures is crucial for predicting how rocks deform and melt, which defines patterns of arc volcanoes and earthquakes. “There was a huge disparity between the metamorphic rocks we see today and the models that predicted how they were formed,” Kohn said. “We were trying to understand where these differences come from.” To create a new, more accurate model, the team compiled surface heat flow data from subduction zones worldwide and were able to show that the rate of heat loss is higher than can be explained for frictionless sliding, which is often assumed for modeling. Incorporating friction into models, as required by heat flow, reproduces the rock record. “One particular earthquake, one particular volcano – they don’t reflect the long-term behavior of subduction. Rather our work looks at the sum of all the earthquakes and sum of all volcanoes that might exist in a subduction zone over millions of years,” Kohn said. “This is like the Google Earth view of subduction zones.” The team’s research was funded by a $4 million collaborative grant through the National Science Foundation.
Question – History: The Problem of Empire, 1763–1776. No word limit – a short paragraph of 5–9 sentences per question.
1. As British administrators sought to increase colonial revenues and tighten administrative control, what might have led them to pursue a less confrontational course with the colonies? What factors do you think are most important in explaining the failure of compromise?
2. What kinds of provocation caused colonists to riot or otherwise act directly, even violently, in defense of their interests? How did common law, Enlightenment, and republican ideas shape their thinking as they took action?
3. What compromises were proposed in the colonies as alternatives to independence? Why did Patriots reject them?
4. THEMATIC UNDERSTANDING Consider the events listed under “Work, Exchange, and Technology” and “Politics and Power” for the period 1763–1776 on the thematic timeline. How important were the linkages between economic developments and political ones in these years? (http://www.macmillanhighered.com/BrainHoney/Resource/6696/digital_first_content/trunk/test/henretta8e/asset/timeline/timeline103.html) – supporting document
5. ACROSS TIME AND PLACE Chapter 4 presented a turbulent era, marked by social and cultural conflict and imperial warfare, during which the regions of British North America were disparate and without unity. Yet by 1776 — only thirteen years after the Treaty of Paris ending the Great War for Empire — thirteen of Britain’s mainland colonies were prepared to unite in a Declaration of Independence. What happened in that intervening time to strengthen and deepen colonists’ sense of common cause? As they drew together to resist imperial authority, what political and cultural resources did they have in common?
6. VISUAL EVIDENCE Return to the Paul Revere engraving of the Boston Massacre. This image was an instrument of political propaganda. What features of the image are most important to its political purpose? Consider his depiction of both the soldiers and the townspeople. Look, too, at the buildings surrounding the crowd, especially the Custom House on the right. List the ways in which Revere invokes the idea of tyranny in this image. (http://www.macmillanhighered.com/BrainHoney/Resource/6696/digital_first_content/trunk/test/henretta8e/asset/img_ch5/ch05_05UN08.html) – supporting image
7. KEY TURNING POINTS: The Boston Tea Party (1773), the Coercive Acts (1774), and the First Continental Congress (1774). What did Parliament hope to achieve with the Coercive Acts? How did the decision to convene a continent-wide congress demonstrate the failure of Parliament’s efforts?
Earth is often in the firing line of fragments of asteroids and comets, most of which burn up tens of kilometres above our heads. But occasionally, something larger gets through.

That's what happened off Russia's east coast on December 18 last year. A giant explosion occurred above the Bering Sea when an asteroid some ten metres across detonated with an explosive energy ten times greater than the bomb dropped on Hiroshima.

So why didn't we see this asteroid coming? And why are we only hearing about its explosive arrival now?

Nobody saw it

Had the December explosion occurred near a city – as happened at Chelyabinsk in February 2013 – we would have heard all about it at the time. But because it happened in a remote part of the world, it went unremarked for more than three months, until details were unveiled at the 50th Lunar and Planetary Science Conference this week, based on NASA's collection of fireball data. [Image credit: NASA/JPL-Caltech/Center for Near Earth Object Studies]

So where did this asteroid come from?

At risk from space debris

The Solar system is littered with material left over from the formation of the planets. Most of it is locked up in stable reservoirs – the Asteroid belt, the Edgeworth-Kuiper belt and the Oort cloud – far from Earth. Those reservoirs continually leak objects into interplanetary space, injecting fresh debris into orbits that cross those of the planets.

The inner Solar system is awash with debris, ranging from tiny flecks of dust to comets and asteroids many kilometres in diameter. The vast majority of the debris that collides with Earth is utterly harmless, but our planet still bears the scars of collisions with much larger bodies.

The largest, most devastating impacts (like that which helped to kill the dinosaurs 65 million years ago) are the rarest. But smaller, more frequent collisions also pose a marked risk.

In 1908, in Tunguska, Siberia, a vast explosion levelled more than 2,000 square kilometres of forest. Due to the remote location, no deaths were recorded. Had the impact happened just two hours later, the city of St Petersburg could have been destroyed.

In 2013, it was a 10,000-tonne asteroid that detonated above the Russian city of Chelyabinsk. More than 1,500 people were injured and around 7,000 buildings were damaged, but amazingly nobody was killed. [Image credit: Flickr/Alex Alishevskikh, CC BY-SA]

We're still trying to work out how often events like this happen. Our information on the frequency of the larger impacts is pretty limited, so estimates can vary dramatically. Typically, people argue that Tunguska-sized impacts happen every few hundred years, but that's just based on a sample of one event. The truth is, we don't really know.

What can we do about it?

Over the past couple of decades, a concerted effort has been made to search for potentially hazardous objects that pose a threat before they hit Earth. The result is the identification of thousands of near-Earth asteroids upwards of a few metres across.

Once found, the orbits of those objects can be determined, and their paths predicted into the future, to see whether an impact is possible or even likely. The longer we can observe a given object, the better that prediction becomes. But as we saw with Chelyabinsk in 2013, and again in December, we're not there yet. While the catalogue of potentially hazardous objects continues to grow, many still remain undetected, waiting to catch us by surprise.

If we discover a collision is pending in the coming days, we can work out where and when the collision will happen.
That happened for the first time in 2008, when astronomers discovered the tiny asteroid 2008 TC3 just 19 hours before it hit Earth's atmosphere over northern Sudan. For impacts predicted with a longer lead time, it will be possible to work out whether the object is truly dangerous, or would merely produce a spectacular but harmless fireball (like 2008 TC3). For any objects that truly pose a threat, the race will be on to deflect them – to turn a hit into a miss.

Searching the skies

Before we can quantify the threat an object poses, we first need to know that the object is there. But finding asteroids is hard. Surveys scour the skies, looking for faint star-like points moving against the background stars. At a given distance from Earth, a bigger asteroid will reflect more sunlight and therefore appear brighter in the sky. As a result, the smaller the object, the closer it must be to Earth before we can spot it.

Objects the size of the Chelyabinsk and Bering Sea impactors (about 20 and 10 metres in diameter, respectively) are tiny. They can only be spotted when passing very close to our planet; the vast majority of the time they are simply undetectable. As a result, having impacts like these come out of the blue is really the norm, rather than the exception. The Chelyabinsk impactor is a great example: moving on its orbit around the Sun, it approached us in the daylight sky, totally hidden in the Sun's glare. For larger objects, which impact much less frequently but would do far more damage, it is fair to expect we would receive some warning.

Why not move the asteroid?

While we need to keep searching for threatening objects, there is another way we could protect ourselves. [Image credit: NASA's Goddard Space Flight Center] Once we know enough about a threatening object and its orbit, it is just a short hop to being able to deflect it – to change a potential collision into a near-miss. Interestingly, ideas of asteroid deflection dovetail nicely with the possibility of asteroid mining. The technology needed to extract material from an asteroid and send it back to Earth could equally be used to alter the orbit of that asteroid, moving it away from a potential collision with our planet.

We're not quite there yet, but for the first time in our history, we have the potential to truly control our own destiny.

Authors: Jonti Horner, Professor (Astrophysics), University of Southern Queensland
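As a footnote to the energy figures quoted in this article, here is a rough back-of-envelope estimate of what a ten-metre impactor carries. The density and entry speed below are generic assumptions, not values reported for the Bering Sea event, and the result is very sensitive to both:

import math

# Rough impact-energy estimate for a small asteroid.
# Density and speed are illustrative assumptions, not measured values.
diameter_m = 10.0         # approximate size quoted in the article
density_kg_m3 = 3000.0    # assumed: typical stony asteroid
speed_m_s = 20_000.0      # assumed: typical atmospheric entry speed

radius_m = diameter_m / 2
mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m ** 3
energy_j = 0.5 * mass_kg * speed_m_s ** 2

KILOTON_TNT_J = 4.184e12  # energy released by one kiloton of TNT
print(f"mass ≈ {mass_kg:.2e} kg, energy ≈ {energy_j / KILOTON_TNT_J:.0f} kt TNT")

With these assumptions the answer comes out in the tens of kilotons – several Hiroshima-sized bombs – and a modestly faster entry speed quickly pushes the estimate several times higher, which is why such objects matter despite their small size.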
In the early days, either a dead weight of fixed mass was dragged, or the step-on method was used, in which people stood at fixed positions along the track and stepped aboard as the sled passed. Today's sleds use a complex system of gears to move weights of up to 65,000 pounds. At the start of a pull, all of the weight sits over the sled's rear axles, giving an effective load of little more than the sled itself. As the tractor travels the course, the weight box is pushed ahead of the sled's axles, pressing the front of the sled into the ground and effectively creating a gain in weight until the tractor can no longer overcome the force of friction.

The sled can be adjusted in many ways to produce a desired pull. Weight can be added to or removed from the box. Adding weight to the pan gives the sled more starting weight. The box gearing can be changed so the box moves faster or slower, and the starting position of the box can be shifted within roughly a two-foot range, which affects the distance of travel. The final adjustment is the placement of the trip, which engages the push-down system and transfers the full weight of the sled onto the pulling vehicle.

Box – contains the weight used to stop the vehicle; it moves progressively up the length of the sled rails during the pull, driven off the front set of sled wheels.
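A simple way to see why the pull gets progressively harder is to treat the pan as sliding (Coulomb) friction: the drag force is the friction coefficient times the load pressing the pan into the ground, and that load grows as the box moves forward. The numbers below are illustrative assumptions, not specifications of any real sled:

MU = 0.7                 # assumed friction coefficient between pan and ground
BOX_WEIGHT_LBF = 65_000  # maximum box weight quoted in the text

# Fraction of the box weight carried by the pan as the box moves forward.
for fraction_on_pan in (0.0, 0.25, 0.5, 0.75, 1.0):
    pan_load_lbf = BOX_WEIGHT_LBF * fraction_on_pan
    drag_lbf = MU * pan_load_lbf
    print(f"box {fraction_on_pan:4.0%} forward -> drag ≈ {drag_lbf:9,.0f} lbf")

In this simplified picture the required drawbar pull climbs from near zero to tens of thousands of pounds-force over the course of the run, which is why every tractor eventually stalls.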
How to Get Values out of Vectors in R

Vectors would be pretty impractical if you couldn't look up and manipulate individual values. You can perform these tasks easily by using R's advanced, powerful indexing system.

How R does indexing

Every time R shows you a vector, it displays a number such as [1] in front of the output. In this example, [1] tells you where the first position in your vector is. This number is called the index of that value. If you make a longer vector — say, with the numbers from 1 to 30 — you see more indices. Consider this example:

> numbers <- 30:1
> numbers
 [1] 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14
[18] 13 12 11 10  9  8  7  6  5  4  3  2  1

Here, you see that R counts 13 as the 18th value in the vector. At the beginning of every line, R tells you the index of the first value in that line. If you try this example on your computer, you may see a different index at the beginning of the line, depending on the width of your console.

How to extract values from a vector in R

Those square brackets [] illustrate another strong point of R. They represent a function that you can use to extract a value from that vector. You can get the fifth value of the preceding number vector like this:

> numbers[5]
[1] 26

Okay, this example isn't too impressive, but the bracket function takes vectors as arguments. If you want to select more than one number, you can simply provide a vector of indices as an argument inside the brackets, like so:

> numbers[c(5,11,3)]
[1] 26 20 28

R returns a vector with the numbers in the order you asked for. So, you can use the indices to order the values the way you want. You also can store the indices you want to retrieve in another vector and give that vector as an argument, as in the following example:

> indices <- c(5,11,3)
> numbers[indices]
[1] 26 20 28

You can use indices to drop values from a vector as well. If you want all the numbers except for the third value, you can do that with the following code:

> numbers[-3]
 [1] 30 29 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1

Here, too, you can use a complete vector of indices. If you want to expel the first 20 numbers, use this code:

> numbers[-(1:20)]
 [1] 10 9 8 7 6 5 4 3 2 1

Be careful to add parentheses around the sequence. If you don't, R will interpret -1:20 as the sequence from –1 to 20, which isn't what you want here. If you try that code, you get the following error message:

> numbers[-1:20]
Error in numbers[-1:20] : only 0's may be mixed with negative subscripts

This message makes you wonder what the index 0 is. Well, it's literally nothing. If it's the only value in the index vector, you get an empty, or zero-length, vector back, whatever sign you give it; otherwise, it won't have any effect. You can't mix positive and negative index values, so either select a number of values or drop them.

You can do a lot more with indices — they help you write concise and fast code.
Laura Nielsen for Frontier Scientists – The year 2013 was the fourth warmest year on record, according to the National Oceanic and Atmospheric Administration's National Climatic Data Center. 2013 tied with 2003 in NOAA's record, which details global average temperatures all the way back to the year 1880. "Including 2013, 9 of the 10 warmest years in the 134-year period of record have occurred in the 21st century. Only one year during the 20th century—1998—was warmer than 2013."

The combined land and ocean surface temperature average might be a little surprising to many on the eastern side of the contiguous 48 states' midline, where folks have been shivering through a cold January. It's probably less surprising to Alaskans; there, the January temperature was nearly 15°F above normal. Temperatures in the United States have seemed odd, yet no state set a monthly record for January cold. Though it's been especially cold in New England, this winter's cold temperatures are fairly unremarkable compared to winters from past decades. Though it can be hard to remember, the United States contains only 2% of the surface of the globe. In much of the world, above-average annual temperatures prevailed.

Weather | Climate

Rick Thoman, climate scientist and service manager for the National Weather Service, Alaska region, explained that "on the climate scale, which is defined to be several decades long," something like a two-month cold snap "is just one data point. When we talk about climate change we are talking about those statistics over several decades' change. So individual seasons like this don't really tell us much about climate change." Weather is always variable, and there are "many ways the atmosphere can be" – it's a riotous place. "One day, one cold snap, one snow storm in May tells us nothing about climate." You need long-term data. Thoman noted: "Most of an adult human lifetime is climate." So, "you have to look at all of the seasons over those several decades to be able to make statements about climate."

Examining the data, we can see an upward linear trend: Earth continues to face rising temperatures. Variable weather continues to draw fluctuating peaks and valleys on the average temperatures graph, but overall the rise in global temperatures is apparent. At present, there is more carbon dioxide in Earth's atmosphere than there has been at any time during the last 800,000 years. And levels are rising. Actions like burning fossil fuels pump more carbon dioxide into the atmosphere. There, CO2 joins other greenhouse gases in trapping heat within Earth's atmosphere.

If you encounter someone who insists that global warming stopped in 1998, point them to global temperature averages. Yes, 1998 was a very warm year. Still, it is one point among many showing a linear increase in temperatures – our planet is growing warmer. 1998 was a particularly warm year because it coincided with a particularly strong El Niño. 2013 was also a warm year, and it did not coincide with the increased temperatures that go hand-in-hand with El Niño.

El Niño is a key driver of hotter years. During El Niño events, ocean temperatures in the central and eastern Pacific Ocean near the equator are unusually high. Warm ocean waters in the eastern tropical Pacific tend to encourage warm air temperatures and can impact weather across the globe. Normally, the Equatorial Pacific maintains cold ocean temperatures in the East off of South America's western coast.
Trade Winds blow toward the West, pushing warm surface waters toward Indonesia. Because of how water cycles, cold water from the deep ocean comes up to the surface in the East near South America. Rain tends to occur over warm water, making it rainy in the West, cool and dry in the East. During El Niño, Trade Winds weaken, and fail to push normal amounts of warm ocean water toward the West, and instead the warm ocean water lingers. Less rain reaches the Western Pacific and Indonesia, while more rain can be found in the Eastern Pacific – potentially leading to devastating flood or drought. The general cycle is disrupted when El Niño occurs. NOAA’s explainer page records: “The eastward displacement of the atmospheric heat source overlaying the warmest water results in large changes in the global atmospheric circulation, which in turn force changes in weather in regions far removed from the tropical Pacific.” Its opposite is La Niña, in which normal conditions not only prevail but are strengthened, and very strong Trade Winds push warm surface water West, which ultimately allows the ocean to store more heat and to cycle cold deep water up to the surface in the East. El Niño Southern Oscillation The effects of El Niño and La Niña temperature fluctuation trends aren’t confined to the water. Earth’s oceans and atmosphere interact; they are intrinsically related. The tropical Pacific Ocean in particular seems to have been blunting the effects of global warming by taking up excess heat from the warming atmosphere. Heat exchange between ocean and atmosphere can encourage unusually strong or weak Trade Winds, and can guide the formation of areas of high or low air pressure. Those atmospheric fluctuations that accompany El Niño highs, neutrals, and lows are referred to as the Southern Oscillation. Taken together El Niño Southern Oscillation (ENSO) plays a major role in governing our planet’s energy balance, because heat is a form of energy and ENSO transports heat between the atmosphere and ocean and back. “If you warm the planet – as we are – then you change the heat content of the upper ocean. That changes the sea surface temperatures and how the different ocean layers interact together,” explained Matt England, deputy director of the Climate Change Research Center at the University of New South Wales. He noted that ENSO “Changes atmospheric circulation profoundly.” Looking ahead, England believes that in the context of ongoing global warming and increased planetary heat, “Future El Niños will be phenomenally costly to society.” El Niño Southern Oscillation ought to have its own feature article (upcoming) but we can take a brief look. Climatologists describe ENSO as having three states – neutral, negative (El Niño) or positive (La Niña). If you examine the Pacific during El Niño events – negative ENSO states – there is higher than average air pressure to the west (near Indonesia) and below-average air pressure to the east (near South America’s western coast). ENSO impacts conditions near the equator most immediately, and then its effects are felt further abroad. “In 2013, we had a neutral ENSO, we are not far out of a solar minimum where the energy from the sun is low and we also know pollution from aerosols that cool the planet has been very high,” England elaborated. “With all these factors together, I would not have been surprised if 2013 was the fourth coolest year on record. 
But of course, we should not be surprised because of the extra greenhouse gases in the atmosphere."

How do we measure global temperature?

In the simplest terms, Earth's energy balance is governed by how much energy reaches Earth from our star, how much energy is stored here, and how much heat escapes from Earth and Earth's atmosphere back into the cold reaches of space. It involves an incredibly complex interplay of factors, including but not limited to: the Sun's current strength, the position of the planet in relation to the Sun, the albedo (reflective or absorptive qualities) of Earth's surfaces, the intermixing of atmospheric layers, the ocean's temperature, and the measure of heat-trapping gases that are cached away or freely roaming the atmosphere.

To measure Earth's average temperature, data is taken from ground-based temperature measurements, from buoy-based sea surface temperatures, and – beginning in 1979 – from satellites that use microwaves to measure air temperatures in the troposphere high above our heads. England notes that "for a long time now climatologists have been tracking the global average air temperature as a measure of planetary climate variability and trends, even though this metric reflects just a tiny fraction of Earth's net energy or heat content." In reality, "the globe – our planet – spans the oceans, atmosphere, land and ice systems in their entirety."

Measuring everything and everywhere is difficult, though. It is hard to establish weather stations in the most remote locations of the world, or in places like the hazardous and often icy Arctic Ocean. When NOAA takes Earth's temperature, it uses only hard numbers from temperature measurement sites. In contrast, NASA acts to fill in the data gaps by using sophisticated computer technology – climate models. Computational science lets climatologists find a best guess: an estimation or interpolation of what temperatures exist in remote places.

Meanwhile, some studies have taken estimated temperatures and further refined them. For example, Cowtan & Way focused on improving interpolations of Arctic temperatures. Their analysis showed that 2013 was warmer than 1998, despite 2013 coinciding with a neutral ENSO while 1998 got a significant boost from a record El Niño.

These different techniques for taking Earth's average temperature result in different data sets. Comparing NOAA, NASA, and Cowtan & Way will show years ranked slightly differently; however, their findings agree fairly closely, and if you look past rank 5 you'll soon see the other record warm years fall in line. The data sets face the unenviable challenge of differentiating between tiny fractions of degrees (overall, the global temperature has risen 1.4°F [0.8°C] since 1880).

- NOAA NCDC: 1) 2010, 2) 2005, 3) 1998, 4) 2013, 5) 2003
- NASA GISS: 1) 2010, 2) 2005, 3) 2007, 4) 2002, 5) 1998
- Cowtan & Way: 1) 2010, 2) 2005, 3) 2007, 4) 2009, 5) 2013

Physicist and oceanographer Stefan Rahmstorf explains well on RealClimate: "The truly global average is important, since only it is directly related to the energy balance of our planet and thus the radiative forcing by greenhouse gases.
An average over just part of the globe is not.” After all, ”The Arctic has been warming disproportionately in the last ten to fifteen years.” You can’t separate the Arctic from its hemisphere This year residents of Alaska (the United States’ Arctic) experienced above-normal January temperatures while their more southerly countrymen toward the East were treated to cold and snowstorms. Justin Gillis for the New York Times recorded: “The extremes in January were directly related, experts said, with the two regions falling on opposite sides of a big loop in the jet stream, a belt of high winds in the upper atmosphere that helps to regulate the climate. A dip of the jet stream into the Eastern United States allowed cold air to descend from the Arctic, while a corresponding ridge in the West allowed warm air to hover over California and to penetrate normally frigid regions to the north.” Looking back, though, the spring of 2013 saw a very cold spring in Alaska while much of the U.S. enjoyed balmy weather. The National Weather Service’s Rick Thoman is used to such discrepancies. “There is nothing about climate change or climate variability that says that the weather will become less variable.” He added: “In fact there is some evidence, as we move into a warmer climate that – for instance – some places will cool as high latitudes,” like the Arctic, warm. “There’s increasing evidence that places at mid-latitudes … for instance Northeast United States, may experience more frequent snow storms and colder weather.” Why? Cold air that normally remained over the Arctic can get displaced South. Thoman reminded us that “Climate change and climate variability are very complex and you can have sometimes what seem to be unintuitive results,” including “More snow storms at mid-latitudes.” How can such little numbers be tied to significant global change? NOAA reported the 2013 global average land surface temperature was 1.78°F [0.99°C] above the 20th century (1901 – 2000) average of 47.3°F [8.5°C]. NASA’s Goddard Institute for Space Studies (GISS) determined that the average global temperature has risen about 1.4 °F [0.8 °C] since 1880. Two-thirds of that warming has occurred since 1975. The numbers don’t seem like much, and yet they matter. NASA’s Goddard Institute for Space Studies: The global temperature mainly depends on how much energy the planet receives from the Sun and how much it radiates back into space—quantities that change very little. The amount of energy radiated by the Earth depends significantly on the chemical composition of the atmosphere, particularly the amount of heat-trapping greenhouse gases. A one-degree global change is significant because it takes a vast amount of heat to warm all the oceans, atmosphere, and land by that much. In the past, a one- to two-degree drop was all it took to plunge the Earth into the Little Ice Age. A five-degree drop was enough to bury a large part of North America under a towering mass of ice 20,000 years ago. Are you stuck in the cold shoveling, America? Or perhaps it’s hotter than you’d like? “The climate is changing, the climate is always changing,” Rick Thoman said. 
“On the global scale we know for certain that it is warmer than it was but on the weather scale the day in and out variability will remain with us forever.” Frontier Scientists: presenting scientific discovery in the Arctic and beyond - ‘Freezing January for Easterners Was Not Felt Round the World’ Justin Gillis, New York Times (Feb.2014) - ‘GISS Surface Temperature Analysis (GISTEMP)’ GISTEMP analysis website, NASA Goddard Space Flight Center (accessed Feb.2014) - ‘Global temperature 2013’ Stefan Rahmstorf, RealClimate (Jan.2014) - ‘Global Temperatures’ NASA Earth Observatory : World of Change (accessed Feb.2014) - ‘Global Temperatures Analysis – Annual 2013’ NOAA National Climatic Data Center, State of the Climate: Global Analysis for Annual 2013 (Dec.2013) - ‘Going with the wind’ Matthew England, guest post on RealClimate (Feb.2014) - ‘NASA Finds 2013 Sustained Long-Term Climate Warming Trend’ NASA Headquarters Press Release (Jan.2014) - ‘What if 2013 had been an El Niño year?’ Graham Readfearn, PlanetOz, hosted by The Guardian (Jan.2014) - ‘What is an El Niño?’ NOAA (accessed Feb.2014)
My kindergarteners have been looking closely at the artwork of Wassily Kandinsky as we learn about geometry in math. We are using his art to explore shapes, lines, color and important vocabulary for positional words that are part of our state standards. I’m using Kandinsky’s art just like I use mentor texts throughout my literacy workshop. It’s been very exciting to see how the children are learning math while “standing on the shoulders” (as Katie Wood Ray says) of this artist. His fascinating abstract art paintings engage my students and allow us to surround our math instruction with rich talk about a variety of geometric terms, as well as art terms. For example, mathematicians use the term “rhombus”, but artists use the term “diamond” to speak about the same shape. We are creating an ongoing shared writing text with what we are noticing. This writing that came from their talk looked like this: I see a red square next to a blue curved line. I see a big yellow curved line overlapping a black circle. I see 5 small circles under a pink rhombus. Last week we used the program Pixie in the computer lab to create our own Kandinsky inspired works of art. The students used a variety of shapes, colors and lines to create their own work of art. They talked about how they were choosing the placement of their shapes and carefully planned out their work. This week we are creating our very own “Kindergarten Kandinsky” wall mural as we use his work as our mentor text to create a piece of art showing our knowledge of shapes, colors and lines. Stay tuned for an upcoming post about this! I value the importance of visual texts, such as our Kandinsky pieces, as another form of literacy. Teaching children to read art, to create art from using artists as mentors, and to talk about art is a key piece of my literacy instruction. How do you use visual art in your teaching?
Biofuel is any fuel that is derived from biomass — recently living organisms or their metabolic byproducts, such as manure from cows. It is a renewable energy source, unlike other natural resources such as petroleum, coal and nuclear fuels. One definition of biofuel is any fuel with an 80% minimum content by volume of materials derived from living organisms harvested within the ten years preceding its manufacture.

Like coal and petroleum, biomass is a form of stored solar energy. The energy of the sun is "captured" through the process of photosynthesis in growing plants. (See also: Systems ecology) One advantage of biofuel in comparison to most other fuel types is that it is biodegradable, and thus relatively harmless to the environment if spilled.

25 Years: The Downfall of Petrochemical Fuels

Intro. Petroleum, a fossil fuel, is in fact not an inexhaustible resource. Current life depends on petroleum, yet we use it faster than it can be made. To sustain current life, the use of an alternative fuel in place of petroleum is critical. The alternative fuels are much healthier for nature and the community, and corn fuel and cooking oils are also inexhaustible.

Why Alternate? The alternative fuels have advantages that are a deciding factor for many in switching to them. Petrochemicals cause smog problems in cities and the suburbs that surround them, causing health issues for children and adults alike. Alternative fuel sources, such as the corn-based fuel gasohol, are much healthier. Also, unlike petrochemicals, alternative fuel sources are unlimited. The use of alternative fuels such as corn and soybean fuel would boost the economy in the U.S. and decrease the amount of money we send to the Middle East for oil. Also, it "Costs MUCH less than . . . diesel." ("Quick Breakdown: Vegetable" par. 11) Alternative fuel would also reduce the smog problems in places like California, New York City, and Chicago.

What's Available? There are already alternative fuels on the market, though they can be hard to locate. Vegetable oil is one good source of fuel; there is even a bus, which uses vegetable oil for fuel, that travels the country promoting the use of alternative fuels. Vegetable oil is "Usually gathered from cooperating restaurants . . . " ("Quick Breakdown: Vegetable" par. 12) Another fuel source is gasohol, which can be made from soybeans and corn. Cooking oil in general will also work as fuel.

The Impact. In 1970 the Clean Air Act was enacted for the protection of the U.S. environment and society. "Vegetable oil and Biodiesel are virtually sulfur free . . . " ("Quick Breakdown: Vegetable" par. 17) Switching to alternative fuels would support this act. The U.S. would also stop having to send money to the Middle East and could start paying off its 8.66 trillion dollar debt. This would affect petroleum-based fuel companies, which would have to adapt for the betterment of the country, and it would hurt the Middle East's economy. At the same time, the use of corn and soybean fuels would lift the U.S. economy, specifically the Midwest's, upward. It would also encourage farmers to work and increase their profits.

Agricultural products specifically grown for use as biofuels include corn and soybeans, primarily in the United States; as well as flaxseed and rapeseed, primarily in Europe; sugar cane in Brazil and palm oil in South-East Asia.
Biodegradable outputs from industry, agriculture, forestry, and households can also be used to produce bioenergy; examples include straw, timber, manure, rice husks, sewage, biodegradable waste and food leftovers. These feedstocks are converted into biogas through anaerobic digestion. Biomass used as fuel often consists of underutilized types, like chaff and animal waste. Much research is currently in progress into the utilization of microalgae as an energy source, with applications being developed for biodiesel, ethanol, methanol, methane, and even hydrogen. The use of hemp is also on the rise, although politics currently restrains this technology.

Biofuel can be used for both centralized and decentralized production of electricity and heat. As of 2005, bioenergy covers approximately 15% of the world's energy consumption. Most bioenergy is consumed in developing countries and is used for direct heating, as opposed to electricity production. The production of biofuels to replace oil and natural gas is in active development, focusing on the use of cheap organic matter (usually cellulose, agricultural and sewage waste) in the efficient production of liquid and gas biofuels which yield a high net energy gain. The carbon in biofuels was recently extracted from atmospheric carbon dioxide by growing plants, so burning it does not result in a net increase of carbon dioxide in the Earth's atmosphere. As a result, biofuels are seen by many as a way to reduce the amount of carbon dioxide released into the atmosphere by using them to replace non-renewable sources of energy. Notably, the quality of timber or grassy biomass does not have a direct impact on its value as an energy source.

Dried compressed peat is also sometimes considered a biofuel. However, it does not meet the criteria of being a renewable form of energy, or of the carbon being recently absorbed from atmospheric carbon dioxide by growing plants. Though more recent than petroleum or coal on the time scale of human industrialisation, peat is a fossil fuel and burning it does contribute to atmospheric CO2.

Biofuel has been used since the early days of the car industry. Nikolaus August Otto, the German inventor of the combustion engine, conceived his invention to run on ethanol, while Rudolf Diesel, the German inventor of the diesel engine, conceived his to run on peanut oil. The Ford Model T, a car produced between 1908 and 1927, could run on ethanol. However, when crude oil began being cheaply extracted from deeper in the ground (thanks to drilling starting in the middle of the 19th century), cars began using fuels derived from oil.

Nevertheless, before World War II, biofuels were seen as providing an alternative to imported oil in countries such as Germany, which sold a blend of gasoline with alcohol fermented from potatoes under the name Reichskraftsprit. In Britain, grain alcohol was blended with petrol by the Distillers Company Ltd under the name Discol and marketed through Esso's affiliate Cleveland. After the war, cheap Middle Eastern oil lessened interest in biofuels. The oil shocks of 1973 and 1979 renewed interest among governments and academics, but interest decreased again with the counter-shock of 1986 that made oil cheaper. Since about 2000, rising oil prices, concerns over a potential oil peak, greenhouse gas emissions (global warming), and instability in the Middle East have been pushing renewed interest in biofuels.
Government officials have made statements and given aid in favour of biofuels. For example, U.S. president George Bush said in his 2006 State of Union speech, that he wants for the United States, by 2025, to replace 75% of the oil coming from the Middle East. Types of high volume industrial biomass on Earth Certain types of biomass have attracted research and industrial attention. Many of these are considered to be potentially useful for energy or for the production of bio-based products. Most of these are available in very large quantities and have low market value. Examples of biofuels Biologically produced alcohols Biologically produced alcohols, most commonly ethanol and methanol, and less commonly propanol and butanol are produced by the action of microbes and enzymes through fermentation — see alcohol fuel. - Methanol, which is currently produced from natural gas, can also be produced from biomass — although this is not economically viable at present. The methanol economy is an interesting alternative to the hydrogen economy. - Biomass to liquid, synthetic fuels produced from syngas. Syngas in turn, is produced from biomass by gasification. - Ethanol fuel produced from sugar cane is being used as automotive fuel in Brazil. Ethanol produced from corn is being used mostly as a gasoline additive (oxygenator) in the United States, but direct use as fuel is growing. Cellulosic ethanol is being manufactured from straw (an agricultural waste product) by Iogen Corporation of Ontario, Canada; and other companies are attempting to do the same. ETBE containing 47% Ethanol is currently the biggest biofuel contributor in Europe. - Butanol is formed by A.B.E. fermentation (Acetone, Butanol, Ethanol) and experimental modifications of the ABE process show potentially high net energy gains with butanol being the only liquid product. Butanol can be burned "straight" in existing gasoline engines (without modification to the engine or car), produces more energy and is less corrosive and less water soluble than ethanol, and can be distributed via existing infrastructures. - Mixed Alcohols (e.g., mixture of ethanol, propanol, butanol, pentanol, hexanol and heptanol, such as EcaleneTM), obtained either by biomass-to-liquid technology (namely gasification to produce syngas followed by catalytic synthesis) or by bioconversion of biomass to mixed alcohol fuels. - GTL or BTL both produce synthetic fuels out of biomass in the so called Fischer Tropsch process. The synthetic biofuel containing oxygen is used as additive in high quality diesel and petrol. Biologically produced gases Biogas is produced by the process of anaerobic digestion of organic material by anaerobes. Biogas can be produced either from biodegradable waste materials or by the use of energy crops fed into anaerobic digesters to supplement gas yields. The solid output, digestate, can also be used as a biofuel. Biogas contains methane and can be recovered in industrial anaerobic digesters and mechanical biological treatment systems. Landfill gas is a less clean form of biogas which is produced in landfills through naturally occurring anaerobic digestion. Paradoxically if this gas is allowed to escape into the atmosphere it is a potent greenhouse gas. Biologically produced gases from wastes Biologically produced oils and gases can be produced from various wastes: - Thermal depolymerization of waste can extract methane and other oils similar to petroleum. - Pyrolysis oil may be produced out of biomass, wood waste etc. 
using heat only in the flash pyrolysis process. The oil has to be treated before using in conventional fuel systems or internal combustion engines (water + pH). - One company, GreenFuel Technologies Corporation, has developed a patented bioreactor system that utilizes nontoxic photosynthetic algae to take in smokestacks flue gases and produce biofuels such as biodiesel, biogas and a dry fuel comparable to coal . Biologically produced oils Biologically produced oils can be used in diesel engines: - Straight vegetable oil (SVO). - Waste vegetable oil (WVO) - waste cooking oils and greases produced in quantity mostly by commercial kitchens - Biodiesel obtained from transesterification of animal fats and vegetable oil, directly usable in petroleum diesel engines. Applications of biofuels One widespread use of biofuels is in home cooking and heating. Typical fuels for this are wood, charcoal or dried dung. The biofuel may be burned on an open fireplace or in a special stove. The efficiency of this process may vary widely, from 10% for a well made fire (even less if the fire is not made carefully) up to 40% for a custom designed charcoal stove1. Inefficient use of fuel may be a minor cause of deforestation (though this is negligible compared to deliberate destruction to clear land for agricultural use) but more importantly it means that more work has to be put into gathering fuel, thus the quality of cooking stoves has a direct influence on the viability of biofuels. "American homeowners are turning to burning corn in special stoves to reduce their energy bills. Sales of corn-burning stoves have tripled this year [...] Corn-generated heat costs less than a fifth of the current rate for propane and about a third of electrical heat" . Direct electricity generation The methane in biogas is often pure enough to pass directly through gas engines to generate green energy. Anaerobic digesters or biogas powerplants convert this renewable energy source into electricity. This can either be used commercially or on a local scale. Use on farms In Germany small scale use of biofuel is still a domain of agricultural farms. It is an official aim of the German government to use the entire potential of 200,000 farms for the production of biofuel and bioenergy. (Source: VDI-Bericht "Bioenergie - Energieträger der Zukunft". Different combustion-engines are being produced for very low prices lately . They allow the private house-owner to utilize low amounts of "weak" compression of methane to generate electrical and thermal power (almost) sufficient for a well insulated residential home. Problems and solutions Unfortunately, much cooking with biofuels is done indoors, without efficient ventilation, and using fuels such as dung causes airborne pollution. This can be a serious health hazard; 1.5 million deaths were attributed to this cause by the World Health Organisation as of 2000 2. There are various responses to this, such as improved stoves, including those with inbuilt flues and switching to alternative fuel sources. Most of these responses have difficulties. One is that fuels are expensive and easily damaged. Another is that alternative fuels tend to be more expensive, but the people who rely on biofuels often do so precisely because they cannot afford alternatives. 3 Organisations such as Intermediate Technology Development Group work to make improved facilities for biofuel use and better alternatives accessible to those who cannot currently get them. 
This work is done through improving ventilation, switching to different uses of biomass such as the creation of biogas from solid biomatter, or switching to other alternatives such as micro-hydro power. Many environmentalists are concerned that first growth forest may be felled in countries such as Indonesia to make way for palm oil plantations, driven by rising demand for diesel in SE Asia and Europe. Direct biofuels are biofuels that can be used in existing unmodified petroleum engines. Because engine technology changes all the time, exactly what a direct biofuel is can be hard to define; a fuel that works without problem in one unmodified engine may not work in another engine. In general, newer engines are more sensitive to fuel than older engines, but new engines are also likely to be designed with some amount of biofuel in mind. Straight vegetable oil can be used in some (older) diesel engines. Only in the warmest climates can it be used without engine modifications, so it is of limited use in colder climates. Most commonly it is turned into biodiesel. No engine manufacturer explicitly allows any use of vegetable oil in their engines. Biodiesel can be a direct biofuel. However, no current manufacturer covers their engine under warranty for 100% biodiesel (some have allowed 100% in the past, and it appears that changes in emission standards are the only reason they don't today, but no official statement exists). Many people have run thousands of miles on biodiesel without problem, and many studies have been made on 100% biodiesel. Butanol is often claimed as a direct replacement for gasoline. It is not in wide spread production at this time, and engine manufacturers have not made statements about its use. While on paper (and a few lab tests) it appears that butanol has sufficiently similar characteristics with gasoline such that it should work without problem in any gasoline engine, no widespread experience exists. Ethanol is the most common biofuel, and over the years many engines have been designed to run on it. Many of these could not run on regular gasoline. It is open to debate if ethanol is a direct replacement in these engines though - they cannot run on anything else. In the late 1990's engines started appearing that by design can use either fuel. Ethanol is a direct replacement in these engines, but it is debatable if these engines are unmodified, or factory modified for ethanol. Small amounts of biofuel are often blended with traditional fuels. The biofuel portion of these fuels is a direct replacement for the fuel they offset, but the total offset is small. For biodiesel, 5% or 20% are commonly approved by various engine manufacturers. See Common ethanol fuel mixtures for information on ethanol. On the other hand, recognizing the importance of bioenergy and its implementation, there are international organizations such as IEA Bioenergy, established in 1978 by the International Energy Agency (IEA), with the aim of improving cooperation and information exchange between countries that have national programs in bioenergy research, development and deployment. European Union has set a goal for 2008 that each member state should achieve at least 5.25% biofuel usage of all used traffic fuel. By 2006 it looks like most of the members states will not meet this goal.
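To see why the stove-efficiency figures quoted earlier (roughly 10% for an open fire versus up to 40% for a well-designed charcoal stove) matter so much for the viability of biofuels, here is a small illustrative calculation. The energy numbers are assumptions chosen for illustration, not values from this article, and both stoves are treated as burning the same generic fuel:

# Fuel needed to deliver the same useful cooking energy at different
# stove efficiencies. All numbers are illustrative assumptions.
USEFUL_ENERGY_MJ = 10.0        # assumed energy needed for a day's cooking
FUEL_ENERGY_MJ_PER_KG = 16.0   # assumed energy content of air-dried biomass

for stove, efficiency in [("open fire", 0.10), ("improved charcoal stove", 0.40)]:
    fuel_kg = USEFUL_ENERGY_MJ / (efficiency * FUEL_ENERGY_MJ_PER_KG)
    print(f"{stove:<24} ~{fuel_kg:.1f} kg of fuel")

At these assumed numbers the open fire needs roughly four times as much fuel for the same cooking, which is the practical link between stove quality and how far a household's biofuel supply stretches.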
Susan Glaspell's short story "A Jury of Her Peers" is widely taught alongside her play "Trifles," which tells the same story in dramatic form, and is sometimes paired with Kate Chopin's "The Story of an Hour." The premise: a man has been murdered and his wife appears to be to blame; two female characters quietly solve the mystery that the male characters overlook, which is why the story is often treated as a landmark of early feminist literature. Study resources for the story typically offer plot summaries, character analyses (especially of Minnie Wright, and of John Wright in the play), and discussions of setting, structure, themes, motifs, literary devices, foreshadowing, imagery, and historical context, as well as gender-focused readings. Detailed character descriptions and expert analysis are available from sites such as SparkNotes and Owl Eyes.
Are you ready to join rocket scientists and engineers to design the next rocket mission? This summer, we'll be lifting off to outer space! You'll have the opportunity to build and launch your own rocket while learning about all the systems and components of designing a rocket launch. Working in a highly diverse team, students will be tasked with the design of the control systems for the mission, the design and engineering of the rocket, and overseeing the subsystems and instruments of the rocket. Let's get ready to blast off!

In this Studio, students will design a comprehensive rocket mission that will take their rockets high into the sky above. Each student will assume the role of a designer-engineer-rocket scientist focused on the R&D, design, and engineering of all of the subsystems of the mission, including navigation, control, power, propulsion, structures, payloads, ground segment and launch vehicle. Together, students will experience the iterative and exploratory process of engineering a complex rocket mission. We will start with basic lessons on the math and physics that govern the design and flight of rockets, and then focus on aerodynamics (the forces that are generated and act on a rocket as it flies through the air) and thermodynamics (the energy and work of the system). We will end with a mission that is out of this world!

- Physics (Electricity, Magnetism)
- Sensors & Actuators
- Digital Fabrication (Laser-cutting, 3D Printing)
- Enrolling students must be between the ages of 11 and 18 (middle and high school students)
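As a taste of the "basic lessons on the math and physics that govern the design and flight of rockets" mentioned above, here is the ideal (Tsiolkovsky) rocket equation in a short sketch. The motor and mass numbers are made-up example values, not specifications for the Studio's rockets:

import math

# Ideal rocket equation: delta_v = v_e * ln(m0 / mf)
# Example values below are illustrative assumptions only.
exhaust_velocity_m_s = 900.0   # assumed effective exhaust velocity of a small motor
wet_mass_kg = 1.2              # rocket plus propellant at ignition
dry_mass_kg = 1.0              # rocket after the propellant burns out

delta_v = exhaust_velocity_m_s * math.log(wet_mass_kg / dry_mass_kg)
print(f"ideal delta-v ≈ {delta_v:.0f} m/s (drag and gravity losses ignored)")

Real flights reach lower speeds than this ideal number because drag and gravity act during the burn, which is exactly where the aerodynamics lessons mentioned above come in.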
Psychology A Level What will I learn? Paper one – Introductory Topics This approach will include types of conformity, explanations for obedience, explanations of resistance to social influence and the role of social influence on social change. Learners will study research by famous social psychologists including Asch, Milgram and Zimbardo and will gain a detailed understanding of how behaviour can be influenced by key social factors. This is part of Cognitive psychology. Cognitive psychologists are concerned with the internal processes of the mind of which memory is one. Learners will assess different models and explanations for memory. They will have the opportunity to take part in numerous memory experiments and tests in order to develop understanding of types and functions of memory. This will include explanations for forgetting and factors affecting the accuracy of eye witness testimony. Developmental psychology focuses on the behaviour of infants from birth through to adulthood. This includes care giver and infant interactions, and explanations of attachment including learning theory and critical period for attachments to be formed. Learners will have the opportunity to study key case studies of deprivation and the effects of institutionalisation. Paper two – Psychology in Context Key Approaches and explanations of psychology will be explored. These include Learning, Cognitive and Biological. Topics include the division of the nervous system. The structure and function of sensory, relay and motor neurons and the process of synaptic transmission. Learners will also study the function of the endocrine system and the fight or flight response. This includes definitions of abnormality. Mental Health issues including phobias, depression and OCD, explained from different psychological perspectives. Students will learn scientific process and techniques of data handling and analysis. They will be able to demonstrate knowledge and understanding of experimental methods, observational techniques, self- report techniques and correlations. In year 2 Learners will have the opportunity to develop knowledge and understanding of Issues and Options in psychology. These include: - Issues and debates, for example the nature/nurture debate - The role of gender, including gender development - Schizophrenia, including classification, diagnosis and treatment - Forensic psychology including defining crime, explaining criminal behaviour and dealing with offending behaviour GCSE Grade 5 or above in Maths and English, and grade 5 or above in a Science preferred. A GCSE points average of 5.0. How long will it take to qualify? How will I be assessed? Assessment will be through 3 unseen written exam papers sat at the end of the course. Additional Learning Opportunities There are opportunities to attend lectures and workshops at Universities. Useful activities include the Psychology Society in which you can explore the subject outside your AS/A Level studies or related subjects such as Counselling, School Experience, Sign Language or Preparing for Medicine. Who is this course aimed at? 
Psychology is suited to students who: - like science and are open-minded and interested in the world around them - have an analytic mind and want to understand the reasons for behaviour using a scientific approach - aim to work in practically any career where other people are involved - are prepared to look beneath the surface of any claim/debate - are willing to read ahead in preparation for class activities and work to weekly deadlines Many students aspire for careers in teaching, social work, counselling, the police force, personnel or medicine but this qualification teaches you research, analytical, communication and logical thinking skills which are transferable to a wide variety of courses and jobs. - A Level - 2 Years
The orientation of an elliptical orbit can be specified by three orbital elements: the inclination, the ascending node and the argument of perihelion. Suppose we have an elliptical orbit with eccentricity e and semi-major axis a. The perihelion is the point of closest approach between the orbiting body (e.g. a planet) and the focus (the Sun lies at one focus of a planetary orbit; the other focus is empty).

[Figure. Left: an elliptical orbit with semi-major axis (a) and semi-minor axis (b). Right top: a view of the orbit looking down the z-axis; the orbit has been rotated by an angle ω (the argument of perihelion) about the z-axis. Right bottom: the same orbit viewed along the y-axis; the rotation by an angle ω has kept the orbit in the x-y plane.]

The argument of perihelion is also defined as the angle, measured in the orbital plane, from the ascending node (Ω) to the perihelion of the orbit. A related quantity is the longitude of perihelion, ϖ, although the distinction between these two quantities is often blurred. The longitude of perihelion is defined as:

ϖ = ω + Ω

For elliptical orbits around other celestial bodies, the argument of perihelion would be replaced by the argument of periastron (orbits around stars), argument of perigee (orbits around the Earth) or argument of periapsis (orbits around anything else!).
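A minimal sketch of how these elements combine in practice; the numbers are roughly Earth-like example values used purely for illustration:

# Longitude of perihelion from the argument of perihelion (omega) and the
# longitude of the ascending node (Omega), both in degrees.
def longitude_of_perihelion(omega_deg: float, Omega_deg: float) -> float:
    return (omega_deg + Omega_deg) % 360.0

omega_deg = 114.2    # argument of perihelion (roughly Earth-like example)
Omega_deg = 348.7    # longitude of ascending node (roughly Earth-like example)
a_au, e = 1.0, 0.0167

print("longitude of perihelion ≈", round(longitude_of_perihelion(omega_deg, Omega_deg), 1), "deg")
print("perihelion distance q = a(1 - e) ≈", round(a_au * (1 - e), 4), "AU")

The same arithmetic applies unchanged if you relabel the angle as the argument of periastron, perigee or periapsis.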
Expressive Language Disorder

A person with an expressive language disorder (as opposed to a mixed receptive/expressive language disorder) understands language better than he/she is able to communicate. In speech-language therapy terms, the person's receptive language (understanding of language) is better than his/her expressive language (use of language). This type of language disorder is often a component in developmental language delay (see section on this disorder). Expressive language disorders can also be acquired (occurring as a result of brain damage/injury), as in aphasia (see section on aphasia). The developmental type is more common in children, whereas the acquired type is more common in the elderly. An expressive language disorder could occur in a child of normal intelligence, or it could be a component of a condition affecting mental functioning more broadly (e.g. mental retardation, autism).

Children with expressive language delays often do not talk much or often, although they generally understand language addressed to them. For example, a 2-year-old may be able to follow 2-step commands, but he/she cannot name body parts. A 4-year-old may understand stories read to him/her, but he/she may not be able to describe the story even as a simple narrative. Imaginative play and social uses of language (e.g. manners, conversation) may also be impaired by expressive language limitations, causing difficulty in playing with peers. These are children who may have a lot to say, but are unable to retrieve the words they need. Some children may have no problem with simple expression, but have difficulties retrieving and organizing words and sentences when expressing more complicated thoughts and ideas. This may occur when they are trying to describe, define, or explain information or retell an event or activity.

In school-aged children, expressive language difficulties may be evident in writing as well. These children may have difficulties with spelling, using words correctly, composing sentences, performing written composition, etc. They may express frustration because they recognize that they cannot express the idea they wish to communicate. These children may become withdrawn socially because they cannot use language to relate to peers. As mentioned in the section on developmental language disorders, these children may act out in school or, in later school years, reject learning completely if they do not get help. Also, as mentioned in that section, expressive disorders do not disappear with time.

A speech-language pathologist can best diagnose an expressive language disorder. Parents and classroom teachers are in key positions to help in the evaluation as well as the planning and implementation of treatment. Other professionals involved in assessment and treatment, especially as related to academics, include educational therapists, resource specialists, and tutors.
The form of an argument is what results from replacing the different words, or sentences, that make up the argument with letters; the letters are called variables. Some examples of valid argument forms are modus ponens, modus tollens, and disjunctive syllogism. One invalid argument form is affirming the consequent. Just as variables can stand for various numbers in mathematics, variables can stand for various words, or sentences, in logic.

Argument forms are very important in the study of logic. The parts of argument forms, namely sentence forms (see below), are equally important. In a logic course one would learn how to determine what the forms of various sentences and arguments are.

The basic notion of argument form can be introduced with an example. Here is an example of an argument (call it A):

All humans are mortal. Socrates is human. Therefore, Socrates is mortal.

We can rewrite argument A by putting each sentence on its own line (call this B):
- All humans are mortal.
- Socrates is human.
- Therefore, Socrates is mortal.

To demonstrate the important notion of the form of an argument, substitute letters for similar items throughout B to obtain C:
- All S is P.
- a is S.
- Therefore, a is P.

All we have done in C is to put 'S' for 'human' and 'humans', 'P' for 'mortal', and 'a' for 'Socrates'; what results, C, is the form of the original argument in A. So argument form C is the form of argument A. Moreover, each individual sentence of C is the sentence form of its respective sentence in A.

There is a good reason why attention to argument and sentence forms is important. The reason is this: form is what makes an argument valid or cogent.
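For propositional argument forms such as modus ponens and affirming the consequent (though not for quantified syllogisms like argument A), validity can be checked mechanically by enumerating truth assignments: a form is valid exactly when no assignment makes all the premises true and the conclusion false. The Python sketch below is an added illustration, not part of the original text; the helper is_valid and the lambda encodings are made up for this example and only handle two-variable forms.

from itertools import product

def is_valid(premises, conclusion):
    """Return True if every truth assignment that makes all the premises
    true also makes the conclusion true."""
    for p, q in product([True, False], repeat=2):
        if all(premise(p, q) for premise in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: if P then Q; P; therefore Q  -> valid
print(is_valid([lambda p, q: (not p) or q, lambda p, q: p],
               lambda p, q: q))          # True

# Affirming the consequent: if P then Q; Q; therefore P  -> invalid
print(is_valid([lambda p, q: (not p) or q, lambda p, q: q],
               lambda p, q: p))          # False

Affirming the consequent fails because the assignment "P false, Q true" makes both premises true while the conclusion is false, which mirrors the point above: form alone is what determines validity.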
Cholera – a waterborne disease – is closely linked to poor environmental conditions. The absence or shortage of safe water and proper sanitation, together with poor waste management, are the main drivers of the disease's spread. These conditions, all conducive to epidemics, coincide in many places in the developing world, and even more acutely in overcrowded settings, where cholera is either endemic or a recurrent problem. Typical at-risk areas are peri-urban slums with precarious basic infrastructure, as well as camps for internally displaced people or refugees, where minimum requirements for clean water and sanitation are not met. However, inhabitants of rural areas, particularly along rivers and lake shores, are not spared. The populations most affected are those living in insalubrious conditions, where environmental safety is not ensured.
Colonial Revival Movement

The Colonial Revival movement was a national expression of early North American culture, primarily the built and artistic environments of the east coast colonies. The Colonial Revival is generally associated with the eighteenth-century provincial fashion for the Georgian and Neoclassical styles. The movement inspired a variety of expressions to fulfill symbolic and functional needs during times of great change. The Colonial Revival was motivated by a range of historical events, particularly a rapidly growing industrial way of life and increasing immigration. Beyond its association with the development of a national historic consciousness that began in the 1870s, the Colonial Revival style in architecture, decorative arts, landscape and garden design, and American art has served to promote notions of democracy, patriotism, good taste, and moral superiority. Although it remains popular, particularly in architecture and the decorative arts, the movement reached its peak between 1880 and 1940.
What is Python zlib

The Python zlib library provides a Python interface to the zlib C library, which is a higher-level abstraction for the DEFLATE lossless compression algorithm. The data format used by the library is specified in RFCs 1950 to 1952, available at http://www.ietf.org/rfc/rfc1950.txt. The zlib compression format is free to use and is not covered by any patents, so you can safely use it in commercial products as well. It is a lossless compression format (which means you don't lose any data between compression and decompression), and it is portable across different platforms. Another important benefit of this compression mechanism is that it adds at most a small, bounded overhead even to data that cannot be compressed further.

The main use of the zlib library is in applications that require compression and decompression of arbitrary data, whether it be a string, structured in-memory content, or files.

The most important functionalities included in this library are compression and decompression. Both can be done as one-off operations, or by splitting the data into chunks, as you would with a stream of data. Both modes of operation are explained in this article.

One of the best things, in my opinion, about the zlib library is that it is compatible with the gzip file format/tool (which is also based on DEFLATE), one of the most widely used compression applications on Unix systems.

Compressing a String of Data

The zlib library provides us with the compress function, which can be used to compress a string of data. The syntax of this function is very simple, taking only two arguments:

compress(data, level=-1)

Here the argument data contains the bytes to be compressed, and level is an integer that can take the value -1 or a value from 0 to 9. This parameter determines the level of compression, where level 1 is the fastest and yields the lowest level of compression. Level 9 is the slowest, yet it yields the highest level of compression. The value -1 represents the default, which is level 6, a balance between speed and compression. Level 0 yields no compression.

An example of using the compress method on a simple string is shown below:

import zlib
import binascii

data = b'Hello world'
compressed_data = zlib.compress(data, 2)

print('Original data: ' + data.decode())
print('Compressed data: ' + binascii.hexlify(compressed_data).decode())

And the result is as follows:

$ python compress_str.py
Original data: Hello world
Compressed data: 785ef348cdc9c95728cf2fca49010018ab043d

If we change the level to 0 (no compression), then line 5 becomes:

compressed_data = zlib.compress(data, 0)

And the new result is:

$ python compress_str.py
Original data: Hello world
Compressed data: 7801010b00f4ff48656c6c6f20776f726c6418ab043d

You may notice a few differences between the two outputs. Using a level of 2 we get a string (formatted in hexadecimal) of length 38, whereas with a level of 0 we get a hex string of length 44. This difference in length is due to the lack of compression at level 0.

If you don't format the string as hexadecimal, as I've done in this example, and view the output data, you'll probably notice that the input string is still readable even after being "compressed", although it has a few extra formatting characters around it.

Compressing Large Data Streams

Large data streams can be managed with the compressobj() function, which returns a compression object.
The syntax is as follows:

compressobj(level=-1, method=DEFLATED, wbits=15, memLevel=8, strategy=Z_DEFAULT_STRATEGY[, zdict])

The main difference between the arguments of this function and those of the compress() function is (aside from the data parameter) the wbits argument, which controls the window size and whether or not the header and trailer are included in the output. The possible values for wbits are:

Value | Window size logarithm | Output
+9 to +15 | Base 2 | Includes zlib header and trailer
-9 to -15 | Absolute value of wbits | No header and trailer
+25 to +31 | Low 4 bits of the value | Includes gzip header and trailing checksum

The method argument represents the compression algorithm used. Currently the only possible value is DEFLATED, which is the only method defined in RFC 1950. The strategy argument relates to compression tuning. Unless you really know what you're doing, I'd recommend not setting it and just using the default value.

The following code shows how to use the compressobj() function:

import zlib
import binascii

data = b'Hello world'

compress = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, -15)
compressed_data = compress.compress(data)
compressed_data += compress.flush()

print('Original: ' + data.decode())
print('Compressed data: ' + binascii.hexlify(compressed_data).decode())

After running this code, the result is:

$ python compress_obj.py
Original: Hello world
Compressed data: f348cdc9c95728cf2fca490100

As we can see from the output above, the phrase "Hello world" has been compressed. Typically this method is used for compressing data streams that won't fit into memory at once. Although this example does not have a very large stream of data, it serves the purpose of showing the mechanics of the compressobj() function.

You may also be able to see how it would be useful in a larger application in which you configure the compression once and then pass the compression object around to other methods/modules to compress chunks of data in series. In particular, when you have a data stream to compress, you don't have to accumulate all of the data in memory: you can call compress.compress() on each chunk as it arrives (and compress.flush() once at the end), then move on to the next chunk while leaving the previous one to be cleaned up by garbage collection.

Compressing a File

We can also use the compress() function to compress the data in a file. The syntax is the same as in the first example. In the example below we will compress a PNG image file named "logo.png" (which, I should note, is already a compressed version of the original raw image). The example code is as follows:

import zlib

original_data = open('logo.png', 'rb').read()
compressed_data = zlib.compress(original_data, zlib.Z_BEST_COMPRESSION)

compress_ratio = (float(len(original_data)) - float(len(compressed_data))) / float(len(original_data))

print('Compressed: %d%%' % (100.0 * compress_ratio))

In the above code, the zlib.compress(...) line uses the constant Z_BEST_COMPRESSION, which, as the name suggests, gives us the best compression level this algorithm has to offer. The next line then calculates the fraction by which the compressed data is smaller than the original. The result is as follows:

$ python compress_file.py
Compressed: 13%

As we can see, the file was compressed by 13%.

The only difference between this example and our first one is the source of the data.
However, I think it is important to show it so you can get an idea of what kind of data can be compressed, whether it be just an ASCII string or binary image data. Simply read in your data from the file as you normally would and call the compress() function on it.

Saving Compressed Data to a File

The compressed data can also be saved to a file for later use. The example below shows how to save some compressed text into a file:

import zlib

my_data = b'Hello world'
compressed_data = zlib.compress(my_data, 2)

f = open('outfile.txt', 'wb')
f.write(compressed_data)
f.close()

The above example compresses our simple "Hello world" string and saves the compressed data into a file named "outfile.txt". Note that the file is opened in binary mode ('wb'), since the compressed data is not text; if you open "outfile.txt" in a text editor, it appears as a short run of unreadable binary characters.

Decompressing a String of Data

A compressed string of data can be easily decompressed by using the decompress() function. The syntax is as follows:

decompress(data, wbits=MAX_WBITS, bufsize=DEF_BUF_SIZE)

This function decompresses the bytes in the data argument. The wbits argument can be used to manage the size of the history buffer. The default value matches the largest window size and requests the inclusion of the zlib header and trailer. The possible values are:

Value | Window size logarithm | Input
+8 to +15 | Base 2 | Includes zlib header and trailer
-8 to -15 | Absolute value of wbits | Raw stream with no header and trailer
+24 to +31 = 16 + (8 to 15) | Low 4 bits of the value | Includes gzip header and trailer
+40 to +47 = 32 + (8 to 15) | Low 4 bits of the value | zlib or gzip format

The initial size of the output buffer is given by the bufsize argument. The important aspect of this parameter is that it doesn't need to be exact; if extra buffer space is needed, it will be increased automatically.

The following example shows how to decompress the string of data compressed in our previous example:

import zlib

data = b'Hello world'
compressed_data = zlib.compress(data, 2)
decompressed_data = zlib.decompress(compressed_data)

print('Decompressed data: ' + decompressed_data.decode())

The result is as follows:

$ python decompress_str.py
Decompressed data: Hello world

Decompressing Large Data Streams

Decompressing big data streams may require memory management due to the size or source of your data. It's possible that you may not be able to use all of the available memory for this task (or you may not have enough memory), so the decompressobj() function allows you to divide a stream of data into several chunks which you can decompress separately.

The syntax of the decompressobj() function is as follows:

decompressobj(wbits=MAX_WBITS[, zdict])

This function returns a decompression object, which is what you use to decompress the individual chunks of data. The wbits argument has the same characteristics as in the decompress() function explained previously.

The following code shows how to decompress a big stream of data that is stored in a file. First, the program creates a file named "compressed.dat", which contains the compressed data. Note that the data is compressed using a wbits value of +15, which ensures the creation of a header and a trailer in the data.

The file is then decompressed using chunks of data. Again, in this example the file doesn't contain a massive amount of data, but nevertheless, it serves the purpose of explaining the buffer concept.
The code is as follows:

import zlib

data = b'Hello world'

compress = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, +15)
compressed_data = compress.compress(data)
compressed_data += compress.flush()

print('Original: ' + data.decode())
# The compressed bytes are not printable text, so undecodable bytes are replaced
print('Compressed data: ' + compressed_data.decode('ascii', errors='replace'))

f = open('compressed.dat', 'wb')
f.write(compressed_data)
f.close()

CHUNKSIZE = 1024

data2 = zlib.decompressobj()
my_file = open('compressed.dat', 'rb')
buf = my_file.read(CHUNKSIZE)

# Decompress stream chunks, accumulating the output
decompressed_data = b''
while buf:
    decompressed_data += data2.decompress(buf)
    buf = my_file.read(CHUNKSIZE)
decompressed_data += data2.flush()

print('Decompressed data: ' + decompressed_data.decode())
my_file.close()

After running the above code, we obtain the following results:

$ python decompress_data.py
Original: Hello world
Compressed data: x??H???W(?/?I?=
Decompressed data: Hello world

Decompressing Data from a File

The compressed data contained in a file can be easily decompressed, as you've seen in previous examples. This example is very similar to the previous one in that we're decompressing data that originates from a file, except that in this case we're going back to using the one-off decompress() function, which decompresses the data in a single call. This is useful when your data is small enough to easily fit in memory. This can be seen in the following example:

import zlib

compressed_data = open('compressed.dat', 'rb').read()
decompressed_data = zlib.decompress(compressed_data)

print(decompressed_data.decode())

The above program opens the file "compressed.dat" created in a previous example, which contains the compressed "Hello world" string. Once the compressed data is retrieved and stored in the variable compressed_data, the program decompresses the stream and shows the result on the screen. As the file contains only a small amount of data, the example uses the decompress() function. However, as the previous example shows, we could also decompress the data with a decompressobj() decompression object.

After running the program we get the following result:

$ python decompress_file.py
Hello world

The Python zlib library provides us with a useful set of functions for data compression using the zlib format. The compress() and decompress() functions are normally used. However, when there are memory constraints, the compressobj() and decompressobj() functions are available to provide more flexibility by supporting compression and decompression of data streams. These functions help split the data into smaller, more manageable chunks, which can then be compressed or decompressed with the compress() and decompress() methods of those objects.

Keep in mind that the zlib library also has quite a few more features than what we were able to cover in this article. For example, you can use zlib to compute the checksum of some data to verify its integrity when decompressed. For more information on additional features like this, check out the official documentation.
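As one small illustration of the checksum features mentioned above (this sketch is an addition to the article, not part of the original), the zlib module exposes crc32() and adler32(), which can be used to verify that data survived a compression round trip intact:

import zlib

data = b'Hello world'

# Checksum of the original data
checksum_before = zlib.crc32(data)

# Compress, decompress, and verify the checksum still matches
restored = zlib.decompress(zlib.compress(data))
checksum_after = zlib.crc32(restored)

print(checksum_before == checksum_after)  # True if the data is intact

# adler32() works the same way; it is faster but a slightly weaker check
print(zlib.adler32(data) == zlib.adler32(restored))

Incidentally, the Adler-32 value of b'Hello world' is 0x18ab043d, which is exactly the four-byte trailer visible at the end of the zlib-format hex output shown earlier in this article.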
Green Teacher 78, Spring 2006

A Permaculture School Garden by Patrick Praetorius
Applying the principles of a design methodology called permaculture to school gardens and other projects helps to reinforce the values of resourcefulness, stewardship and sustainability. This article describes how Oak Grove School, in Ojai, California, applied the principles of permaculture to the creation of the school's gardens, pond, straw bale greenhouse and outdoor seating area.

Ethics in Action: Adopting an Environmental Practice by John P. Engel and Daniel Sturgis
This environmental ethics assignment, adaptable to all grade levels, challenges students to align their behavior with environmental ethics, and in the process learn that it is much easier to change an old habit or adopt a new one than they thought.

Leaving 'Leave No Trace' Behind: Towards a Holistic Land Use Ethic by David Moskowitz and Darcy Ottey
The authors question the notion that it is possible to live in the natural world without leaving a trace. They suggest we adopt Conscious Impact Living, a more holistic land use ethic that supports a stronger sense of connection with the natural world by recognizing that because humans live within nature, they will always have an impact on it.

Designing a Sustainable Industrial Park by Robert A. Sweeney and Phyllis A. Sweeney
This step-by-step exercise asks students to plan a "green" industrial park that generates its own energy, conserves and reuses materials, and minimizes waste. The exercise is adaptable for use with students from elementary grades through high school.

Tank Tips: A Freshwater Aquarium in the Classroom by Rebecca Holcombe
A freshwater aquarium in the classroom offers both aesthetic appeal and the opportunity to integrate learning across many disciplines. The article offers tips for selecting aquarium equipment and suitable freshwater tropical fish, and suggests ways to link an aquarium project to the curriculum in various subject areas.

Reading the Landscape by Janice Schnake Greene
The author shares teaching activities suitable for all age groups that help students become more aware of the natural world while still achieving required curriculum goals. Based on the writings of conservationist Aldo Leopold, the activities encourage students to learn the "stories of the land" through observation and inquiry.

Habitat House Hunt by Kristin Mack-Hammer and Janice Denney
In this activity for Grades 5-8, students play real estate agents who must find suitable homes for their urban animal clients. It is a fun way for students to learn about the importance of preserving wildlife habitats.

Field Trips: The Good, Bad and the Ugly by Lisa Woolf
One of the most common mistakes a classroom teacher makes when teaching outdoors is forgetting to adapt the teaching techniques and structures used successfully in the classroom. This article recounts the story of a disastrous field trip, the lessons learned, and the steps taken to turn things around. It includes suggestions for making field trips meaningful learning experiences for students.

Glorious Weeds! by Jack Greene
The author shares ideas and activities designed to help students appreciate the value of common plants that are often dismissed as mere "weeds".

And as always, over 20 new educational resources are profiled and evaluated in this issue of Green Teacher.
Keeping students engaged is the first step to learning. One thing we know for certain: a teacher standing at the head of the room, talking in front of a whiteboard to a class of students, is not going to keep students engaged in the classroom.

#1: Always use a multisensory teaching approach.

Children respond differently to the three main learning styles: visual, auditory, and hands-on (touch and feel). Keeping this in mind, a teacher needs to design a multisensory lesson plan for keeping students engaged in the classroom, with all learners benefiting from the three learning styles regardless of individual preference. An ideal lesson will encompass all three ways of presenting information, with a short exposition as an introduction, a little question-and-answer time, the use of diagrams, photos and colors, and finally a hands-on experience with practical exercises to reinforce the learning.

#2: Consider educational resources designed for and by teachers.

All teachers know that they will be less likely to have classroom management problems when they achieve total student engagement and their students are focused on the task. While it is not always necessary that a student enjoys an assignment, there are many techniques and strategies a teacher may use to keep a student enthusiastic and interested. An eGuide by Dr Patricia Fioriello, Keeping Students Engaged in the Classroom, is one of many useful resources providing helpful ideas and suggestions for teachers. You can get it free here.

#3: Create technology classroom lesson plans.

And of course one of the most exciting resources a teacher can access is computer technology. Children today are used to getting their information instantly, with a click of the mouse. Teachers must adapt to the world their students are living in and plan lessons accordingly, with shorter sections and frequent topic changes, keeping children connected and thus less likely to get bored and start misbehaving. More and more schools are able to afford technology, and teachers with access to it must use it to full advantage. Student engagement improves markedly when it is time for computer class.

The Internet provides a vast virtual community to interact, collaborate and work with. Taking advantage of this in schools, elementary school children start out by emailing classmates, forming pen-pal relationships with schools across the globe, or participating in safe classroom chat rooms. It is important at the outset that teachers instruct children about online etiquette, as well as safe Internet practices.

Kids benefit from learning computer skills at many different levels. They can use word processing for projects such as typing stories, writing poetry, project reports and book overviews. Since kids can easily make changes to word processing documents, more time can be spent verifying correct information and using creative techniques in their writing. Software programs that drill the student on math facts and language skills reinforce classroom curriculum and improve student engagement, since the programs can be individualized for the student's pace of learning. Kids as young as first grade are learning a foreign language such as Spanish, using podcasts developed by foreign language teachers. The Internet also allows students to learn the basics of finding information for classroom use and research for reports.
A student with a laptop is more likely to be totally engaged with his or her immediate learning experience, and the teacher's challenge then becomes to ensure that the assignments are difficult enough to engage the high performers while providing help and support to students struggling to keep up. It may be helpful to use group instruction to determine which students need a concept repeated or reinforced, allowing other students to take the concept a step further.

#4: Focus on cooperative learning classroom activities.

The most obvious way of keeping students engaged in the classroom is to provide a stimulating environment that considers the needs of all the students. Lack of engagement, or "dead time", can be replaced by active learning and active listening by creating an arsenal of routines and activities.

#5: Understand the role of classroom management.

Keeping kids engaged is not easy; consistency is key, and classroom discipline must be established on day one so that students are quiet and attentive when the teacher has something to explain. Children quickly disengage if the teacher over-explains or talks too long, leaving less time for the project at hand. The more the students work on their own, the more they learn and the more student engagement there will be in the learning process in the classroom.
Tickle to Remember Math Facts - Addition / Subtraction For School and Home (Reproducible)

Tickle to Remember Math Facts (Addition/Subtraction or Multiplication/Division) can be taught to:
- The very young learner (classroom tested on 4 year olds).
- The student who does better when problems are in larger print.
- The student who is a tactile, kinesthetic, or global learner.
- Any student who needs a strategy when solving math problems.