In recent years, a number of people have advocated for the return of classical languages like Latin and Greek to school curricula. Schools in Australia that have tried this have seen impressive gains in test scores across a variety of subjects.
So why does instruction in languages like Latin and Greek boost test scores? In 1911, Francis Kelsey explored the merits of Latin and Greek and came up with seven benefits that the languages provide for their students:
1. Trains in Scientific Method
“As an instrument of training in the essentials of a working method no modern language and no science is the equal of Latin, either in the number and variety of mental processes which may be stimulated with a minimum expenditure of effort, or in the ease and accuracy with which the results of those processes may be checked up, errors of observation or inference detected, and corrections made.”
2. Aids in knowledge of English
“The study of Latin and Greek contributes to the student’s command of English through the enlargement of his vocabulary, and the enrichment of it in synonyms expressing the finer shades of meaning; through his acquaintance with the original or underlying meanings of words, through his familiarity with the principles of word formation, and through the insight into the structure of the English language afforded by a mastery of the Latin.”
3. Deepens understanding of literature
As Kelsey explains, the works which remain in Latin and Greek are literary gems from which students can learn excellent writing and expression techniques:
“There is no page of a great master which does not yield to intensive study something more than a knowledge of words and constructions, something that will exert an influence, even if unperceived, toward the ideal in thought and expression.”
4. Gives insight into civilization
Just as knowledge of classical languages broadens our understanding of English, so knowledge of Greek and Latin sheds light on the ideas and history of the era in which these languages were in everyday use:
“As our language is rich in words of Greek and Roman origin, so the thoughts, practices, and ideals of daily life, when this rises above the bare necessities, reveal to the scrutinizing glance abundant elements that are part and parcel of an inheritance from classical antiquity.”
5. Cultivates imagination
Learning Latin and Greek opens up students to a deeper study of ancient history. This knowledge, in turn, provides more food for imagination, and by extension, more ideas for the student’s future life work:
“The man who has gained the power to picture accurately the scenes of ancient Athens and Rome will find it possible to combine in imagination the elements of a business situation in such a way as to seize opportunities and outflank his untrained competitors, or a lawyer will supply convincingly the missing link of evidence….”
6. Encourages morality
Many of the ancient works, Kelsey writes, have a heavy emphasis on virtues which we have lost, such as justice and loyalty. By learning the languages which these ancient works were written in, students have a greater opportunity to study these moral virtues without being distracted by modern theories and sometimes misleading commentary.
7. Provides recreation
As the education and knowledge of the mind is expanded through Greek and Latin, so too, is the knowledge and interest in other forms of play, entertainment, and hobbies:
“No studies lay a broader and surer foundation than do Greek and Latin for the appreciation of the things of the spirit in all forms of manifestation, whether in substance, as in the fine arts, or in less material media of expression.”
Interested in gaining these benefits, but clueless when it comes to Latin? Why not check out Getting Started with Latin: Beginning Latin for Homeschoolers and Self-Taught Students of Any Age?
Image Credit: Wknight94 bit.ly/19NgZNS
Annie Holmquist is the editor of Intellectual Takeout. When not writing or editing, she enjoys reading, gardening, and time with family and friends.
In the fight against climate change, scientists have searched for ways to replace fossil fuels with carbon-free alternatives such as hydrogen fuel.
A device known as a photoelectrochemical cell (PEC) has the potential to produce hydrogen fuel through artificial photosynthesis, an emerging renewable energy technology that uses energy from sunlight to drive chemical reactions such as splitting water into hydrogen and oxygen.
The key to a PEC’s success lies in how well its photoelectrode reacts with light to produce not only hydrogen but also oxygen. Few materials can do this well, and according to theory, an inorganic material called bismuth vanadate (BiVO4) is a good candidate.
Yet this technology is still young, and researchers in the field have struggled to make a BiVO4 photoelectrode that lives up to its potential in a PEC device. Now, as reported in the journal Small, a research team led by scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the Joint Center for Artificial Photosynthesis (JCAP), a DOE Energy Innovation Hub, has gained important new insight into what might be happening at the nanoscale (billionths of a meter) to hold BiVO4 back.
“When you make a material, such as an inorganic material like bismuth vanadate, you might assume, just by looking at it with the naked eye, that the material is homogeneous and uniform throughout,” said senior author Francesca Toma, a staff scientist at JCAP in Berkeley Lab’s Chemical Sciences Division. “But when you can see details in a material at the nanoscale, suddenly what you assumed was homogeneous is actually heterogeneous – with an ensemble of different properties and chemical compositions. And if you want to improve a photoelectrode material’s efficiency, you need to know more about what’s happening at the nanoscale.”
X-rays and simulations bring a clearer picture into focus
In a previous study supported by the Laboratory Directed Research and Development program, Toma and lead author Johanna Eichhorn developed a special technique using an atomic force microscope at Berkeley Lab’s JCAP laboratory to capture images of thin-film bismuth vanadate at the nanoscale to understand how a material’s properties can affect its performance in an artificial photosynthesis device. (Eichhorn, who is currently at the Walter Schottky Institute of the Technical University of Munich in Germany, was a researcher in Berkeley Lab’s Chemical Sciences Division at the time of the study.)
The current study builds on that pioneering work by using a scanning transmission X-ray microscope (STXM) at Berkeley Lab’s Advanced Light Source (ALS), a synchrotron user facility, to map out changes in a thin-film semiconducting material made of molybdenum bismuth vanadate (Mo-BiVO4).
The researchers used bismuth vanadate as a case example of a photoelectrode because the material can absorb light in the visible range in the solar spectrum, and when combined with a catalyst, its physical properties allow it to make oxygen in the water-splitting reaction. Bismuth vanadate is one of the few materials that can do this, and in this case, the addition of a small quantity of molybdenum to BiVO4 somehow improves its performance, Toma explained.
When water is split into H2 and O2, hydrogen-hydrogen and oxygen-oxygen bonds need to form. But if any step in water-splitting is out of sync, unwanted reactions will happen, which could lead to corrosion. “And if you want to scale up a material into a commercial water-splitting device, no one wants something that degrades. So we wanted to develop a technique that maps out which regions at the nanoscale are the best at making oxygen,” Toma explained.
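For reference, the chemistry involved is standard water-splitting electrochemistry (the equations below are textbook forms, not taken from the paper). The photoanode must sustain the demanding four-electron oxidation that forms the oxygen-oxygen bond, while hydrogen is evolved at the cathode:

```latex
% Water splitting written in the acidic-solution convention:
% oxidation at the photoanode, reduction at the cathode, overall reaction.
\begin{align*}
\text{photoanode:} \quad & 2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{cathode:}    \quad & 4\,\mathrm{H^+} + 4\,e^- \longrightarrow 2\,\mathrm{H_2} \\
\text{overall:}    \quad & 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```

When the slow oxygen half-reaction falls out of step with light absorption, the charge that cannot be used productively is what can drive the unwanted, corrosive side reactions described above.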
Working with ALS staff scientist David Shapiro, Toma and her team used STXM to take high-resolution nanoscale measurements of grains in a thin film of Mo-BiVO4 as the material degraded in response to the water-splitting reaction triggered by light and the electrolyte.
“Chemical heterogeneity at the nanoscale in a material can often lead to interesting and useful properties, and few microscopy techniques can probe the molecular structure of a material at this scale,” Shapiro said. “The STXM instruments at the Advanced Light Source are very sensitive probes that can nondestructively quantify this heterogeneity at high spatial resolution and can therefore provide a deeper understanding of these properties.”
David Prendergast, interim division director of the Molecular Foundry, and Sebastian Reyes-Lillo, a former postdoctoral researcher at the Foundry, helped the team understand how Mo-BiVO4 responds to light by developing computational tools to analyze each molecule’s spectral “fingerprint.” Reyes-Lillo is currently a professor at Andres Bello University in Chile and a Molecular Foundry user. The Molecular Foundry is a Nanoscale Science Research Center national user facility.
“Prendergast’s technique is really powerful,” Toma said. “Often when you have complex heterogeneous materials made of different atoms, the experimental data you get is not easy to understand. This approach tells you how to interpret those data. And if we have a better understanding of the data, we can create better strategies for making Mo-BiVO4 photoelectrodes less vulnerable to corrosion during water-splitting.”
Reyes-Lillo added that Toma’s use of this technique and the work at JCAP enabled a deeper understanding of Mo-BiVO4 that would otherwise not be possible. “The approach reveals element-specific chemical fingerprints of a material’s local electronic structure, making it especially suited for the study of phenomena at the nanoscale. Our study represents a step toward improving the performance of semiconducting BiVO4-based materials for solar fuel technologies,” he said.
The researchers next plan to further develop the technique by taking STXM images while the material is operating so that they can understand how the material changes chemically as a photoelectrode in a model PEC system.
“I’m very proud of this work. We need to find alternative solutions to fossil fuels, and we need renewable alternatives. Even if this technology isn’t ready for the marketplace tomorrow, our technique – along with the powerful instruments available to users at the Advanced Light Source and the Molecular Foundry – will open up new routes for renewable energy technologies to make a difference,” Toma said.
Co-authors with Toma and Eichhorn include Jason K. Cooper, David M. Larson, David Prendergast, Sebastian Reyes-Lillo, Shawn Sallis, Ian D. Sharp, Subhayan Roychoudhury, and Johannes Weis.
The work was supported by the DOE Office of Science and Berkeley Lab’s Laboratory Directed Research and Development program.
The Joint Center for Artificial Photosynthesis is a DOE Energy Innovation Hub. The Molecular Foundry and Advanced Light Source are DOE Office of Science user facilities co-located at Berkeley Lab.
Learn more at:
- “Splitting Water: Nanoscale Imaging Yields Key Insights,” news release, July 7, 2018
- “New Discovery Could Better Predict How Semiconductors Weather Abuse,” news release, July 5, 2016
Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 13 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.
Welcome to The Euro Format Determining Place and Value of Decimal Numbers from Thousandths to Hundreds (Large Print) (D) Math Worksheet from the Place Value Worksheets Page at Math-Drills.com. This math worksheet was created on 2021-08-11 and has been viewed 1 time this week and 3 times this month. It may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math.
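As a hypothetical illustration of the Euro number format the worksheet uses (a comma as the decimal separator), consider the number 824,537, which spans the full range from hundreds down to thousandths:

```latex
% Place-value expansion of the Euro-format number 824,537:
% 8 hundreds, 2 tens, 4 ones, 5 tenths, 3 hundredths, 7 thousandths.
824{,}537 = 8 \times 100 + 2 \times 10 + 4 \times 1
          + 5 \times 0{,}1 + 3 \times 0{,}01 + 7 \times 0{,}001
```

So, for example, the place of the digit 5 is the tenths, and its value is 0,5.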
Teachers can use math worksheets as tests, practice assignments or teaching tools (for example in group work, for scaffolding or in a learning center). Parents can work with their children to give them extra practice, to help them learn a new math skill or to keep their skills fresh over school breaks. Students can use math worksheets to master a math skill through practice, in a study group or for peer tutoring.
Use the buttons below to print, open, or download the PDF version of the Euro Format Determining Place and Value of Decimal Numbers from Thousandths to Hundreds (Large Print) (D) math worksheet. The size of the PDF file is 16796 bytes. Preview images of the first and second (if there is one) pages are shown. If there are more versions of this worksheet, the other versions will be available below the preview images. For more like this, use the search bar to look for some or all of these keywords: place, values, math, worksheet, numbers, identifying, determining, determine, digits, whole, decimals, fillable, saveable, savable.
The Print button initiates your browser's print dialog. The Open button opens the complete PDF file in a new browser tab. The Download button initiates a download of the PDF math worksheet. Teacher versions include both the question page and the answer key. Student versions, if present, include only the question page.
This worksheet is fillable and savable. It can be filled out and downloaded or printed using the Chrome or Edge browsers, or it can be downloaded, filled out and saved or printed in Adobe Reader.
Cells may be the basic unit of life, but their structures and functions differ greatly between living things. Plants are complex organisms and their cells contain many specialized organelles that carry out a variety of functions. Bacteria are simple, single-celled organisms. Bacterial organelles are fewer in number and less complex in design than plant organelles. Plant and bacteria cells share some basic structures that are necessary for cellular functions.
TL;DR (Too Long; Didn't Read)
Plant cells and bacterial cells both contain organelles that house DNA, produce proteins and provide support and protection to the cells. However, bacterial organelles are not membrane-bound.
Prokaryotes and Eukaryotes
Plants and animals are multicellular, eukaryotic organisms with cells that contain specialized organelles. Bacteria are single-celled, prokaryotic organisms. Eukaryotic cells are more complex than prokaryotic cells both in structure and function.
Bacterial cells have a simpler design and are smaller in size than eukaryotic cells. Like plants and animals, bacteria must be able to carry out the basic functions of life within their cells. Some of the same organelles are found in plant cells, animal cells and bacteria cells, including ribosomes, cytoplasm and cell membranes. All organisms require cellular structures that can:
- Store and manage genetic material.
- Synthesize proteins.
- Provide a medium that makes up the volume of the cell and allows movement of materials around the cell.
- Maintain the shape and integrity of the cell.
The Control Center
In plant cells, the nucleus contains DNA and controls the functions of the cell. The nucleus also contains another organelle – the nucleolus – which produces ribosomes. The nucleus and nucleolus are surrounded by the nuclear membrane.
Bacteria also have an organelle that contains DNA and controls the cell. Unlike the nucleus in plant cells, the nucleoid in bacterial cells is not held within a membrane. The nucleoid refers to an area in the cytoplasm where strands of DNA congregate. In bacteria, the DNA forms a single, circular chromosome.
The Protein Factory
Plant, bacteria and animal cells all have ribosomes that contain RNA and proteins. Ribosomes translate the genetic instructions in messenger RNA into chains of amino acids to make proteins. Proteins form enzymes and play a role in every function within cells. Plant ribosomes are made of more strands of RNA than those in simpler bacterial cells.
Plant ribosomes are usually attached to the endoplasmic reticulum. Bacteria do not have this organelle, so the ribosomes float freely in the cytoplasm.
The Cellular Matrix
Cell organelles are suspended in a gelatinous material called cytoplasm that makes up most of the volume of the cell. In bacterial cells, the genetic material in the nucleoid and ribosomes, nutrients, enzymes and waste products float freely in the cytoplasm.
Plant organelles are suspended in cytoplasm, but each organelle is contained within a membrane. Specialized organelles store and transport nutrients, wastes and other materials around the cytoplasm.
Membranes and Walls
Both plant and bacterial cells are surrounded by a rigid cell wall. The cell wall serves to protect the cells and give them shape. Cell walls in plant cells are made of cellulose and give structure to plant tissues.
The cell wall is especially important for bacteria because it protects the single-celled organisms from harsh environments and pressure differentials between the interior and exterior of the cell. Pathogenic bacteria have an additional exterior layer called the capsule that surrounds the cell wall. The capsule makes these bacteria more effective at spreading disease because they are more difficult to destroy.
Both plant and bacterial cells contain a cytoplasmic membrane. This layer lies interior to the cell wall and encases the cytoplasm and organelles.
About the Author
A.P. Mentzer graduated from Rutgers University with degrees in Anthropology and Biological Sciences. She worked as a researcher and analyst in the biotech industry and a science editor for an educational publishing company prior to her career as a freelance writer and editor. Alissa enjoys writing about life science and medical topics, as well as science activities for children.
What is Hydrocephalus?
Hydrocephalus comes from the Greek: “hydro” means water, “cephalus” means head. Hydrocephalus is an abnormal accumulation of cerebrospinal fluid (CSF) within cavities called ventricles inside the brain. CSF is produced in the ventricles, circulates through the ventricular system and is absorbed into the bloodstream. CSF is in constant circulation and has many important functions. It surrounds the brain and spinal cord and acts as a protective cushion against injury. CSF contains nutrients and proteins necessary for the nourishment and normal function of the brain. It also carries waste products away from surrounding tissues. Hydrocephalus occurs when there is an imbalance between the amount of CSF that is produced and the rate at which it is absorbed. As the CSF builds up, it causes the ventricles to enlarge and the pressure inside the head to increase.
Hydrocephalus that is congenital (present at birth) is thought to be caused by a complex interaction of environmental and perhaps genetic factors. Aqueductal stenosis and spina bifida are two examples. Acquired hydrocephalus may result from intraventricular hemorrhage, meningitis, head trauma, tumors and cysts. Hydrocephalus is believed to occur in about 2 out of 1,000 births. The incidences of adult-onset hydrocephalus and acquired hydrocephalus are not known.
How is Hydrocephalus treated?
There is no known way to prevent or cure hydrocephalus. The most effective treatment is surgical insertion of a shunt. Endoscopic third ventriculostomy (ETV) is growing in popularity as an alternative treatment method for hydrocephalus.
To learn more about Hydrocephalus, shunts, and ETV please visit www.hydroassoc.org
* Notice: The information provided here is for informational, educational and entertainment purposes only. It is not intended to replace, and should not be interpreted or relied upon as, medical or professional advice. Your use of this site means that you agree to the terms and conditions detailed in our disclaimer.
Preface

PART I. AN OVERVIEW

Chapter 1: Psychological Testing and Assessment
Testing and Assessment; Who, What, Why, and Where?
Close-Up: Types of Computer-Generated Psychological Reports
Everyday Psychometrics: "The Following Film is Rated PG-13"...But Whodunnit? How? And Why?
Self-Assessment

Chapter 2: Historical, Cultural, and Legal/Ethical Considerations
A Historical Perspective; Culture and Assessment; Legal and Ethical Considerations
Close-Up: Assessment, Admissions, and Affirmative Action: Grutter v. Bollinger et al. (2003)
Everyday Psychometrics: Life-or-Death Psychological Assessment
Self-Assessment

PART II. THE SCIENCE OF PSYCHOLOGICAL MEASUREMENT

Chapter 3: A Statistics Refresher
Scales of Measurement; Describing Data; The Normal Curve; Standard Scores
Everyday Psychometrics: Consumer (of Graphed Data) Beware!
Close-Up: The Normal Curve and Psychological Tests
Self-Assessment

Chapter 4: Of Tests and Testing
Some Assumptions About Psychological Testing and Assessment; What's a "Good Test"?; Norms; Correlation and Inference; Inference from Measurement
Everyday Psychometrics: Putting Tests to the Test
Close-Up: Good Ol' Norms and the GRE
Self-Assessment

Chapter 5: Reliability
The Concept of Reliability; Reliability Estimates; Using and Interpreting a Coefficient of Reliability; Reliability and Individual Scores
Close-Up: The Reliability of the Bayley-II
Everyday Psychometrics: The Reliability Defense and the Breathalyzer Test
Self-Assessment

Chapter 6: Validity
The Concept of Validity; Content Validity; Criterion-Related Validity; Construct Validity; Validity, Bias, and Fairness
Close-Up: Base Rates and Predictive Validity
Everyday Psychometrics: Adjustment of Test Scores by Group Membership: Fairness in Testing or Foul Play?
Self-Assessment

Chapter 7: Test Development
Test Conceptualization; Test Construction; Test Tryout; Item Analysis; Test Revision
Everyday Psychometrics: Psychometrics in the Classroom
Close-Up: Designing an Item Bank
Self-Assessment

PART III. THE ASSESSMENT OF INTELLIGENCE

Chapter 8: Intelligence and Its Measurement
What Is Intelligence?; Measuring Intelligence; Intelligence: Some Issues; A Perspective
Everyday Psychometrics: Being Gifted
Close-Up: Culture Fair/Culture Loaded
Self-Assessment

Chapter 9: Tests of Intelligence
The Stanford-Binet Intelligence Scales; The Wechsler Tests; Other Measures of Intelligence
Close-Up: Factor Analysis
Everyday Psychometrics: The Armed Services Vocational Aptitude Battery (ASVAB): A Test You Can Take
Self-Assessment

Chapter 10: Preschool and Educational Assessment
Preschool Assessment; Achievement Tests; Aptitude Tests; Diagnostic Tests; Psychoeducational Test Batteries; Other Tools of Assessment in Educational Settings
Everyday Psychometrics: First Impressions
Close-Up: Tests of Minimum Competency
Self-Assessment

PART IV. THE ASSESSMENT OF PERSONALITY

Chapter 11: Personality Assessment: An Overview
Personality and Personality Assessment Defined; Personality Assessment: Some Basic Questions; Developing Instruments to Assess Personality; Personality Assessment and Culture
Everyday Psychometrics: Some Common Item Formats
Close-Up: Assessing Acculturation and Related Variables
Self-Assessment

Chapter 12: Personality Assessment Methods
Objective Methods; Projective Methods; Behavioral Assessment Methods; A Perspective
Everyday Psychometrics: Confessions of a Behavior Rater
Close-Up: Personality, Life Outcomes, and College Yearbook Photos
Self-Assessment

PART V. TESTING AND ASSESSMENT IN ACTION

Chapter 13: Clinical and Counseling Assessment
An Overview; The Interview; Case History Data; Psychological Tests; Special Applications of Clinical Measures; The Psychological Report
Close-Up: Assessment of Dangerousness and the Secret Service
Everyday Psychometrics: Elements of a Typical Report of Psychological Assessment
Self-Assessment

Chapter 14: Neuropsychological Assessment
The Nervous System and Behavior; The Neuropsychological Examination
Close-Up: Fixed Versus Flexible Neuropsychological Test Batteries and the Law
Everyday Psychometrics: Medical Diagnostic Aids and Neuropsychological Assessment
Self-Assessment

Chapter 15: The Assessment of People with Disabilities
An Overview; Assessment and Specific Disabilities
Everyday Psychometrics: Public Law 105-17 and Everyday Practice
Close-Up: Expert Testimony
Self-Assessment

Chapter 16: Assessment, Careers, and Business
Career Choice and Career Transition; Screening, Selection, Classification, and Placement; Productivity, Motivation, Attitude, and Organizational Culture; Other Applications of Tools of Assessment
Close-Up: Validity Generalization and the GATB
Everyday Psychometrics: Assessment of Corporate and Organizational Culture
Self-Assessment

References; Credits; Name Index; Glossary/Index

Cohen, Ronald Jay is the author of 'Psychological Testing and Assessment: An Introduction to Tests and Measurement', published 2004 under ISBN 9780072887679 and ISBN 0072887672.
The Significance of the Qing Period for China-India Relations
November 18, 2019 - 15:00-16:30
For much of history, China and India were connected chiefly through religious and economic ties. Today, their relationship is one between powerful states sharing a long border, in which military and diplomatic relations are particularly salient. The period of the Qing Empire (1644-1912) marks a transitional stage in this relationship, as Manchu emperors pushed their sphere of control to the Himalayas and Pamirs, while British imperial officials solidified their grasp on the southern edge of these ranges. This talk explores the evolving India-China relationship in the Qing period from the perspective of rulers in Beijing. Geopolitically, it explains why the initial expansion of the Qing frontier toward India had little to do with concern about the subcontinent, but was rather driven by rivalries with Inner and Central Asian powers. Intellectually, it considers the manifold sources available in Beijing about historical and contemporary India, and the ways officials and scholars attempted to make sense of them. Against this background it traces the emerging Qing recognition of British India as a near and formidable power, concluding with a reflection on the legacy bequeathed to the present by the meeting of Qing and British imperialism in the nineteenth century.
For more information and registration, please see here.
Silver long cross penny of Henry III
Plantagenet, struck in the period 1247-1272
The long cross coinage was introduced in 1247 in an effort to solve the problem of coin clipping. This practice, whereby dishonest individuals would clip silver from the edges for profit, resulted in underweight coins circulating and public dissatisfaction with the coinage. Long cross coins were similar to the preceding short cross coinage but had a long cross on the reverse that extended to the edge of the coin, breaking the legend. This helped to reduce clipping because for the first time the public knew how big the coin should be. This example was struck by the moneyer Nicole at the Winchester mint.
Each of the readings for this week focused on the organism’s (human’s) relationship to its environment and the totality of the system–what Bateson calls “organism plus environment” (p. 455). In Form, substance, and difference, Bateson investigates the overlap between formal premises and actual behavior. At the foreground of this investigation is the question of survival. He begins by examining traditional approaches to the relationship, namely Darwin’s theory of evolution that posits natural selection as the primary unit of survival (p. 456). This theory, Bateson argues, leads to the destruction of environment and thus the destruction of the organism. Rather than the strongest or most unyielding of organisms, Bateson posits, the unit of survival is a flexible organism in its environment.
Crucial for understanding this position is Bateson’s conception of Mind. Discoveries in cybernetics, systems theory, and information theory have created a shift in epistemology that no longer positions mind as the explanation–instead, it is the thing that must be explained (p. 456). At the center of this explanation is the importance of difference, which Bateson identifies as synonymous with “idea” and which results in “effects” (p. 458). From this perspective, it is a difference that makes a difference that serves as the elementary unit of information. We perceive differences through both internal and external pathways that help us map the environment. While other theorists have created a dichotomy between mind and substance, Bateson’s approach considers both.
An organism adapts to the environment through a transformation resulting in behaviors of trial and error that are coded and transmitted through internal and external pathways. In this sense, then, he explains that “if you want to explain or understand anything in human behavior, you are always dealing with total circuits, completed circuits. This is the elementary cybernetic thought” (p. 465). The mental system demonstrates the characteristic of trial and error and, according to Bateson, “the way to delineate the system is to draw the limiting line in such a way that you do not cut any of these pathways in ways that leave things inexplicable” (p. 465). In this way, Mind is synonymous with cybernetic system–”the relevant total information-processing trial-and-error completing unit” (p. 466). From this perspective, Mind is part of the ecosystem and Bateson argues “the individual mind is immanent but not only in the body. It is immanent also in pathways and messages outside the body” (p. 467). Human survival, then, is the result of the mind’s ability to code and transform the body’s relationship to the environment.
This relationship is further explored by Gibson‘s theory of affordances. Gibson argues that objects in the environment afford human actions. For example, a solid, flat ground affords walking and a small rock affords throwing. He points out, however, that an object’s physical properties must be measured relative to the individual animal for the object to serve as an affordance of support for that species.
Throughout the chapter, Gibson explains the concept of affordances, acknowledging that other animals and humans also serve as affordances–indeed, social interactions rely on an individual’s perception (or misperception) of what another individual affords him or her. He identifies existing concepts of environment, such as an ecologist’s niche and how those correlate with his theory of affordances (a “niche is a set of affordances”). Objects can have multiple affordances, so Gibson emphasizes the importance of understanding that to perceive an affordance is not to classify an object–the potential multiplicity of affordances actually complicates the ability to classify objects. Additionally, positive and negative affordances are determined by their effects (beneficial or dangerous) rather than an individual’s level of enjoyment of the experience.
In the final reading, Norman expands Gibson’s theory of affordances to graphical design, focusing on what he calls perceived affordances: the perception that a meaningful, useful action is afforded by the design. In product design, he argues, real and perceived affordances “need not be the same.” Physical affordances exist–keyboards, computer screens, etc.–but perceived affordances determine whether or not users recognize the availability of affordances.
Norman also identifies the importance of cultural constraints and cultural conventions when designing perceived affordances. Designers must operate within the cultural understandings of how results are afforded by certain actions, even though many of those actions are arbitrary (a horizontal scroll bar, for example). He concludes by offering four principles for screen interfaces, illustrated in the sketch after this list:
- Follow conventional usage, both in the choice of images and the allowable interactions
- Use words to describe the desired action
- Use metaphor (even though he believes this can be harmful)
- Follow a coherent conceptual model so that once part of the interface is learned, the same principles apply to other parts
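A minimal sketch of the real-versus-perceived distinction, written as a hypothetical Python/Tkinter example (the widgets and the "Save file" action are our own illustration, not Norman's): both widgets below respond to a click, but only the conventionally styled button signals that it will.

```python
import tkinter as tk

root = tk.Tk()
root.title("Perceived affordance demo")

def on_activate(event=None):
    # Stand-in for whatever action the interface actually performs.
    print("Action performed")

# Principles 1 and 2: a raised, bordered button is the conventional image
# for "clickable," and its label describes the action in words, so the
# real affordance is also a perceived one.
button = tk.Button(root, text="Save file", command=on_activate)
button.pack(padx=20, pady=10)

# The same real affordance exists here (a click binding), but nothing
# about a flat text label signals it, so the perceived affordance is absent.
label = tk.Label(root, text="Save file")
label.bind("<Button-1>", on_activate)
label.pack(padx=20, pady=10)

root.mainloop()
```

Clicking either widget prints the same message; only the button looks like it will.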
Affordances: what the environment offers (provides or furnishes) to an animal; implies complementarity of the animal and the environment (Gibson)
Attached objects: items that cannot be removed from earth without breakage (Gibson)
Barriers: environmental objects that do not afford locomotion (Gibson)
Bioenergetics: economy of energy and materials; boundaries at the cell membrane or skin; measurements are additive or subtractive (Bateson, p. 466)
Creatura: Jung’s second world of exploration–effects are brought about by difference (Bateson, p. 462)
Cultural constraints: learned conventions that are shared by a cultural group that limits design/action (Norman)
Detached objects: items that can be removed from earth without breakage (Gibson)
Economics of information: budgeting of pathways and probability; budgets are fractionating rather than additive or subtractive; boundaries enclose rather than cut off pathways (Bateson)
External pathways: travel of information (perceived differences) by propagation of light or sound to the sensory organs that transform it to internal pathways (Bateson, p. 459)
Internal pathways: travel of information energized by the metabolic energy latent in the protoplasm which receives, recreates, or transforms it and passes it on (Bateson, p. 459)
Mental System: the unit which shows the characteristic of trial and error (Bateson, p. 465)
Niche: a set of affordances (Gibson)
Perceived affordance: the perception that an action is possible (Norman)
Pleroma: Jung’s first world of exploration–events are caused by forces and impacts; there are no distinctions or differences (Bateson, p. 462)
Principle of occluding edges: the law of reversible occlusion, reliant on opaque and nonopaque surfaces; affords hiding (Gibson)
It seems that all three of these readings focus on the idea of totality–the survival of the ecosystem is dependent on the relationships within it. Bateson and Gibson both argue that human actions that center on the survival of humans will destroy the ecosystem. As humans manipulate and adapt the environment to make it easier for them to survive, they change the ecosystem to the detriment of the organisms within it and, eventually themselves. Although Norman is talking about graphic design, there seems to be a similar idea. We must design within the cultural constraints or conventions or the product will not be used–the system will fail.
While I didn’t fully understand all of Bateson’s concepts, I found the argument about fractionality interesting. It seems that he’s saying we cannot add to or subtract from an environment. Instead, the environment exists as is and all of the elements within it are fractions of it, including how information is coded, processed, and transmitted. Behaviors are complete systems of activity and the environment is a part of that system.
Ecological affordances: http://youtu.be/Ekm60F4AD3Y
Ecology of Mind Revisited: http://www.earthzine.org/2010/10/21/a-re-introduction-to-ecology-of-mind/
Don Norman’s website: http://www.jnd.org/
Bateson, G. (1987). Form, substance, and difference. Steps to an ecology of mind: Collected essays in anthropology, psychiatry, evolution, and epistemology (pp. 454-471). Northvale, NJ: Jason Aronson Inc.
Gibson, J. J. (1986). The theory of affordances. The ecological approach to visual perception. Hillsdale, NJ: Lawrence Erlbaum Associates.
Norman, D. (n.d.). Affordances and design. Retrieved from https://docs.google.com/file/d/0BzIskzHsjKsRN0NRRktncjBGb1U/edit
Annexation, a formal act whereby a state proclaims its sovereignty over territory hitherto outside its domain. Unlike cession, whereby territory is given or sold through treaty, annexation is a unilateral act made effective by actual possession and legitimized by general recognition.
Annexation is frequently preceded by conquest and military occupation of the conquered territory. Occasionally, as in the German annexation of Austria in 1938 (see Anschluss), a conquest may be accomplished by the threat of force without active hostilities. Military occupation does not constitute or necessarily lead to annexation. Thus, for instance, the Allied military occupation of Germany after the cessation of hostilities in World War II was not followed by annexation. When military occupation results in annexation, an official announcement is normal, to the effect that the sovereign authority of the annexing state has been established and will be maintained in the future. Israel made such a declaration when it annexed the Golan Heights in 1981, as did Russia following its annexation of the Ukrainian autonomous republic of Crimea in 2014. The subsequent recognition of annexation by other states may be explicit or implied. Annexation based on the illegal use of force is condemned in the Charter of the United Nations.
Conditions may exist which obviate the necessity for conquest prior to annexation. In 1910, for example, Japan converted its protectorate of Korea into an annexed colony by means of proclamation. Preceding its annexation of the Svalbard Islands in 1925, Norway eliminated its competitors by means of a treaty in which they agreed to Norwegian possession of the islands. Annexation of Hawaii by the United States in the late 19th century was a peaceful process, based upon the willing acceptance by the Hawaiian government of U.S. authority.
The formalities of annexation are not defined by international law; whether it be done by one authority or another within a state is a matter of constitutional law. The Italian annexation of Ethiopia in 1936 was accomplished by a decree issued by the king of Italy. Joint resolutions of Congress were the means by which the United States annexed Texas in 1845 and Hawaii in 1898. See also conquest.
Amblyopia, commonly known as ‘lazy eye,’ is a neuro-developmental vision condition that begins in early childhood, usually before the age of 8.
Lazy eye develops when one eye is unable to achieve normal visual acuity, causing blurry vision in the affected eye—even when wearing glasses. Left untreated, amblyopia can lead to permanent vision loss in one eye.
It’s important to understand that a lazy eye isn't actually lazy. Rather, the brain doesn’t process the visual signals from the ‘lazy’ eye. Eventually, the communication between the brain and the weaker eye deteriorates further, potentially leading to permanently reduced vision in that eye. Fortunately, vision therapy can improve the condition by training the brain to work with both eyes equally.
What Causes Lazy Eye?
When the neural connections between the eyes and the brain are healthy, each eye sends a visual signal to the brain. The brain combines these two signals into one clear image, allowing us to properly see what we are looking at.
In the case of amblyopia, the brain doesn’t recognize the weaker eye's signals. Instead, it relies only on the visual input from the stronger eye.
Amblyopia can be caused by strabismus, anisometropia and deprivation.
Strabismus occurs when the eyes are misaligned and point in different directions. The most common cause of amblyopia is eye misalignment, which causes the brain to receive two images that cannot be combined into one single, clear image.
A child's developing brain cannot process images when both eyes are not aligned in the same direction, so it ‘turns off’ the images sent by the weaker eye. This is the brain's defense mechanism against confusion and double vision.
As the brain ‘turns off’ the weaker eye, this eye will eventually become 'lazy'—unless treatment is provided.
Anisometropia is when the refractive (focusing) powers of your eyes differ markedly, causing your eyes to focus unevenly and making the visual signal from one eye much clearer than the other’s. The brain is unable to reconcile the different images each eye sends and chooses to process the visual signal from the eye sending the clearer image. The brain begins to overlook the eye sending the blurrier image, further weakening the eye-brain connection of the weaker eye. If not treated, this results in permanently poor vision in that eye.
Deprivation refers to a blockage or cloudiness of the eye. When an eye becomes cloudy, the blockage prevents a clear image from reaching the retina, harming the child’s ability to see clearly with that eye. When clear images can’t reach the retina, vision in that eye suffers, resulting in amblyopia. Deprivation is by far the most serious kind of amblyopia, but it is also incredibly rare.
There are several types of deprivation: cataracts, cloudy corneas, cloudy lenses and eyelid tumors. Each of these can affect a child's vision, resulting in amblyopia. Because these are also difficult to notice from a child's behavior, it's crucial to have your child tested for eye-related problems so that treatment can begin right away.
How To Treat Amblyopia
The goal of most amblyopia treatments is to naturally strengthen the weaker eye so that your child's eyes can work and team with the brain more effectively. Amblyopia treatment will be determined by the cause and severity of their condition.
Common types of treatment include:
- Corrective eyewear
- Eye drops
- Vision Therapy
Vision therapy is the most effective treatment for amblyopia, which may be used in conjunction with other treatments.
A vision therapy program is customized to the specific needs of the patient. It may include the use of lenses, prisms, filters, occluders, and other specialized equipment designed to actively make the lazy eye work to develop stronger communication between the eye and the brain.
Vision therapy is highly successful for the improvement of binocular vision, visual acuity, visual processing abilities, depth perception and reading fluency.
Vision therapy programs for amblyopia may include eye exercises to improve these visual skills:
- Accommodation (focusing)
- Binocular vision (the eyes working together)
- Fixation (visual gaze)
- Pursuits (eye-tracking)
- Saccades (eye jumps)
- Spatial skills (eye-hand coordination)
- Stereopsis (3-D vision)
Contact Grand Developmental Vision Institute to make an appointment and discover how vision therapy can help improve your child’s vision. Our eye doctor will ask about your child’s vision history, conduct a thorough evaluation, and take your child on the path to effective and lasting treatment.
- A: It’s difficult to recognize lazy eye because the condition usually develops in one eye, and may not present with a noticeable eye turn. As such, children generally learn how to ignore the lazy eye and compensate by mainly relying on the sight from the ‘good’ eye. Some symptoms of lazy eye include:
- Closing one eye or squinting
- Difficulty with fine eye movements
- Poor depth perception
- Poor eye-hand coordination
- Reduced reading speed and comprehension
- Rubbing eyes often
- A: Your child’s eye doctor will conduct specific tests during their eye exam to assess the visual acuity, depth perception and visual skills of each eye.
Grand Developmental Vision Institute serves patients from Winnipeg, Selkirk, Portage La Prairie, and Brandon, all throughout Manitoba.
Renewable energy Fonda
Renewable energy is the future of energy technology. It is clean, safe, and sustainable. The main types of renewable energy are solar, wind, water, and biomass. These technologies are used to generate electricity, heat, or motive power.
Our online course can take you through the basics of types of renewable energy and the technology used to generate it. You'll learn about:
- Solar Energy: Solar panels collect sunlight and convert it into electricity.
- Wind Energy: Wind turbines harness the power of the wind to generate electricity.
- Water Energy: Hydroelectric dams capture the energy of moving water to generate electricity.
- Biomass Energy: Biomass power plants burn organic material to generate electricity.
Renewable energy is a critical part of the fight against climate change. Burning fossil fuels like coal, oil, and natural gas releases greenhouse gases into the atmosphere, trapping heat and raising the Earth's temperature. This "global warming" can lead to more extreme weather, droughts, floods, and hurricanes. It also threatens the habitats of plants and animals around the world.
Renewable energy doesn't produce greenhouse gases, so it's a vital part of the solution to climate change. And as renewable energy technology gets more advanced and less expensive, it's becoming an increasingly viable option for individuals, businesses, and governments.
There are many types of renewable energy, but they all have one thing in common: they derive from natural processes that are continually replenished. That means we can never "run out" of renewable energy sources, unlike fossil fuels.
Renewable energy facts:
- It is a clean energy source that does not produce greenhouse gases or other pollutants.
- It is a sustainable source of energy that can be used indefinitely.
- It is a renewable source of energy that can be replenished.
If you are ready to become a renewable energy expert, sign up for our class today.
Vol.29, No.01. 2018
The impacts of global changes on the natural environment of the polar regions have been observed and investigated by scientists over recent decades. For example, the melting of glaciers and the changes induced in both the global carbon cycle and the freeze–thaw cycle of the polar regions due to global warming have exerted pressure on the polar environment. Furthermore, the warming climate, changes in hydrological conditions, and changes to the rates of methane and nitrous oxide production and respiration in different types of soil have also constituted a challenge to polar region ecosystems.
Environmental changes in the polar regions, especially Antarctica, have been influenced by very few anthropogenic factors because of the remoteness of these areas. However, heavy metal elements, persistent organic pollutants, and organophosphorus esters have been found in polar regions following transportation via atmospheric and oceanic circulations, and even by human activity. All the effects related to both natural environmental changes and human activities are cumulative, doubling the challenges faced by the polar ecosystem.
The polar environment ecosystem includes invertebrates, birds, mammals, algae, land plants, lichens, and microorganisms. The ecosystem response to polar environmental change includes the appearance of new species of microorganisms, alteration to the habits and population dynamics of plants and animals, and variation in biodiversity. Because human activity has had minimal impact in polar regions, the polar ecosystem’s response to environmental change is highly significant for scientific research, and polar life is recognized as a bioindicator of global change.
Over recent decades, research scientists from around the world have paid increasing attention to the investigation and monitoring of environmental changes in polar regions. In particular, Chinese scientists have achieved considerable progress in the research fields of the polar atmosphere, oceans, glaciers, and marine life. We would like to gather and publish these results in this special issue of Advances in Polar Science. For this issue, we have two reviews and six articles contributed by Chinese researchers and their international collaborators.
The first review focuses on Antarctic birds and mammals, i.e., penguins and seals, and it highlights the need for long-term, continuous monitoring and investigation to assess the impact of climate change on these species. The second review reports the discovery of organophosphorus esters, an organic pollutant, on the Antarctic Ice Sheet. The first article reports the creation of 13 permanent plots on Fildes Peninsula, King George Island, Antarctica, which are used as a network for long-term monitoring of vegetation, including Deschampsia antarctica (a native vascular plant), mosses, lichens, and microorganisms. This network is serving as a platform for multidisciplinary Antarctic research studies including botany, microbiology, ecology, and environmental science. Following studies of diversity and population characteristics, the second and third articles address an important discovery: cultured fungi from the Arctic aquatic environment could be used as prototype drugs for medicinal purposes. The nucleotide differences of the mbf1 gene in the lichenized fungus Umbilicaria decussata from the Antarctic, the Arctic, and extra-polar Armenia were compared, and the data obtained have proven useful for further polar studies. The remaining three articles consider the serious pollution situation in the polar regions as a warning. These important works highlight the influence of summer soil temperature, nutrients, and moisture in the High Arctic on CO2 fluxes, ecosystem respiration, and net ecosystem exchange. The rates of methane and nitrous oxide production and respiration from different soils are also reported, and their relationships with the activity of the freeze–thaw cycle of the coastal Antarctic tundra are indicated. In the final article, which concerns heavy metal elements, two pre-treatment methods for mercury stable isotope analysis are introduced and their use with Antarctic moss is demonstrated successfully.
Adaptation to upcoming conditions may include changes in fertility, said a new study by an international group of researchers, who examined the economic channels through which climate change could affect fertility, including sectoral reallocation, the gender wage gap, longevity, and child mortality.
They found that, through its economic effects, climate change will affect fertility decisions, which depend on how much time and money parents devote to child rearing. These decisions also depend on whether parents spread their resources across more children or concentrate them on the future of each child.
Colombia and Switzerland were the examples they examined to study how the demographic effects of climate change might differ across locations and between richer and poorer countries.
Two stages of life were considered -- childhood and adulthood. Parents must decide how to divide limited resources between supporting current family consumption, having children, and paying for each child's education. Children's future income depends on parental decisions of the present, said the study.
Lead author Gregory Casey, from Williams College in Massachusetts said that global temperature rise will affect agricultural and non-agricultural sectors differently. "Near the equator, where many poorer countries are, climate change has a larger negative effect on agriculture," which they said would lead to scarcity of agricultural goods, higher agricultural prices and wages.
Ultimately, this results in a reallocation of labour, since agriculture makes less use of skilled labour. In essence, climate change reduces the return on acquiring skills, leading parents to invest less in each child's education and to increase fertility, said the study. However, the study found that the pattern was reversed at higher latitudes.
Co-author Soheil Shayegh from Bocconi University in Milan, Italy, said that the study suggests climate change may worsen inequalities by reducing fertility and increasing education in richer northern countries, while increasing fertility and reducing education in tropical countries.
"This is particularly poignant, because those richer countries have disproportionately benefited from the natural resource use that has driven climate change" he stated.
Finally, Casey clarified: "Our model only deals with a single economic channel, so it is not intended to give a complete quantitative account of the impact of climate change on demographic outcomes. Further work is needed on other economic channels, especially those related to health."
The study was published in the journal Environmental Research Letters.
Between 1868 and 1898, three wars for Cuba’s independence from Spain were fought. Although the first two—the Ten Years’ War (1868-1878) and La Guerra Chiquita (1878-79)—were not successful, they laid the foundation for the Cuban War of Independence (1895-8), which gained the country independence from Spain. The revolutionaries, generals, and politicians active in this period had a strong impact on these wars, as well as Cuba’s history after independence, and were celebrities across the world.
The items in this collection are portraits of Cuban revolutionaries and notable public figures, as well as some international celebrities, which were taken from the Havana-based newspaper La Caricatura between 1895 and 1912. Some of these figures are very famous, such as Jose Marti and Antonio Maceo. Others, such as J.A.D. McCurdy and Dr. Diego Tamayo Figueredo, are less well-known despite their contributions to history. The collection also includes cartoons and propaganda items from the time period.
These items were donated by the Manteiga family.
Getting rid of Organic Solutes
There are two mechanisms by which activated carbon removes contaminants from water. The first one is called adsorption, which is not to be confused with absorption. The second one is called catalytic reduction.
Adsorption vs Absorption
Since the spelling is very similar people often confuse these two terms. When you clean up some spilled water with a paper towel, the moisture is absorbed, and travels inside the actual structure of the paper fibers. That doesn’t happen with activated carbon.
Activated carbon uses adsorption, a principle which is much more like flypaper because unwanted chemicals stick to its surface. When a housefly gets close to flypaper, the color, odor, and shininess draw it in, and when it lands it becomes permanently stuck on the surface. Activated carbon attracts the nasty organic compounds, too, and because it has such a large amount of surface compared to its size, it can repeat this action millions of times.
Catalytic Reduction

Catalytic reduction is less important to us. It is the method by which activated carbon reduces chlorine-type compounds in the water. But as you will see below, our shower filters are designed in such a way that those compounds are completely neutralized before they get to the carbon filter. By not relying on this capability, we extend the functional life of the activated carbon filter.
Why use an activated carbon filter?
Chlorine is a well-studied, well-understood, and well-tolerated disinfectant used to kill pathogens in the water supply. CBPs (Chlorination By-Products) such as trihalomethanes (THM) have been identified as possibly being carcinogenic, but that has not yet been proven.
Some 20% of municipalities have been experimenting with chloramine as a substitute to meet the EPA guidelines for reduced CBPs, primarily because it is very inexpensive. There are many methods to reduce CBPs, the simplest of which is pre-filtering out organic matter so that the chlorine has nothing to react with.
Chloramine has been linked to the leaching of lead from old pipes and increased lead exposure, which particularly affects children and their mental development. More importantly, very few studies have been done on how chloramine affects the respiratory system, skin, or digestion in human beings of any age. Municipalities are now searching for alternative methods of reducing CBPs that are safe for humans. Chlorine is back in fashion because it is well understood.
Information like this just makes it that much more important to eliminate chlorine and the CBPs which accompany it. In countertop water filtering pitchers, typically you will find that activated carbon is the go-to solution. When used for low-volume, cold water treatment, it works very well, provided you change the filters regularly.
Granulated Activated Carbon
Granulated Activated Carbon (GAC) filters can remove 70-90% of chlorine from water, but they have a limited capacity to do so. They tend to clog and fail particularly quickly in hot water (such as your shower) when exposed to chlorine.
GAC’s greatest weakness is that it is effective and durable only in cold water; hot water quickly exhausts its ability to remove chlorine, so water is best dechlorinated before it reaches the carbon element.

For this reason, the water in our filters always passes first through an insoluble pellet bed of calcium sulfite, which converts 100% of the free chlorine into harmless, non-toxic materials (chlorides) that are safe for humans. This considerably extends the life of the GAC filter, allowing it to do the job it is truly best suited for.
One of the GAC’s greatest strengths is its ability to remove dissolved organic chemicals such as pesticides, fertilizers, hydrogen sulfide (that rotten egg smell), dissolved drugs (leftover pharmaceuticals that people have flushed down the toilet instead of returning to the drugstore for proper disposal), and strange tastes or odors.
GAC filters are particularly effective because of a characteristic they possess called microporosity. Just 1 gram of activated carbon, or about 1/28th of an ounce, has a molecular surface area in excess of 32,000 ft², or 3,000 m². How big is that? It is the equivalent of more than 11 doubles-tennis courts, or roughly two NHL hockey rinks: a truly massive area. Organic chemicals are attracted to the surface of the activated carbon and are permanently trapped.
What It Won’t Do
While activated carbon filters are stunningly effective when they are used to remove Volatile Organic Compounds (VOCs) and sediments, when it comes to inorganic compounds, they don’t do very much at all. By definition, organic compounds are those that contain carbon.
Carbon has an affinity for carbon, which is why we use an activated carbon filter. If you were to take a volume of saltwater and put it through a water filtration pitcher, it would still be saltwater.
Really Easy Chemistry
Table salt is composed of two different elements bound closely together: sodium (Na) and chlorine (Cl), which combine to form NaCl (sodium chloride). Separately, these two substances are toxic to human beings, but when they connect they become harmless table salt. As you can see, there is no carbon (C), so salt will just pass right through the filter, unaffected.
This is the reason we use multistage filters. Each individual segment has its specialized job to do.
Now you know the secret of how activated carbon removes those strange flavors and smells, and how it protects you from dermatitis and dandruff caused by undesirable chemicals like chlorine and its byproducts.
There are just a couple more things you need to know. Right now there’s a new fad coming from manufacturers in Japan and Korea. They are making shower filters loaded with vitamin C (ascorbic acid/sodium ascorbate).
They point out that it is the most effective way to remove chloramine from the water supply. While that is true, the odds are about 80% that your water is treated with chlorine, so that expensive solution is probably not for you. You do not want to be replacing those cartridges every month, especially if you don’t have chloramine. If you contact your water utility and ask, they’ll tell you whether your water is treated with chlorine or chloramine.
Vitamin C does indeed reduce chloramine in your shower water, but it doesn’t remove any of the other volatile organic chemicals. The other problem is that the vitamin C cartridge dissolves over the course of time, and there’s no way to tell that it is exhausted—that the vitamin C is all gone.
The last consideration is that the human body is normally a little bit alkaline when we’re completely healthy. Vitamin C is an acid, and you should ask yourself if you want to be showering in acidic water.
We are big fans of real science here. We don’t need to scare you to make you use our products; and we don’t need to deceive you so that you don’t use someone else’s product. We just tell you the truth, because we believe you are smart enough to make your own good decision! |
Whales, dolphins and porpoises make up the classification order Cetacea, which contains two suborders, Mysticeti and Odontoceti. The baleen whales are members of the Mysticeti suborder, while the toothed whales, dolphins and porpoises make up the suborder Odontoceti. Altogether, the two suborders contain eighty-one known species, separated into thirteen different families. In each family are a number of species, each classified further into ‘sub-families’, or genera, of which there are 40.

What Are Cetaceans?

There are many misconceptions about cetaceans (whales, dolphins and porpoises), the most common of which is the idea that cetaceans are fish. They’re not – they are mammals, like you and me.
Millions of years ago, they lived on land; their bodies were covered in hair, they had external ears, they walked on four legs, and they bore live young. As mammals, cetaceans share these characteristics that are common to all:

- They are warm-blooded animals.
- They breathe in air through their lungs.
- They bear their young alive and suckle them on their own milk.
- They have hair – though generally only a few ‘whiskers’.
Another way of discerning a cetacean from a fish is by the shape of the tail. The tail of a fish is vertical and moves from side to side when the fish swims. The tail of a cetacean is horizontal and moves up and down instead.

The Cetacean’s Adaptations for Sea Life

Over a period of millions of years, the cetacean returned to the sea – there was more food there, and more space than on land.
Because of this increase in space, and because the water provided buoyancy, there was no longer a natural limit to the cetacean’s size (on land, that limit is the amount of weight its legs can hold). It no longer had any need for legs. During this time, the cetacean lost the qualities that fitted it for land existence and gained new qualities for life at sea.
Its hind limbs disappeared, and its body became more tapered and streamlined – a form that enabled it to move swiftly through the water. For the same reason, most of its fur disappeared, reducing the resistance of the giant body to the water. The cetacean’s original tail was replaced by a pair of flukes that act like a propeller. As part of this streamlining process, the bones in the cetacean’s front limbs fused together. In time, what had been the forelegs became a solid mass of bone, blubber and tissue, making very effective flippers that balance the cetacean’s tremendous bulk. After the cetacean’s hair disappeared, it needed some way of preserving its body heat. This came in the form of blubber, a thick layer of fat between the skin and the flesh that also acts as an emergency source of energy.
In some cetaceans the layer of blubber can be more than a foot thick.

Breathing, Seeing, Hearing and Echolocation

Since the cetacean is a mammal, it needs air to breathe. Because of this, it needs to come to the water’s surface to exhale its carbon dioxide and inhale a fresh supply of air. Naturally it cannot breathe under water, so as it dives a muscular action closes the blowholes (nostrils), which remain closed until the cetacean next breaks the surface. When it does, the muscles open the blowholes and warm air is exhaled. To make this easier, the cetacean’s blowholes have moved to the top of its head, giving it a quicker chance to expel the stale air and inhale fresh air.
When the stale air, warmed from the lungs, is exhaled, it condenses as it meets the cold air outside. This is rather like when you breathe out on a cold day and a small cloud of vapour appears. This is called the ‘blow’, or ‘spout’, and each cetacean’s blow is different in terms of shape, angle and height. This is how cetaceans can be identified at a distance by experienced whalers or whale-watchers.

The cetacean’s eyes are set well back and to either side of its huge head. This means that cetaceans with pointed ‘beaks’ (such as dolphins) have good binocular vision forward and downward, but others with blunt heads (such as the Sperm Whale) can see to either side but not directly ahead or directly behind. The eyes shed greasy tears, which protect them from the salt in the water, and cetaceans have been found to have good vision both in the water and out of it.
Like the eyes, the cetacean’s ears are also small. Life in the sea accounts for the cetacean’s loss of its external ears, whose function is to collect sound waves and focus them so that they become strong enough to hear well. However, sound waves travel faster through water than through air, so the external ear was no longer needed; it is now no more than a tiny hole in the skin, just behind the eye. The inner ear, however, has become so well developed that the cetacean can not only hear sounds tens of miles away, but can also discern the direction from which the sound comes.
Cetaceans use sound in the same way as bats – they emit a sound, which then bounces off an object and returns to them. From this, cetaceans can discern the size, shape, surface characteristics and movement of the object, as well as how far away it is. This is called sonar, or echolocation, and with it cetaceans can search for, chase and catch fast-swimming prey in total darkness. It is so advanced that most cetaceans can discern between prey and non-prey (such as humans or boats), and captive cetaceans can be trained to distinguish between, for example, balls of different colours, sizes or shapes. Cetaceans also use sound to communicate, whether it be groans, moans, whistles, clicks or the complex ‘singing’ of the Humpback Whale that is becoming so popular on wildlife documentaries and relaxation tapes.
When it comes to food and feeding, this is where cetaceans can be separated into two distinct groups. The ‘toothed whales’ or Odontoceti have lots of teeth that they use for catching fish, squid or other marine life. They do not chew their food, but swallow it whole. The cetaceans in this group include the Sperm Whale, dolphins and porpoises.
The ‘baleen whales’ or Mysticeti do not have teeth. Instead they have plates made of keratin (the same substance as our fingernails) which hang down from the upper jaw. These plates act like a giant filter, straining small animals (such as plankton, krill and fish) from the seawater. Cetaceans included in this group include the mighty Blue Whale, the Humpback Whale, the Bowhead Whale and the Minke Whale.
Talking to children about food gives you insight into a child’s eating habits and into how parents are instilling positive eating habits in their child. You can create a questionnaire that asks children about their eating habits, choices and preferences, giving you an opportunity to support those eating options or provide new and healthier suggestions to improve a child’s health.
Family meals offer significant benefits to a child’s eating habits: they increase the range of food groups offered to children, raise their exposure to fruits and vegetables, and allow kids to become involved in meal planning. Include a section on your questionnaire that asks kids how many days, during an average week, they sit down together with their family to eat a meal. Ask them if they have an opportunity to contribute to meal planning and, if so, what suggestions they normally make. Additionally, you can ask them to describe a normal meal to get an idea of the kinds of foods they generally eat at their family meals.
Fast food does not have to be as unhealthy as it once was if customers are careful in their quantity and selection. Include a section on your questionnaire regarding the child’s exposure to fast food and her choices when she does eat fast food. Ask children where they usually go for their fast food and the specific things they eat. Question them about why they like that restaurant over another option, and also ask how much food they eat when they visit a fast food restaurant.
Snack food options are essential to a growing child, and much like fast food choices, they can be healthy or unhealthy -- depending on the quantity and specific food they choose. Determine if a child makes good snack choices, or has good snack choices made for him, by asking him what foods he generally eats during snack times. Ask him if he has snack foods available to him around the clock or if he has to wait for specific times to eat his snacks.
Children make food choices based on personal likes and dislikes instead of on the specific advantages of one food over another. Determine if the child has healthy preferences when it comes to her daily eating by asking her to name her favourite foods. Prepare other questions that ask children about their taste choices between healthy and semi-healthy food options. Try to discern if the child has healthy food preferences that you may not be aware of, allowing you to replace other unhealthy choices with more healthy selections while accommodating a child’s tastes. |
Spotting and correcting grammar mistakes is a significant part of ACT English. Questions that test your ability to properly use singular and plural forms are especially common in the English section. You’ll often be presented with mistakes related to singular/plural word form errors, either in the text or in the answer choices. There are a few rules you should be aware of as you look for these mistakes and their proper corrections on the exam.
Plural Nouns Have Plural Verbs While Singular Nouns Have Singular Verbs
This is a rule that ACT English loves to break in its prompts. You need to pay close attention to plural forms in the readings and questions.
Suppose the subject of a sentence is a plural noun phrase, such as “my mother and father.” In that case, the verb that describes the actions of “mother and father” must also be plural. A common mistake you’ll see in an ACT English passage might read “My mother and father was enthusiastic about the school project as soon as I mentioned it to them.” Do you see the problem with was? It should be were. That’s the correct plural past tense form of to be. An improper use of a plural verb might appear in a follow-up sentence such as “To this day, my mother are proud of the work we did together on the school project.” Are would need to be changed to is, the present-tense verb that matches the singular subject mother.
If There Is A Modal Before The Verb, The Verb Will Not Have A Singular Or Plural Form
This rule is easy to forget, and ACT English questions with modals (see Reviewing Common Modals in ACT English, Parts 1 and 2) often trick students with it. So remember: if there is a modal word before a verb, the verb will appear in its bare form, without any “s” added at the end to mark the verb as singular. This is true even when the subject of a sentence is singular.
Let’s look at an example with the modal might. It’s perfectly grammatical to say “they might do that” or “she might do that.” The bare verb form do is used with both a plural subject like they and a singular subject like she if a modal is present. Remove the modal, however, and you’ll still need to follow the rule for singular verbs that go with singular subjects. Without the modal, these two statements have different verb forms: “they do that” and “she does that.”
Plural Nouns Should Be Followed By Plural Pronouns, Singular Nouns Should Be Followed By Singular Pronouns
Pronouns must have the same plural or singular nature as the nouns they refer to. You wouldn’t say “The last two U.S. presidents served two terms, and he each had a total of eight years in office.” You would of course have to say that they each served for eight years. Also, you wouldn’t say “The badger is a common animal in North America and they dig burrows into the ground.” You’d say that it digs burrows, because the subject, the badger, is singular.
On the surface, this rule seems more straightforward than the rules for singular verbs: there are no exceptions of any sort to this pronoun referent rule. However, it is one of the easiest rules to forget as you read ACT English passages, and mistakes in singular and plural pronouns are some of the hardest to spot. This is because ACT English won’t necessarily present a noun and its pronoun in the same sentence, as seen in my examples above. Instead, you’re more likely to see the mistakes presented in longer segments of writing, like these:
- The last two U.S. presidents, George W. Bush and Barack Obama, were both re-elected, serving two full terms in office. A single U.S. presidential term is four years long. This means he had a total of eight years each as president.
- The badger is a species of animal common in North America. Digging burrows into the soft soil of the American Midwestern landscape, they live underground.
In the complex academic passage on the ACT, pronouns almost never appear in the same sentence as the nouns they reference. As a result, it’s easy for your sense of singular and plural to get “lost” in the writing maze.
To make sure you can keep track of the different singular and plural forms on the ACT, study these rules carefully. Recognizing these rules and understanding when they’ve been broken can really help your score on the ACT. |
Absolute vs Relative URL
A Uniform Resource Locator (URL) is an address that specifies where a particular document or resource is located on the World Wide Web (WWW). The best example of a URL is the address of a web page on the WWW, such as http://www.cnn.com/. An absolute URL, also called an absolute link, is a complete internet address that takes a user to the exact directory or file of a website. A relative URL, or partial internet address, points to a directory or file relative to the current directory or file.
What is Absolute URL?
An absolute URL, which provides the complete address of a web page or a resource on the WWW, generally has the format given below.
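In its general form, an absolute URL combines the three sections described below:

protocol://hostname/other_details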
Usually, the Hyper Text Transfer Protocol (http://) is used as the protocol section, but the protocol could also be ftp://, gopher://, or file://. The hostname is the name of the computer on which the resource resides; for example, the hostname of CNN’s central web server is www.cnn.com. The other_details section includes information about the directory and the file name, and its exact meaning depends on both the protocol and the host. The resource pointed to by an absolute URL normally resides in a file, but it can also be generated on the fly.
What is Relative URL?
As mentioned earlier, a relative URL points to a resource relative to the current directory or file. A relative URL can take several different forms. When referring to a file that resides in the same directory as the currently viewed page, the relative URL can be as simple as the name of the file itself. As an example, if you need to create a link in your home page to a file called my_name.html, which resides in the same directory as your home page, you can simply use the file name as follows:
<a href="my_name.html">My name</a>
If the file you need to link to is within a subdirectory of the referring page’s directory, you need to include the subdirectory name and the file name in the relative URL. For example, if we are trying to link to a file my_parents.html that is within a directory called parents, which resides inside the directory that contains your home page, the relative URL will look like the following:
<a href="parents/my_parents.html">My Parents</a>
Additionally, if you want to refer to a resource that resides in a directory higher in the directory structure than the directory containing the referring page, you can use two consecutive dots. For example, if you want to refer to a file called home.html that is in a directory above your home page, you can use a relative URL as follows.
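Assuming home.html sits one directory above the referring page, such a link would look like this:

<a href="../home.html">Home</a>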
Difference between Absolute URL and Relative URL
The main difference between an absolute URL and a relative URL is that an absolute URL is a complete address that points to a file or resource, while a relative URL points to a file relative to the current directory or file. An absolute URL contains more information than a relative URL, but relative URLs are much easier to use because they are shorter and more portable. However, relative URLs can only refer to resources that reside on the same server as the page that references them.
Graphite has a wide variety of almost contradictory uses. An allotrope of carbon and one of the world’s softest minerals, its uses range from writing implements to lubricants. A one-atom-thick sheet of it, graphene, can be rolled into a cylinder to form a super-strength material used in sports equipment. Graphite can behave like a metal and conduct electricity, but also as a nonmetal that resists high temperatures.
Graphite occurs naturally as flakes and veins within rock fractures or as amorphous lumps. The basic crystalline structure of graphite is a flat sheet of strongly bonded carbon atoms in hexagonal cells. Called graphenes, these sheets stack above each other to create volume, but the vertical bonds between the sheets are very weak. The weakness of these vertical bonds enables the sheets to cleave and slide over one another. However, if a graphene sheet is aligned and rolled horizontally, the resultant material is 100 times stronger than steel.
Writing and Artists' Materials
“Lead” pencil cores are made of a mixture of clay and graphite. Loosely cleaved graphite flakes mark the paper, and the clay acts as a binding material. The higher the graphite content of the core, the softer the pencil and the darker its trace. There is no lead in what are known as lead pencils. The name originated in Europe when graphite was called “plumbago” or “black lead” because of its metallic appearance. Graphite’s use as a marker dates from the 16th century in northern England, where local legend states that shepherds used a newly discovered graphite deposit to mark sheep.
Lubricants and Refractories
Graphite reacts with atmospheric water vapor to deposit a thin film over any adjacent surfaces and reduces the friction between them. It forms a suspension in oil and lowers friction between two moving parts. Graphite works in this way as a lubricant up to a temperature of 787 degrees Celsius (1,450 degrees Fahrenheit) and as an anti-seize material at up to 1,315 degrees Celsius (2,399 degrees Fahrenheit). Graphite is a common refractory material because it withstands high temperatures without changing chemically. It is used in manufacturing processes ranging from steel and glass making to iron processing. It is also an asbestos substitute in automobile brake linings.
Lithium-ion batteries have a lithium cathode and a graphite anode. As the battery charges, positively charged lithium ions in the electrolyte – a lithium salt solution – accumulate around the graphite anode. A lithium anode would make a more powerful battery, but lithium expands considerably when charged. Over time, the lithium anode’s surface becomes cracked, causing lithium ions to escape. These in turn form growths called dendrites in a process that can short-circuit the battery.
Rolled single graphene sheets are 10 times lighter, as well as 100 times stronger, than steel. Such a rolled sheet forms what is known as a carbon nanotube, and this derivative of graphite is among the world’s strongest identified materials; it has been used to make super-strength, lightweight sports equipment. Its high electrical conductivity, low light absorbance and chemical resistance make it an ideal material for future applications, including medical implants such as artificial hearts, flexible electronic devices and aircraft parts.
"This is the first experiment where we can directly see local structures in three dimensions at atomic-scale resolution — that's never been done before," said Jianwei (John) Miao, a professor of physics and astronomy and a researcher with the California NanoSystems Institute (CNSI) at UCLA.
Miao and his colleagues used a scanning transmission electron microscope to sweep a narrow beam of high-energy electrons over a tiny gold particle only 10 nanometers in diameter (almost 1,000 times smaller than a red blood cell). The nanoparticle contained tens of thousands of individual gold atoms, each about a million times smaller than the width of a human hair. These atoms interact with the electrons passing through the sample, casting shadows that hold information about the nanoparticle's interior structure onto a detector below the microscope.
Miao's team discovered that by taking measurements at 69 different angles, they could combine the data gleaned from each individual shadow into a 3-D reconstruction of the interior of the nanoparticle. Using this method, which is known as electron tomography, Miao's team was able to directly see individual atoms and how they were positioned inside the specific gold nanoparticle.
Presently, X-ray crystallography is the primary method for visualizing 3-D molecular structures at atomic resolutions. However, this method involves measuring many nearly identical samples and averaging the results. X-ray crystallography typically takes an average across trillions of molecules, which causes some information to get lost in the process, Miao said.
"It is like averaging together everyone on Earth to get an idea of what a human being looks like — you completely miss the unique characteristics of each individual," he said.
X-ray crystallography is a powerful technique for revealing the structure of perfect crystals, which are materials with an unbroken honeycomb of perfectly spaced atoms lined up as neatly as books on a shelf. Yet most structures existing in nature are non-crystalline, with structures far less ordered than their crystalline counterparts — picture a rock concert mosh pit rather than soldiers on parade.
"Our current technology is mainly based on crystal structures because we have ways to analyze them," Miao said. "But for non-crystalline structures, no direct experiments have seen atomic structures in three dimensions before."
Probing non-crystalline materials is important because even small variations in structure can greatly alter the electronic properties of a material, Miao noted. The ability to closely examine the inside of a semiconductor, for example, might reveal hidden internal flaws that could affect its performance.
"The three-dimensional atomic resolution of non-crystalline structures remains a major unresolved problem in the physical sciences," he said.
Miao and his colleagues haven't quite cracked the non-crystalline conundrum, but they have shown they can image a structure that isn't perfectly crystalline at a resolution of 2.4 angstroms (the average size of a gold atom is 2.8 angstroms). The gold nanoparticle they measured for their paper turned out to be composed of several different crystal grains, each forming a puzzle piece with atoms aligned in subtly different patterns. A nanostructure with hidden crystalline segments and boundaries inside will behave differently from one made of a single continuous crystal — but other techniques would have been unable to visualize them in three dimensions, Miao said.
Miao's team also found that the small golden blob they studied was in fact shaped like a multi-faceted gem, though slightly squashed on one side from resting on a flat stage inside the gigantic microscope — another small detail that might have been averaged away when using more traditional methods.
This project was inspired by Miao's earlier research, which involved finding ways to minimize the radiation dose administered to patients during CT scans. During a scan, patients must be X-rayed at a variety of angles, and those measurements are combined to give doctors a picture of what's inside the body. Miao found a mathematically more efficient way to obtain similar high-resolution images while taking scans at fewer angles. He later realized that this discovery could benefit scientists probing the insides of nanostructures, not just doctors on the lookout for tumors or fractures.
Nanostructures, like patients, can be damaged if too many scans are administered. A constant bombardment of high-energy electrons can cause the atoms in nanoparticles to be rearranged and the particle itself to change shape. By bringing his medical discovery to his work in materials science and nanoscience, Miao was able to invent a new way to peer inside the field's tiniest structures.
The discovery made by Miao's team may lead to improvements in resolution and image quality for tomography research across many fields, including the study of biological samples.
This research was conducted at CNSI's Electron Imaging Center for NanoMachines and funded by UC Discovery/Tomosoft Technologies. Tomosoft Technologies is a start-up company based on Miao's work.
Other UCLA co-authors included Chris Regan, an assistant professor of physics and astronomy and a CNSI researcher; graduate students Mary Scott, Chien-Chun Chen, Matthew Mecklenburg and Chun Zhu; and postdoctoral scholar Rui Xu. In particular, Chen and Scott played an important role in this work. Peter Ercius and Ulrich Dahmen from the National Center for Electron Microscopy at Lawrence Berkeley National Laboratory are also co-authors.
Since the Industrial Revolution, when factories and mines first started spewing mercury into the environment, the amount of the heavy metal in the ocean has tripled. Some lakes and rivers are equally polluted. But researchers have discovered a cheap way that we might be able to start sucking mercury back out of the water.
A new type of rubber made from a combination of industrial waste and food scraps quickly absorbs mercury. Because it is made from trash – limonene, a compound found in orange peels, and sulfur waste from the petroleum industry – the material is very cheap to make.
Sulfur was already known for its ability to suck up mercury. But adding limonene to the mix makes the material flexible, so it can be molded into different shapes or used as a coating inside pipes or in a water filter. For large-scale remediation, water could flow over a bed made of the material.
It also can serve as a low-cost test to show if water is polluted. “The real surprise was that the material changes color when it binds to mercury,” says Justin Chalker, who developed the rubber at his lab at Flinders University in Australia.
Once the rubber is soaked with mercury, it could be replaced or potentially cleaned up and reused. “We are currently working on methods to recover the metal and reuse the polymer,” he says.
The researchers are already testing the material at old mines, a hotspot for mercury pollution. Here’s hoping it can eventually be used in the ocean, where over 80% of fish now contain unsafe levels of the toxin. |
On Jan 15, 1997, ZetaTalk stated that flashes associated with booms were due to methane gas released during underwater or land-based quakes. On March 22, 1999, documentation on the web about earthquake lights confirmed this.
The first recorded sighting of earthquake lights dates back to 373 BC in Greece, but stories have long been told of strange lights in the skies before, during and after an earthquake. Today their existence is an accepted fact, although the mechanism that generates them is still a mystery. The first known scientific investigation of earthquake lights took place in the 1930s, and in the 1960s earthquake lights were well documented in a series of photographs taken in Japan.

Japanese earthquake light photos (Steinbrugge Collection, Earthquake Engineering Research Center, University of California, Berkeley):
- Ball of light
- Horizon light
- B & W horizon light
Eyewitness descriptions: The lights are most evident in the middle of a quake. People who have seen them sometimes describe them as searchlights and sometimes as fireballs or lightning. Other witnesses describe them as consisting of beams and columns of light, and still others report clouds that were illuminated during earthquakes, or simply an eerie glow in the sky.

In an article in Nature titled "Earthquake Lights and Seismicity," Marcel Ouellet described the lights that appeared during a three-month period from November 1988 through January 1989, during a series of seismic shocks in the Saguenay region of Quebec, Canada: "Fireballs a few metres in diameter often popped out of the ground in a repetitive manner at distances of up to only a few metres away from the observers. Others were seen several hundred metres up in the sky, stationary or moving. Some observers described dripping luminescent droplets, rapidly disappearing a few metres under the stationary fireballs. Only two fire-tongues on the ground were reported, one on snow and the other on a paved parking space without any apparent surface fissure. The colours most often identified were orange, yellow, white and green. Some luminosities lasted up to 12 min."

Flashes of light were widely reported before the 1995 earthquake at Kobe, Japan:
- Some residents of Kobe and nearby cities saw aurora-like phenomena in the sky just before and after the quake.
- A Kobe firefighter observed a bluish-orange light above a shaking road that lasted about 4 seconds.
- A hotel employee on his way to work on Rokko mountain: "saw a flash running from east to west about two to three meters above the ground shortly after the quake. The orange flash was framed in white."
- Flashes of light were widely observed.
(Science Frontiers #99, MAY-JUN 1995. © 1997 William R. Corliss)

One of the strongest earthquake illuminations came during the Chinese earthquake of 1976, when it was reported that the lights at the centre of the earthquake were bright enough to turn night into day. As far as 320 kilometres from the epicenter of the quake, people woke up thinking their room lights had been turned on. Just last year, on June 4, 1998, residents of the Charlotte, North Carolina area reported seeing a bright flash when the area north of Charlotte was hit by a 3.2 magnitude earthquake. Oddly, the National Earthquake Center denied that the North Carolina flash was attributable to the June 4 quake. The flash was said to have been so bright that area residents at first thought that a meteor had struck the ground.
What causes earthquake lights?
The most common explanation for earthquake lights is the piezoelectric effect in quartz-bearing rock. Quartz has the unusual attribute of emitting electricity under pressure. Laboratory experiments have shown that this effect can produce light emissions, but they are, at least in the laboratory, of much shorter duration than reported earthquake lights. Some researchers theorize that earthquake lights are produced by seismic stresses that may generate high voltages, creating small masses of ionized gas which are then released into the air near the fault line.

A second popular theory is that, during an earthquake, small pockets of trapped natural gas are released and ignited by friction. These burning balls of gas then rise in the air and create the effect of the lights.

Another theory is that the pressure generated during earthquakes may cause water molecules to separate into atoms of hydrogen and oxygen, then quickly recombine back into water. In the process they could theoretically release light and create the mysterious earthquake lights.
What is Treaty Day?
In Western Canada, Treaty Day commemorates the annual meeting at which representatives of the federal government distributed treaty payments to members of Indigenous bands who signed the Numbered Treaties. The first of these annual payments was made in 1872, and they are still distributed to this day, although now they are mostly a symbolic gesture. Most descendants of the Numbered Treaties signatories receive $4 or $5 annually — an amount that has not increased over time to reflect inflation. These minimal funds are financially insignificant, but they affirm the relationship between Indigenous peoples and the government.
In Nova Scotia, Treaty Day has a slightly different meaning. There, Treaty Day commemorates the signing and significance of the Peace and Friendship Treaties of the 1700s (see Indigenous Peoples: Treaties). These treaties brought an end to years of warfare between the Mi’kmaq and the British. Although there is no distribution of annual payments, Treaty Day in Nova Scotia commemorates the Indigenous rights of the Mi’kmaq people and their relationship to the Crown.
Despite these differences between celebrations across Canada, Treaty Days are annual celebrations where the descendants of Indigenous signatories honour these historic commitments.
When is Treaty Day?
Treaty Day is celebrated on different days throughout Canada, depending on the particular treaty that is being commemorated. For example, 1 October 1986 was proclaimed as Treaty Day in Nova Scotia and since that time has been celebrated annually to recognize the connection between the Crown and the Mi’kmaq, and to commemorate the Peace and Friendship Treaties. In the West, Treaty Day or Treaty Days (also known as Urban Treaty Days, which generally applies to all Indigenous nations in the region regardless of treaty affiliation) is celebrated in the summer months in Saskatchewan, Manitoba and Alberta. For the Indigenous peoples of the West, Treaty Day commemorates the treaties between the federal government and Indigenous nations.
Aside from Nova Scotia, provinces and territories do not have a specific day on which to celebrate the signing of historic treaties. However, Indigenous nations that were promised annual payments in the Numbered Treaties and in the Robinson-Huron and Robinson-Superior treaties still receive those payments from the federal government. Oftentimes, these cash payments are made in person, directly from a representative of the federal government.
Lack of a provincially recognized Treaty Day also does not prohibit Indigenous peoples from commemorating historic treaties in their own communities. For instance, Mi’kmaq and Wolastoqiyik (Maliseet) communities in New Brunswick, Prince Edward Island, Québec and Newfoundland and Labrador celebrate the Peace and Friendship Treaties. The peoples of Treaties 8 and 11 (covering parts of the present-day Northwest Territories, Yukon, British Columbia, Alberta and Saskatchewan) have been holding annual celebrations of treaty payments since the treaties were signed in the summers of 1899–1900 and 1921, respectively. The anniversaries of modern treaties, such as the 1975 James Bay and Northern Québec Agreement, are also celebrated within specific communities.
In some cases, efforts have been made on the part of both Indigenous peoples and governments to bring greater awareness to treaty history and treaty peoples. In many parts of Canada, historic treaties are celebrated as part of National Aboriginal Day on 21 June every year. In 2016, as part of Ontario’s commitment to reconciliation with Indigenous peoples (see Truth and Reconciliation Commission), the province declared the first week of November as Treaties Recognition Week.
How is it Celebrated?
In Nova Scotia, Treaty Day is celebrated with powwows, dances, parades, concerts, speeches, feasts and other social and cultural gatherings. The same is true in Western Canada, although they also celebrate Treaty Day with the distribution of annual payments. The amount of the payment depends on the agreement stipulated in the treaty. These cash payments can be distributed on or off reserve. In the past, government officials would distribute food, ammunition, clothing, hunting and fishing equipment and other goods, in addition to monies.
Although many treaty promises have remained unfulfilled by the federal government, Indigenous communities still believe in the importance of commemorating the historic relationship between treaty peoples and the federal government. Today, some dignitaries wear replica treaty medals to the festivities as a way of honouring these binding commitments. Treaty Day also brings families and communities together in celebration of Indigenous culture and heritage, while reaffirming Indigenous and treaty rights (see Rights of Indigenous Peoples). |
We’ve all heard the old assertion that no two snowflakes are alike. But is this statement really accurate? Since we haven’t seen every single snowflake, how can we know whether any two can be the same? Joe Hanson of PBS Digital Studios’ “It’s Okay To Be Smart” has the answer.
Snowflakes are formed from frozen water during extremely cold weather (it has to be colder than 23 degrees Fahrenheit, or -5 Celsius). The initial formation of the flakes takes place in the atmosphere, where water mixes with dust particles to create ice crystals. As the crystals fall to the ground, more water freezes onto the initial ice crystals, filling in the available spaces to make the characteristic six-sided snowflake. The shape of the snowflake (and its pattern) depends on the atmospheric conditions that the flake encounters as it falls (things like temperature and humidity). As you may be aware, the ice crystals that make up snowflakes are symmetrical (patterned). This is because the ice crystals reflect the internal order of the water molecules as they arrange themselves in predetermined spaces (a process known as “crystallization”).
Each snowflake is composed of some quintillion (10^18) water molecules. This means that there is a (nearly) endless number of combinations. “The chance of two snowflakes being alike on the atomic level, even factoring in deuterium, is so infinitesimally small that we may as well call it zero,” Hanson explained in an email to HuffPost Science. “The number of possible arrangements of the 10^18 water molecules [in a snowflake] is such a large number that it dwarfs the number of atoms in the universe many, many times over. Somewhere, a supercomputer is weeping just thinking about having to calculate a number that large.”
In this lesson, you will learn how to use a thermistor. Thermistors can be used as electronic circuit components for temperature compensation in instrument circuits, and they appear in current meters, flowmeters, gas analyzers, and other devices. They can also be used for overheating protection, contactless relays, constant-temperature control, automatic gain control, motor starting, time delay, automatic degaussing in color TVs, fire alarms, and temperature compensation.
In this example, we use the analog pin A0 to read the value of the thermistor. One pin of the thermistor is connected to 5V, and the other is wired to A0. A 10kΩ resistor connects that same pin to GND.
You can open the file 2.27_thermistor.ino under the corresponding lesson path, or copy the code into the Arduino IDE.
After uploading the code to the Mega2560 board, you can open the Serial Monitor to check the current temperature. The Kelvin temperature is calculated according to the Beta formula TK = 1 / (ln(RT/RN)/B + 1/TN), where RN is the thermistor's nominal resistance at the reference temperature TN, and B is its Beta coefficient.
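If you don't have the original listing at hand, here is a minimal sketch of the same idea. The Beta coefficient (3950) and the nominal resistance (10 kΩ at 25 °C) are assumed values, not taken from the lesson; check your thermistor's datasheet.

// Minimal sketch (assumed constants): NTC thermistor from 5V to A0,
// 10 kΩ resistor from A0 to GND, as wired above.
const int thermistorPin = A0;
const float B_COEFF = 3950.0;   // Beta coefficient B (assumed; see datasheet)
const float R_N = 10000.0;      // nominal resistance RN in ohms (assumed)
const float T_N = 298.15;       // reference temperature TN (25 °C) in Kelvin
const float R_FIXED = 10000.0;  // series resistor to GND, in ohms

void setup() {
  Serial.begin(9600);
}

void loop() {
  int adc = analogRead(thermistorPin);  // 0..1023 on the Mega2560
  if (adc > 0) {
    // The thermistor sits on the 5V side, so V(A0) = 5 * R_FIXED / (RT + R_FIXED).
    // Solving the divider for the thermistor resistance RT:
    float rT = R_FIXED * (1023.0 / adc - 1.0);
    // Beta formula from the lesson: TK = 1 / (ln(RT/RN)/B + 1/TN)
    float tK = 1.0 / (log(rT / R_N) / B_COEFF + 1.0 / T_N);
    Serial.print("Temperature: ");
    Serial.print(tK - 273.15);  // Kelvin to Celsius
    Serial.println(" C");
  }
  delay(1000);
}

This prints a reading once per second; open the Serial Monitor at 9600 baud to watch the values.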
- What is Sin Cos equal to?
- What is cos in math?
- Does it matter if I use sine or cosine?
- Why Sine is called sine?
- Why do we use sine?
- How do you go from sin to cos?
- How is trigonometry used in real life?
- Why are sine and cosine functions important?
- Why do we need cosine?
- How sine and cosine is invented?
- Who is father of trigonometry?
What is Sin Cos equal to?
Sine, Cosine and Tangent:
- Sine Function: sin(θ) = Opposite / Hypotenuse
- Cosine Function: cos(θ) = Adjacent / Hypotenuse
- Tangent Function: tan(θ) = Opposite / Adjacent
What is cos in math?
In a right-angled triangle, the cosine of an angle is the length of the adjacent side divided by the length of the hypotenuse. The abbreviation is cos: cos(θ) = adjacent / hypotenuse.
Does it matter if I use sine or cosine?
The sine rule is used when we are given either a) two angles and one side, or b) two sides and a non-included angle. The cosine rule is used when we are given either a) three sides or b) two sides and the included angle.
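For reference, the sine rule states that the ratio of each side to the sine of its opposite angle is constant:

a / sin(A) = b / sin(B) = c / sin(C)

Here a, b and c are the sides of the triangle, and A, B and C are the angles opposite them.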
Why Sine is called sine?
The word “sine” (Latin “sinus”) comes from a Latin mistranslation by Robert of Chester of the Arabic jiba, which is a transliteration of the Sanskrit word for half the chord, jya-ardha.
Why do we use sine?
The sine function is defined as the ratio of the side of the triangle opposite the angle divided by the hypotenuse. This ratio can be used to solve problems involving distance or height, or if you need to know an angle measure. Example: … To find the length of the side opposite the angle, d, we use the sine function.
How do you go from sin to cos?
The Basic Two: Sine and Cosine
(1) Memorize: sine = (opposite side) / hypotenuse. …
(2) sin A = cos(90° − A) or cos(π/2 − A); cos A = sin(90° − A) or sin(π/2 − A)
(3) Memorize: …
(4) tangent = (opposite side) / (adjacent side)
(5) Memorize: …
(6) tan A = cot(90° − A) or cot(π/2 − A) …
(7) sec A = csc(90° − A) or csc(π/2 − A)
How is trigonometry used in real life?
Trigonometry can be used to roof a house, to make the roof inclined (in the case of single individual bungalows), and to work out the height of the roof in buildings. It is used in the naval and aviation industries, and in cartography (the creation of maps). Trigonometry also has applications in satellite systems.
Why are sine and cosine functions important?
Because these functions turn up a lot in nature and they also are useful tools in mathematics. The sine and cosine functions are almost the same – they are just shifted a little bit compared to each other. They come up a lot in systems that have circular motion. … The sine function is needed to describe waves.
Why do we need cosine?
The Law of Cosines is useful for finding: the third side of a triangle when we know two sides and the angle between them (like the example above) the angles of a triangle when we know all three sides (as in the following example)
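Written out, for a triangle with sides a, b and c, where angle C lies between sides a and b:

c² = a² + b² − 2ab·cos(C)

When C is a right angle, cos(C) = 0 and the rule reduces to the Pythagorean theorem.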
How sine and cosine is invented?
In the early 9th century AD, Muhammad ibn Mūsā al-Khwārizmī produced accurate sine and cosine tables, and the first table of tangents. He was also a pioneer in spherical trigonometry. In 830 AD, Habash al-Hasib al-Marwazi produced the first table of cotangents.
Who is father of trigonometry?
Hipparchus. The first trigonometric table was apparently compiled by Hipparchus, who is consequently now known as “the father of trigonometry”.
A physical basis for the sharp bend that characterizes the geometry of the Hawaiian-Emperor hotspot track in the north Pacific Ocean is presented in a study published in Nature this week. Volcanoes that form away from tectonic plate boundaries are thought to be the surface manifestation of mantle plumes - hot magma rising from deep within the Earth. As tectonic plates move over these ‘hotspots’, new volcanoes progressively form.
Several mechanisms have been proposed to explain the peculiar bend in the Hawaiian-Emperor hotspot track in the north Pacific, but the underlying mantle flow dynamics have remained enigmatic.
Dietmar Muller and colleagues followed the trajectory and tilt of mantle plumes over the past 230 million years using numerical models of mantle convection. They find that flow in the deep lower mantle under the north Pacific was anomalously vigorous between 100 and 50 million years ago and that this occurred as a consequence of long-lasting subduction systems (whereby one plate dives below another plate). Their models suggest that the sharp bend in the Hawaiian-Emperor hotspot track can be explained by the interplay between plume tilt and the lateral migration of plume sources. |
The Identification of Eastern Subterranean Termites
Eastern subterranean termites (Reticulitermes flavipes) are one of many species of termites that are known today.
Eastern subterranean termites are found in tropical and subtropical regions such as deserts and rain forests, as well as in the USA, Australia and Asia.
- How Do Eastern Subterranean Termites Differ from Other Termites?
- Eastern Subterranean Termites Life-Cycle and Hierarchy
- Eating Habits and Behavior
- Signs of Subterranean Termites Infestation
- Eastern Subterranean Termites Treatment
- Non-Chemical Termite Control Methods
- Preventive Measures
- Useful articles
- Helpful video
How Do Eastern Subterranean Termites Differ from Other Termites?
Soldiers. Their bodies are creamy-white; their heads are slightly brownish. Soldiers have no wings and have large jaws.

Workers. They have creamy-colored bodies with no wings and are about ¼ inch or less in length.

Alates (eastern subterranean termite swarmers). The body is dark (brown or black) and approximately ¼ to ½ inch long, with two pairs of wings.
Eastern Subterranean Termites Life-Cycle and Hierarchy
Their life cycle consists of six stages:
- Eggs.
- Immature (nymphs).
- Soldiers, workers or developing winged forms.
- Alates (swarmers).
- Male and female with wings dropped.
- Kings, queens.
Eastern subterranean termites are highly organized eusocial insects with a well-established hierarchy within the colony.

The average nest contains from 10,000 to 1 million individuals.
Eastern subterranean termite classification:
The king and the queen live in a separate chamber and are tended by the workers. They perform the reproductive function.
Immature (nymphs). As the nymphs become larger, they also begin to damage wood. Workers and soldiers take care of them until they mature.
Reproductive caste (swarmers, “alates”). The flying stage that can form a new separate colony. Some alates do not have wings and stay in the colony as the replacement in case the queen or the king dies.
Worker caste (the largest caste). Workers do the most damage to timber and wooden buildings. They perform all the work in the nest: they gather food and care for young nymphs until the nymphs mature.

Soldier caste. They are the defenders of the colony; their principal enemy is marauding ants.
What do eastern subterranean termites look like? See the pictures below:
Eating Habits and Behavior
Eastern subterranean termite facts: they have a well-established social system within their colony, with remarkable engineering capabilities and an acute survival instinct. These termites obtain moisture from decaying timber and soil, and communicate with the help of pheromone signals.
These termites eat wood, and they are the cause of the decomposition of old branches, trees and stumps. The nutrients of these old structures can go back into the soil to be used by new plants as fertilizer.
These termites have an important organism living in their gut, Trichonympha agilis, a protozoan that helps them digest timber. This microorganism does not harm the termite, and the termite would die without it.
A new colony forms when a Queen lays the eggs. The first eggs will hatch into the worker caste. Later, when there are enough workers, the Queen lays the eggs that will hatch into the soldiers that will protect the nest in the future.
Later on, the Queen will lay the eggs that will hatch into the reproductives.
Some reproductives do not have wings and stay in the colony as the replacement in case the queen or the king dies.
Though, most of them will grow into alates. When alates are adult, they will leave the colony to mate and form a new colony.
In spring and in the summer one can see huge swarms of flying “ants”. They are the mating alates.

Warning! This is a real danger sign that a large, mature termite colony has formed a nest near your house. Such a nest can contain hundreds of thousands of subterranean termites within range of infesting the timber of your house.
After the alates mated, they will break off their wings, burrow into the soil and start a new colony. The pair will become the King and the Queen of the new colony.
Signs of Subterranean Termites Infestation
There are various sure signs of infestation that can be found during an inspection of the given areas (advice: for eastern subterranean termite identification you need a good flashlight, a screwdriver or pocketknife, and coveralls).
Termite frass (droppings). Frass looks like sawdust gathered into small brown hillocks. Termite frass is not dangerous to people or animals, but it can nevertheless cause allergic reactions.
The existence of termite droppings indicates active termites in the surrounding area. If frass appears on beds, the termites are probably active in the attic above.
Droppings on window sills indicate termite activity inside the wooden window frame. If you find frass on the floor, there may be termite activity in the basement or behind the walls. Carefully inspect all the corners and underneath the wall bases.
Swarmers. Often called "flying ants", they are attracted to light sources. Most swarmers emerge during the day, most frequently on warm days after rain.
The presence of "flying ants" may indicate that the colony nest is near your dwelling.
Also, it is possible to see swarmers on the soil surface in spring or in the summer immediately after the rain.
Damaged or hollow-sounding wood. Termites can destroy wood from the inside while its external appearance stays unchanged.
If wood sounds hollow when tapped, it may be because termites are eating the wood from the inside out. Damaged wood, when opened, looks layered and rotten and contains a great deal of frass.
Eastern Subterranean Termites Treatment
Nowadays there are various means of subterranean termite control, most of them widely available and quite budget-friendly.
Termiticides – a very effective means of termite treatment. There are several types of termiticide treatment:
Subterranean Termite Treatment Drilling – the principal aim of such treatment is to put a chemical blanket between the termites in the soil and the structure above. In many cases, affected termites will die of dehydration.
Repellent Termiticides: pyrethroids – a type of chemical compound – are fast acting nerve poisons that are highly toxic to termites but have low toxicity to mammals.
Non-Repellent Termiticides or Soil Treatment – these chemicals are not repellent, and termites cannot detect them in the soil.
The other broad treatment category of termite treatment is baiting. Termite baits consist of paper, cardboard, or other palatable food, combined with a slow-acting substance lethal to termites.
Another effective way to get rid of troublesome termites is termite tent fumigation. Fumigation has the advantage of being able to treat every part of the building and its structure immediately.
Be careful! Tent fumigation can be performed only by a specialist. Do not try to do it yourself!
Recommendation: if you do find evidence of subterranean termite infestation, it is highly advisable not to resort to do-it-yourself elimination.
One should immediately contact a local termite expert, because the consequences of such negligence could be irretrievable. Here you can learn more about the effective treatment method called tenting (fumigation): its dangers for termites, preparing for fumigation and cleaning up afterwards, and how long the procedure lasts.
Non-Chemical Termite Control Methods
- Boric acid.
- Cardboard trap.
- Sun or freezing for affected furniture.
- Avoid moisture accumulation near the foundation. Divert water away with properly functioning downspouts, gutters and splash blocks.
- Reduce humidity in crawl spaces with proper ventilation.
- Remove piles of trash and dead trees stumps from the area – such objects can attract termites.
- Avoid direct wood-to-ground contact when building porches or decks.
- Siding, brick veneer or foam insulation should not extend below soil grade.
If you are interested in more information about termites, we recommend the following articles:
- All types of termites. Are they harmful to humans? Can they bite you? And what is the difference between drywood and subterranean ones?
- What do the swarmers of different species look like: drywood, subterranean, formosan?
- Signs of infestation outside and in the house: in walls or furniture.
- What do termite holes look like? What are droppings, and are they toxic to humans? Do termites make noises?
- Possible termite damage: what does it look like? Examples of damage in walls and wood floors.
- All about flying termites: what do they look like, when is swarming season, and what should you do if there are swarmers in your house?
- How do they build nests and mounds? How can you find them in your garden or inside the house?
Photo: a typical termite swarm that occurred inside a downtown building.
Though termites are a real menace to timber and even stone buildings, there are still ways to eliminate a termite nest colony within your yard.
It is not recommended to fight these insects yourself – only a highly experienced expert in termite eradication can help you get rid of termites effectively.
An Introduction to Roman Britain (AD 43–c. 410)
To the Roman world, Britain was an unknown and mysterious land across the sea when Julius Caesar invaded in 55–54 BC. Despite inflicting defeats on the British, Caesar soon made peace with his opponents and returned to Gaul.
For almost a century afterwards the kingdoms of Britain were kept quiet with gifts and diplomacy. But when anti-Roman rulers came to power, the emperor Claudius – in need of a boost to his domestic prestige – launched a full-scale invasion in AD 43, intent on regime change and military glory.
Invasion and Conquest
This time the Romans enjoyed rapid military success. But gradual advance through southern England and Wales was halted in AD 60 by the rebellion of Boudicca, queen of the Iceni of East Anglia, incensed by the brutality of the conquest. The revolt was suppressed, but not before three recently founded Roman cities, Camulodunum (Colchester), Verulamium (St Albans) and Londinium (London), had been burned to the ground.
The advance resumed in AD 70 with the conquest of Wales and the north. The governor Agricola (AD 77–83) even succeeded in defeating the Scottish tribes at the Battle of Mons Graupius in AD 83.
Immediately after this victory, though, troops were pulled out of Britain to deal with invasions on the Danube frontier. As a result, the far north could not be held, and the army gradually fell back to the Tyne–Solway isthmus. It was here that the emperor Hadrian, visiting Britain in AD 122, ordered the building of his famous wall.
The emperor Antoninus Pius tried to reoccupy Scotland and built the short-lived Antonine Wall (AD 140–60). He was ultimately unsuccessful, however, and Hadrian’s Wall became the northern frontier of the province once more.
By now the three legions (army units of up to 6,000 men) remaining in Britain had settled in permanent bases. Auxiliary troops were scattered in smaller forts, mostly across northern England and along Hadrian’s Wall.
In the pacified parts of the province, cities had been founded as capitals for each of the tribal areas (the civitates) into which the Britons had been organised. A network of roads had developed, and landowners in the south began to build Roman-style villas.
Life for most ordinary Britons, who were farmers in the countryside, was slow to change. By degrees, however, they came into contact with villas, towns and markets. Here they could exchange their produce for Roman-style goods and see people dressing and behaving in Roman ways.
DIVISION OF BRITAIN
Shortly after AD 180 there was an invasion by tribes from what is now Scotland, who overran Hadrian’s Wall. Around this time most of the cities of Britain were enclosed within earthen defensive walls, which may have been linked to the invasion.
The Roman Empire was ruled from Britain for a brief period in AD 208–11, when the emperor Septimius Severus came to campaign north of Hadrian’s Wall. Severus divided Britain into two provinces, Britannia Superior (south) and Inferior (north), with capitals at London and York respectively. This prevented too many troops from being concentrated in the hands of a single governor who might have attempted to usurp power.
SAXON SHORE FORTS
Alongside the cities, which acquired stone walls at this time, the 3rd century saw increased numbers of small market towns, villages and villas. Roman objects were now more common in even the poorest rural settlements.
There were still threats to the province. In the north, beyond Hadrian’s Wall, the Picts had emerged as a formidable enemy, while to the south there was a growing threat from seaborne raiders. The so-called Saxon Shore forts around the south-east coast were built towards the end of the 3rd century in response, such as at Caister Roman Fort and Reculver.
Britain was part of the separatist ‘Gallic empire’ from AD 260 until AD 273, and again broke away from Rome under the usurpers Carausius and Allectus (AD 286–96). Emperor Constantius I recaptured the province in AD 296, and when he died in AD 306 after a campaign against the Picts, his son Constantine the Great was proclaimed emperor in York.
The end of Roman rule
After Constantine’s conversion in AD 312, Christianity was adopted more widely across the empire, including in Britain. In the 4th century Britain was reorganised as a ‘diocese’ consisting of four provinces, with military forces under the command of the Dux Britanniarum – the Duke of the Britains. The next 50 years or so were a golden age of agricultural prosperity and villa building, especially in the south-west.
But the later 4th century saw chronic insecurity and the great invasion known as the barbarian conspiracy of AD 367. Confident new building had ceased by the 370s. Repeated attempts to usurp the empire by generals based in Britain (the last being Constantine III in AD 407) drained the diocese of troops. By AD 410 Britain had slipped out of Roman control, its inhabitants left to fend for themselves.
The Corbridge Lion and Changing Beliefs in Roman Britain
Lions were commonly used as sacred symbols in Roman memorials, but the Corbridge lion is different. Find out what this extraordinary sculpture tells us about changing beliefs in Roman Britain.
Uncovering the Secrets of Hadrian's Wall
The remains of Birdoswald Roman Fort have revealed more about Hadrian’s Wall than any other site along the Wall.
Mithras and Eastern Religion on Hadrian’s Wall
A remarkable sculpture of Mithras found on Hadrian’s Wall reveals religious and military connections with distant parts of the Roman Empire.
The Mysterious Absence of Stables at Roman Cavalry Forts
How recent archaeological excavations on Hadrian’s Wall have revealed why it has always been so difficult to discover where Roman soldiers kept their horses.
Domestic Violence on Hadrian's Wall
A pair of skeletons at Housesteads Fort reveal a brutal side of everyday life in Roman Britain.
Votive Body-Parts, Eye Infections and Roman Healing
How a unique collection of finds reveals that the Roman city of Wroxeter had a temple dedicated to a god with the power to cure eye diseases.
The Mysteries of Corbridge
From strange heads on pots to missing temples, there are many things about Corbridge Roman Town that continue to puzzle us.
Country Estates in Roman Britain
An introduction to the design, development and purpose of Roman country villas, and the lifestyles of their owners.
More about Roman England
Rome's success was built on the organised and practical application of ideas long known to the ancient world.
Romans: Daily Life
The daily experiences of most people in Britain were inevitably touched by its incorporation into the Roman Empire.
Most people in Roman Britain made their livings from a mixture of subsistence farming and exchange of specialist goods.
Romans: Food and Health
How the Roman conquest changed how people in Britain ate, and how they looked after their health.
Discover how, where and why a vast network of roads was built over the length and breadth of Roman Britain.
The Romans were tolerant of other religions, and sought to equate their own gods with those of the local population.
What kind of landscape did the Romans find when they conquered Britain, and what changes did they make?
Romans: Power and Politics
Britain was one of some 44 provinces which made up the Roman Empire at its height in the early 2nd century AD.
Previous Era: Prehistory
Prehistory is the time before written records. It's the period of human history we know the least about, but it's also the longest by far.
Next Era: Early Medieval
The six and a half centuries between the end of Roman rule and the Norman Conquest are among the most important in English history. This long period is also one of the most challenging to understand – which is why it has traditionally been labelled the ‘Dark Ages’. |
Home Fire Safety
House fires happen every day. The biggest tragedy is that the majority of house fires are preventable.
Who's most at risk of dying in a house fire?
The Australasian Fire Authorities Council (AFAC) published a report in March 2005 detailing the findings of a national study into residential fire deaths in Australia and New Zealand.
The AFAC Report indicates that the groups of people most at risk of dying in house fires are:
- children under the age of 4 years old
- people over the age of 65 (with vulnerability increasing with age)
- adults affected by alcohol.
General findings show that more deaths occurred during sleeping hours of the cooler months, May to September.
Most fires occurred in owner-occupied houses. Fires were mainly caused by:
- electrical faults
- smoking materials
- open fires
Smoke alarms were not fitted in most of the homes where deaths occurred. Of the homes that did have them, 31% of the alarms were not working.
If you have people in your care in these vulnerable groups we recommend you read the following information and put measures in place to ensure the risk to these people is minimised. If you can't find the answer to your questions within these pages please contact us for assistance.
"Accidental Fire Fatalities in Residential Structures - Who's at Risk?" (March 2005) Australasian Fire Authorities Council, Melbourne, Australia.
Smoke alarms are compulsory for all residential buildings in South Australia.
Home owners are required, by Regulation 76B of the Development Act, 1993, to install battery powered or hard-wired (240 volt mains powered) smoke alarms.
Houses built since 1 January 1995 must be equipped with hard-wired smoke alarms. All other houses must be equipped with at least 9 volt battery powered smoke alarms. When a house with 9 volt battery powered smoke alarms is sold the new owner has six months to install alarms which are hard-wired to the 240 volt power supply or powered by 10 year life, non-replaceable, non-removable batteries.
Penalties apply for non-compliance.
Why do you need a smoke alarm?
Smoke obscures vision and causes intense irritation to the eyes. This, combined with the effects of the poisons in the smoke, can cause:
- impaired judgement
This reduces your ability to find an exit.
Most fire-related deaths result from inhaling toxic fire gases rather than from direct contact with flame or exposure to heat.
Correctly located smoke alarms in your home give early warning of fire, providing you with the precious time which may be vital to your survival.
Home fire escape plan
The installation of smoke alarms forms one part of a Home Fire Escape Plan. It is vitally important that every family has a complete Home Fire Escape Plan which all occupants of your household practise and understand.
Surviving a house fire
Have a plan
- Conduct fire drills with the whole family.
- Agree on a place to meet outside.
If fire strikes
- Get everybody out of the house.
- Meet at the designated place.
- Call the Fire Service on 000.
- Do not go back inside.
- If the fire is small and localised, extinguish the fire if it is safe to do so
- Keep wallets and handbags easily accessible
Stop, drop, cover and roll
- If clothes catch fire: STOP, DROP AND ROLL to smother flames while covering your face with your hands.
- To help someone else, throw a woollen blanket over them if they catch alight.
If there's smoke get down low and go, go, go
- In a fire the safest area for breathing is near the floor where the air is cooler and cleaner.
- Get down low and crawl to safety.
Know basic first aid
- Clean cold water cools burns and lessens the pain.
- Do not use butter, ice, cotton wool or ointments on burns.
- Do not remove burnt clothing from skin.
Install home fire fighting equipment
- Every home should have a properly maintained fire extinguisher and fire blanket.
Common causes of house fires
Cooking:
- Never leave the stove unattended.
- Check that electric cords, curtains, tea towels and oven cloths are at a safe distance from the stove top.
- Be careful of long flowing sleeves contacting gas flames.
Electric blankets:
- Do not sleep with electric blankets on or leave the house without switching them off.
- Never leave weighty objects on the bed when the electric blanket is on.
- Have your blanket checked by an authorised repairer if you suspect overheating.
- Always follow the manufacturer's instructions for care and storage.
- Inspect each blanket for wear and tear at the beginning of the cooler months.
Electrical wiring:
- Always use a qualified electrician.
- Double adaptors and power-boards can overload power points.
- Install safety switches and correct fuses.
Smoking in bed:
- Smoking in bed can be fatal - tiny embers can smoulder unnoticed and burst into flame much later.
Lights and lamps:
- Check light fittings for heat build-up.
- Discard lampshades that are close to light globes and lamp bases that can be knocked over easily.
- Ensure recessed downlights are properly insulated from wood panelling or ceiling timbers.
Flammable liquids:
- Store all flammable liquids such as petrol, kerosene and methylated spirits away from heat.
- Always check the label before use and storage.
- Use extreme care when pouring.
Clothes dryers:
- Always clean lint filters after each load.
- Avoid drying bras in your dryer as the underwire can get caught and start a fire.
Candles:
- Never leave burning candles unattended. Do not sleep with a burning candle.
- Make sure curtains and other flammable items are well away from burning candles.
Portable and fixed appliances:
- Make sure all appliances are professionally installed.
- Check that walls and floors are insulated from heat sources.
- Be careful where you place portable appliances.
Open fires and heaters:
- Never leave an open fire alight when you leave the house or go to bed.
- Place a mesh guard in front of open fires.
- Have your chimney and flue cleaned annually.
- Never leave children unattended near fires and heaters.
- Clothing should not be dried close to heaters or fires.
Children and fire:
- Warn all children about playing with fire.
- Keep all matches, lighters and candles out of reach of small children.
- Teach young children to bring matches or lighters they find to an adult immediately.
- Teach older children that matches are a tool to be used in the presence of adults.
- Brief your babysitter on your fire plan - make sure they know all exits and emergency telephone numbers. Make sure the babysitter understands fire survival techniques.
Wheat bags or wheat pillows are often used to provide relief from the body's aches and pains, but if they are used incorrectly they are also a fire and burn hazard.
Wheat bags are fabric bags filled with wheat (or other grains). They are heated in a microwave oven and then placed on the body to apply warmth.
Fires have occurred when wheat bags have been used as 'hot water bottles' to warm a bed. Further examples have been reported where wheat bags were found to be smouldering after they have been over-heated in a microwave oven.
Wheat bags have been used for many years as an inexpensive, convenient and reusable winter warmer and heat treatment for sore muscles. You just pop one in the microwave, heat for a couple of minutes and it's ready to use. However, wheat retains heat for a long time and the bags can be dangerous if used incorrectly.
If a wheat bag is over-heated the chance of ignition is greater if the wheat bag is insulated with blankets or a quilt when being used to warm a bed.
In addition, burns to the skin may occur with an over-heated wheat bag especially if it is being used on a baby, young child or an elderly person.
We recommend you consider the following fire safety guidelines when using wheat bags:
- Do not overheat wheat bags. Follow the manufacturer's instructions
- Use wheat bags only as a heat pack for direct application to the body. Don't use them as bed warmers
- Do not use wheat bags in bed - there is a chance you could fall asleep whilst they are in use
- Use wheat bags with extreme caution with the elderly
- Do NOT use wheat bags with babies or young children
- Do not reheat until the wheat bag has completely cooled. Reheating before the bag has cooled may be just as dangerous as overheating
- Watch for these signs of over-use: an over-cooked odour; a smell of burning; or, in extreme cases, smoking and/or charring. Discard the wheat bag after cooling if you observe any of these signs
- Do not put wheat bags into storage until they are cold. Leave them to cool on a non-combustible surface such as a kitchen sink |
- June 28, 2016
Old Vanilla – that was the name given to the huge Western Yellow Pine tree outside my childhood window that smelled of vanilla cookies on dry summer days – died when I was eight years old. I loved that tree. To my surprise and delight, instead of cutting it down, my dad manicured it into a standing dead tree, which I'd later learn is called a snag. For the next fifteen years I watched the tree slowly decompose. It sprouted shelf mushrooms, attracted insect-eating Pileated Woodpeckers, housed Mountain Chickadee families, sheltered dreys of squirrels, and provided perches for innumerable flying creatures. The tree was as alive as a snag as it had been when it was living.
Dead trees nurture new life in ecologically important ways. Without snags, some 85 species of North American birds, plus numerous small mammals, insects, fungi, and lichens, would be without valuable habitat. Snags are nature's apartment complexes and cafeterias. In many places, a thriving, healthy habitat means snags are part of the equation.
Decomposition of a tree starts quickly when disease, damage, fire, etc., affect the outer cambium of the bark. Fungi move in and begin the critical work of breaking the tree down into vital nutrients to replenish soil. The rate of decomposition is influenced by moisture, sun/shade exposure, forest age, species composition (hardwood vs. softwood), and animal activities. The rotting process creates ideal soft outer layers in the hard trunk for primary cavity nesters to excavate.
Woodpeckers and sapsuckers are species of birds known as primary cavity nesters. They are the ones that actually create the holes, or cavities, in dead wood in which to build nests and rear their young. These birds are also experts at hammering holes into the bark of trees to gain access to bark-dwelling insects and sap and even drum on trees to defend nesting territories.
Woodpeckers in different regions of the United States will create cavities in soft or hardwoods, depending on the species of trees present and the birds' habitat preferences. Changes in forest make-up due to disease, human intervention, and climate change can have a dramatic impact on the birds that live there. For instance, we know Ponderosa Pine forests in the Pacific Northwest saw a decline in primary cavity nesting birds when snags were actively removed from forests. Privately held forests tended to have even fewer-than-average snags, leading to the hypothesis that private management practices have huge potential to impact populations. Even when it seems that what you choose to do on your small property can't possibly scale to levels that matter, the evidence suggests otherwise.
As the field of forestry emerged, the desire to prevent fires and raise trees for harvest actively led to the practice of removing snags. Snags were viewed as ecological menaces. Dead trees were, and are, intentionally removed from forests to minimize the risk of fire and reduce the population of bark beetles infecting healthy trees. This destabilized the interdependencies between trees, insects, birds, and other wildlife that rely on snag habitat.
By consuming bark beetles and other tree-eating insects, cavity nesting birds help create a balance in insect populations. When snags are eliminated, the potential for insect invasions increases: the cavity nesters lose habitat and can't reproduce at high enough rates to keep up with insect populations. Sometimes this isn't a major ecological issue, because bark beetles only attack weakened or dying trees, leaving healthy trees alone; but this is not true for all nonnative beetles, nor is it true for certain native species, like the mountain pine beetle. These species, when given the opportunity, will advance on healthy trees when their populations rise in the absence of sufficient predation and during the extended foraging seasons induced by climate change.
We may think of an owl snug in their hole in a tree, but they don’t make those holes. Those are thanks to the woodpeckers.
Secondary cavity nesters, such as certain species of chickadees, owls, nuthatches, creepers, ducks, bluebirds, flycatchers, swallows, titmice, wrens, and warblers, all benefit from the hole-creating activities of the primary cavity nesters. Abandoned woodpecker nests and drilling sites will quickly be adopted by secondary nesters. Without the efforts of the woodpeckers and sapsuckers, many of these secondary nesters would lack the required habitat to raise their young. This is why primary cavity nesters are considered keystone species, as they provide essential nesting cavities for other species and their absence from an ecosystem has devastating effects. An example of this is in eastern pine forests, where there is a positive correlation between areas with snags and all nesting birds. Similar findings also occur in western Ponderosa Pine forests.
Features of Quality Snags
- Large diameter, tall trees
- Existing woodpecker holes or cavities
- Fungal Conks (mushrooms) present
- Wounds or scars from fire or lightning present
- Dead areas on living trees
- Both sound and decayed wood
- For larger land area management, maintain snags in areas of both low and high tree density and across a range of topography (ridges, slopes, and valleys)
- Snags arranged solitary or in small clumps of up to ten
Snags support more than just bird diversity. In fact, in Estonian peatland forests 25% of lichens live only on snags. Similarly, a majority of flying squirrels studied in Ontario, Canada used declining trees for nesting, while another quarter of the population relied exclusively on snags.
Tree-roosting bats use snags as well. Bats largely prefer large snags with pieces of exfoliating bark. They will crawl behind pieces of bark to roost and raise young. Bat houses, like nest boxes, are an option to supplement when snags aren't present, but preserving snags in forest ecosystems is vital for protecting wildlife diversity at a larger scale.
How many snags are needed to support wildlife? The answer to this question depends on the ecology of a specific region. Researchers, when pressed, will say one to three snags per acre are sufficient; but optimum snag density probably varies by region. Knowing the local flora and fauna and the interactions between trees, fire, tree-eating insects, primary nesters, sapwood decay, and other requirements of local wildlife are important when making recommendations about snag numbers and density. If you own several acres of forest, this is the perfect question to work with a forester on when creating a wildlife management plan for your property.
When we envision snags in the landscape we usually think of larger forested areas. Can snags work in an urban setting to support birds as well? Evidence suggests that cavity nesting birds do well in any setting as long as snags are present. This same research found that the bigger the snag, the more successfully it was used for nesting. Tall dead trees can pose a problem for urban areas where infrastructure and people can be threatened by a falling tree. In instances where a falling tree directly threatens a building or cars, we recommend manicuring a dead or dying tree into an artificial snag approximately six feet in height. Or, if it doesn't put a building at risk, you can leave it as is, or select a height appropriate to the location of the tree.
Skeptical a snag can look good?
Techniques for making a snag from a dead or dying tree in an urban area include girdling and tree topping. Any work involving tree pruning or modification should be done by an expert, especially in an urban area where mistakes could result in property damage.
Making an Artificial Snag from a Diseased or Dying Tree
- Wait until nesting season is over before creating a snag, September-December is a safe time in most of N. America to avoid disrupting nesting birds and wildlife.
- A snag in urban areas should be approximately 6 feet tall and not endanger infrastructure or people if it were to fall.
- A snag in a forest or more rural area can be left taller, as long as it is not a hazard.
- Have a professional girdle or top the tree.
- Prune the tree in a way that mimics how it would look in the wild.
- Keep some living branches to slow the decay.
- Avoid creating snags near structures or roads.
- Manage and prune off new growth around the tree base.
- Create cavities for birds by drilling various size holes.
- Cut long slits in the top of the tree and peel back areas of the bark to create roosting areas for bats.
At the Presidio National Park, artists and ecologists collaborated on a project to create a fully functional snag replacement. It included nesting cavities for the Pygmy Nuthatch and Bumblebee, a cantilevered perch for the Black Phoebe, louvered crevices for the Yuma Myotis bat, cover logs for the California Slender Salamander, and a hibernaculum, or winter residence, for the Coast Garter Snake.
Snags are vital features for wildlife in urban, suburban, and rural areas. Leaving them standing to naturally decay in the landscape requires us to rethink what a beautiful, well-tended yard or home entails. This practice of leaving dead wood benefits your native wildlife and may just attract some new visitors to your yard.
Have a Snag?
Add it to your Map
After completing your site outline and overlaying habitats, add a snag as an object by zooming in to the location on the map where you'd like to add it.
In the tool shed, choose “Third” and scroll through the object options until you come to snag.
Add the object to your map and tell us more about it by clicking the green Info button. Upload a picture and tell us what lives in your snag. |
Training to improve an athlete's performance obeys the three principles
of training, summarised here.
Specificity
To improve the range of movement for a particular joint action, you have
to perform exercises for the specific mobility requirements of a given event. The coach can analyse the technique of his/her
event, identify which joint actions are involved and determine which need to be improved in terms of the range of movement.
A thrower, for example, might require improvements in his/her shoulder and spine mobility. A hurdler might need to develop
his/her hip mobility.
The amount and nature of the mobility training required by each athlete
will vary according to the individual athlete's event requirements and his/her individual range of movement for each joint
action. It may be necessary to measure the range of movement for particular joint actions to determine the present range and the improvement required.
Specificity is an important principle in strength training, where the exercise
must be specific to the type of strength required, and is therefore related to the particular demands of the event. The coach
should have knowledge of the predominant types of muscular activity associated with his/her particular event, the movement
pattern involved and the type of strength required.
Although specificity is important, it is necessary in every schedule to
include exercises of a general nature (e.g. power clean, squat). These do not relate too closely to the movement of any athletic
event. They do, however, give a balanced development, and provide a strong base upon which highly specific exercise can be built.
When an athlete performs high velocity strength work, the force he/she generates
is relatively low and therefore fails to stimulate substantial muscular growth. If performed extensively the athlete may not
be inducing maximum adaptation with the muscles. It is important therefore for the athlete to use fast and slow movements
to fully train the muscles.
Overload
When an athlete performs a mobility exercise he/she should stretch to the
end of his/her range of movement. In active mobility the end of the range of movement is known as the active end position.
Improvements in mobility can only be achieved by working at or beyond the active end position.
Passive exercises involve passing the active end position, as the external force is able to move the limbs further than the active contracting of the protagonist muscles. Kinetic mobility exercises use the momentum of the movement to bounce past the active end position.
A muscle will only strengthen when forced to operate beyond its customary
intensity. The load must be progressively increased in order to further adaptive responses as training develops and the training
stimulus is gradually raised. Overload can be progressed by increasing these factors:
- The resistance e.g. adding 5kg to the barbell
- The number of repetitions with a particular weight
- The number of sets of the exercise (work)
- The intensity – more work in the same time, reducing the recovery periods
Reversibility
Improved ranges of movement can be achieved and maintained by regular use
of mobility exercises. If an athlete ceases mobility training, his/her ranges of movement will decline over a period of time
to those maintained by his/her other physical activities.
When training ceases, the training effect will also stop. It gradually reduces at approximately one third of the rate of acquisition.
Athletes must ensure that they continue strength training throughout the competitive period, although at a much reduced volume, or newly acquired strength will be lost.
THE LEARNING PROCESS
The coach will be required to facilitate the learning of new technical skills
by the athletes. To achieve this the coach will need to develop his/her knowledge of the learning process and the various teaching methods.
Whole Instruction
Ideally a skill should be taught as a whole as the athlete can appreciate
the complete movement and execution of a skill. The whole method of instruction can sometimes mean the athlete having to handle
complex movements e.g. the whole high jump technique.
Part Instruction
When a skill is complex or there is considered to be an element of danger
for the athlete, then it is more appropriate to break down the complex movement into its constituent parts. The parts can then
be taught and then linked together to develop the final skill.
When part instruction is used, it is important to demonstrate the whole skill to the athlete so that they can appreciate the end product and understand how the set of parts will combine to develop the final skill.
Whole - Part - Whole Instruction
Initially the athlete attempts the whole skill and the coach monitors to
identify those parts of the skill that the athlete is not executing correctly. Part instruction can then be used to address
the limitations and then the athlete can repeat the whole skill with the coach monitoring for any further limitations. No
one method is suitable to all occasions, but studies have shown that:
- Simple skills (and perhaps 'simple' is relative to each individual) benefit from the whole method
- Skills of intermediate difficulty benefit from the part method
- Closed skills are often taught with part instruction
- Difficult skills are best dealt with by oscillating between part and whole
Types of skill
There are a number of different types of skills:
- Cognitive - or intellectual skills that require thought processes
- Perceptual - interpretation of presented information
- Motor - movement and muscle control
- Perceptual motor - involve the thought, interpretation and movement skills
How to teach a new skill
The teaching of a new skill can be achieved by various methods:
- Verbal instructions
- Photo sequences
The Learning Phases
There are three stages to learning a new skill and these are:
- Cognitive phase: identification and development of the component parts of the skill
- Associative phase: linking the component parts into a smooth action
- Autonomous phase: developing the learned skill so that it becomes automatic
The learning of physical skills requires the relevant movements to be assembled
component by component, using feedback to shape and polish them into a smooth action. Rehearsal of the skill must be done
regularly and correctly.
Appropriate drills should be identified for each athlete to improve specific
aspects of technique or to correct faults. Drills should not be copied slavishly but should be selected to produce a specific
effect; e.g. running drills are used to develop important components of proper and economical running technique. Whichever
drills are used they must be correct for the required action and should be the result of careful analysis and accurate observation.
HOW TO ASSESS PERFORMANCE
Initially, compare visual feedback from the athlete's movement with the
technical model to be achieved. Athletes should be encouraged to evaluate their own performance. In assessing the performance
of an athlete consider the following points:
- Are the basics correct?
- Is the direction of the movement correct?
- Is the rhythm correct?
It is important to ask athletes to remember how it felt when correct examples
of movement are demonstrated (kinaesthetic feedback). Appropriate checklists/notes can be used to assist the coach in the
assessment of an athlete's technique.
How faults are caused
Having assessed the performance and identified that there is a fault, you
need to determine why this is happening. Faults can be caused by:
- Incorrect understanding of the movement by the athlete
- Poor physical abilities
- Poor co-ordination of movement
- Incorrect application of power
- Lack of concentration
- Inappropriate clothing or footwear
- External factors e.g. weather conditions
Strategies and Tactics
Strategies are the plans that we prepare in advance of a competition to
place an individual or team in a winning position. Tactics are how we put these strategies into action. Athletes in the associative
phase of learning will not be able to cope with strategies but the athlete in the autonomous phase should be able to apply
strategies and tactics. To develop strategies and tactics we need to know:
- The strengths and weaknesses of the opposition
- Our own strengths and weaknesses
- Environmental factors
An Eastern European Approach
Consideration must be given to the approach adopted by the former Eastern
Bloc countries to technique training. The aim is to identify the most fundamental version of a technique, one that is basic
and essential to more advanced techniques. For example, in the shot put the basic model would be the stand and throw; more advanced would be the step and throw, followed finally by the rotation method.
This fundamental component is taught first and established as the basis
for all further progressions. Deriving from the fundamental component are exercises that directly reinforce the required movement
patterns. These exercises are known as first degree derivatives. They contain no variations of movement that may confuse the athlete.
THE FOUR TYPES OF PRACTICE
There are four types of practice:
- Variable - the skill is practiced in the range of situations that could be experienced. Open skills are
best practiced in this way
- Fixed - a specific movement is practiced repeatedly, known as a drill. Closed skills are best practiced
in this way
- Massed - a skill is practiced without a break until the skill is developed. Suitable when the skill is
simple, motivation is high, purpose is to practice a skill, and the athletes are experienced
- Distributed - breaks are taken whilst developing the skill. Suitable when the skill is new or complex,
fatigue could result in injury, motivation is low or poor environmental conditions
Distributed practice is considered to be the most effective.
THE FIVE STEPS TO SUCCESSFUL COMMUNICATION
Communication is the art of successfully sharing meaningful information
with people by means of an interchange of experience. Coaches wish to motivate the athletes they work with and to provide
them with information that will allow them to train effectively and improve performance. Communication from the coach to athlete
will initiate appropriate actions. This however, requires the athlete to not only receive the information from the coach but
also to understand and accept it. Coaches need to ask themselves:
- Do I have the athlete's attention?
- Has the athlete understood?
- Does the athlete believe what I am telling him/her?
- Does the athlete accept what I am saying?
- Am I explaining myself in an easily understood manner?
How to interpret non-verbal messages
At first, it may appear that face-to-face communication consists of taking
it in turns to speak. While the coach is speaking the athlete is expected to listen and wait patiently until the coach finishes.
On closer examination it can be seen that people resort to a variety of verbal and non-verbal behaviour in order to maintain
a smooth flow of communication.
Such behaviour includes head-nods, smiles, frowns, bodily contact, eye movements,
laughter, body posture, language and many other actions. The facial expressions of athletes provide feedback to the coach.
Glazed or down turned eyes indicate boredom or disinterest, as does fidgeting. Fully raised eyebrows signal disbelief and
half raised indicate puzzlement. Posture of the group provides a means by which their attitude to the coach may be judged
and acts as a pointer to their mood. Control of a group demands that a coach should be sensitive to the signals being transmitted
by the athletes. Their faces usually give a good indication of how they feel, and a good working knowledge of the meaning
of non-verbal signals will prove invaluable to the coach.
Difficulties in communicating with an athlete may be due to a number of issues,
including the following:
- The athlete's perception of something is not the same as yours
- The athlete may jump to a conclusion instead of working through the process of hearing, understanding and accepting
- The athlete may lack the knowledge needed to understand what you are trying to communicate
- The athlete may lack the motivation to listen to you or to convert the information given into action
- The coach may have difficulty in expressing what he/she wants to say to the athlete
- Emotions may interfere in the communication process
- There may be a clash of personality between you and the athlete
These blocks to communication work both ways and coaches need to consider
the process of communication carefully.
The six elements of effective communication
Effective communication contains six elements:
- Clear: Ensure that the information is presented clearly
- Concise: Be concise, do not lose the message by being long winded
- Correct: Be accurate, avoid giving misleading information
- Complete: Give all the information and not just part of it
- Courteous: Be polite and non-threatening, avoid conflict
- Constructive: Be positive, avoid being critical and negative
When coaches provide information to the athlete which will allow him/her
to take actions to effect change it is important that they provide the information in a positive manner. Look for something
positive to say first and then provide the information that will allow the athlete to effect a change of behaviour or action. Coaches should:
- Develop their verbal and non-verbal communication skills
- Ensure that they provide positive feedback during coaching sessions
- Give all athletes in their training groups equal attention
- Ensure that they not only talk to their athletes but they also listen to them as well
Improved communication skills will enable both the athlete and coach to
gain much more from their coaching relationship. |
Simple Definition of conform
: to be similar to or the same as something
: to obey or agree with something
: to do what other people do : to behave in a way that is accepted by most people
Full Definition of conform
transitive verb
: to give the same shape, outline, or contour to : bring into harmony or accord <conform furrows to the slope of the land>
intransitive verb
1 : to be similar or identical; also : to be in agreement or harmony —used with to or with <changes that conform with our plans>
2 a : to be obedient or compliant —usually used with to <conform to another's wishes> b : to act in accordance with prevailing standards or customs <the pressure to conform>
Examples of conform in a sentence
Most teenagers feel pressure to conform.
<the list conforms with the contents of the trunk>
Origin of conform
Middle English, from Anglo-French conformer, from Latin conformare, from com- + formare to form, from forma form
First Known Use: 14th century
Synonym Discussion of conform
CONFORM Defined for Kids
Word Root of conform
The Latin word forma, meaning “form” or “shape,” gives us the root form. Words from the Latin forma have something to do with shape. The form of a person or thing is its shape. To conform is to fit in with others in form, shape, or manner. Something formal, such as dinner, follows a specific custom or form. The format of something, such as a book, is its general shape and arrangement.
[Why does leap year exist?]
One day is defined by the Earth's rotation, and one year by the Earth's revolution around the Sun. One revolution of the Earth takes not 365 days but, more precisely, 365.2422 days. It is the leap year that adjusts for this decimal fraction.
[How to calculate]
If a year is divisible by 4 without remainder, it is a leap year; otherwise it is a common year.
In addition, if a year is divisible by 100 without remainder, it is a common year.
However, if a year is divisible by 400 without remainder, it is a leap year.
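Expressed as code, the three rules nest naturally, with the 400-year test taking precedence over the 100-year test, which in turn overrides the 4-year test. Here is a minimal Python sketch of the rule as stated above (the function name and the checks are mine, added for illustration):

```python
def is_leap(year: int) -> bool:
    """Gregorian leap year rule, as stated above."""
    if year % 400 == 0:    # divisible by 400 -> leap year
        return True
    if year % 100 == 0:    # divisible by 100 (but not 400) -> common year
        return False
    return year % 4 == 0   # otherwise: divisible by 4 -> leap year

# 1900 was a common year, 2000 a leap year:
assert not is_leap(1900) and is_leap(2000)
# The rule yields 97 leap years in every 400-year cycle:
assert sum(is_leap(y) for y in range(2000, 2400)) == 97
```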
[Ground of calculation]
Since the error is 0.2422 day per year, it accumulates to 0.9688 day over four years. To compensate for this error, we have a leap year of 366 days once every four years.
There then remains an error of -0.0312 day (-0.0078 day as an annual average). Since this error of -0.0078 day per year accumulates to -3.12 days over 400 years, three years that would otherwise be leap years are kept as common years in every 400-year span.
That is the basis of the leap year calculation. There still remains an error of -0.12 day per 400 years (-0.0003 day as an annual average), which will in turn accumulate into an error of one day after roughly 3300 years, that is, around the year 4882 (3300 years after 1582, when the leap year system was established). According to one book, "An error of one day in itself does not concern people today." I expect that the human beings of the 49th century will manage to solve this problem.
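The arithmetic above can be checked directly: 97 leap days per 400-year cycle (100 four-yearly leap days, minus the 4 century years, plus the 1 year divisible by 400) give an average calendar year of 365.2425 days, about 0.0003 day longer than the astronomical 365.2422 days. A small sketch using the figures from the text:

```python
# 400 Gregorian years contain 97 leap days: 100 - 4 + 1.
days_per_cycle = 400 * 365 + 97
avg_year = days_per_cycle / 400       # 365.2425 days
residual = avg_year - 365.2422        # ~0.0003 day per year, as stated above
print(round(avg_year, 4), round(residual, 4))   # 365.2425 0.0003
print(round(1 / residual))            # ~3333 years for the error to reach one day
```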
[History of leap year]
This system was established by Pope Gregory XIII in 1582.
Gematria is the practice of systematically assigning numeric values to words. Each letter of the alphabet represents a particular number, and the value of any given word or phrase can be determined by adding up its constituent letters. Gematria was most likely invented by the Greeks, who called it isopsephia, or perhaps by the Jews. It is a key component of Kabbalah (so much so that Aleister Crowley and others have mistakenly used kabbalah as a synonym for gematria) and is also of interest to Christians because of its connection with the Book of Revelation. When John writes of the beast that "the number of his name" is 666, he presumably has some form of gematria in mind — but what form? Greek? Hebrew? Could there be such a thing as English gematria?
The Greek and Hebrew systems of gematria have a certain naturalness to them because, unlike Roman numerals, Greek and Hebrew numerals make use of the entire alphabet. In both systems, the first nine letters represent the numbers from 1 to 9, the next nine stand for the tens (10 to 90), and the remaining letters represent the hundreds. These numeral systems are well-established outside the context of gematria and therefore do not seem arbitrary. The Latin alphabet is another story. Some have attempted a gematria of Roman numerals (for example, adding up the Roman numerals in the supposed papal title Vicarius Filii Dei to prove that the pope is the antichrist), but such a system is unsatisfying because it simply ignores two-thirds of the alphabet. Another option is to imitate the 1-9/10-90/100-900 system of Greek and Hebrew, but this seems arbitrary and contrived when applied to English. Perhaps the most natural system would be A=1, B=2, … Z=26, but even this is slightly arbitrary because it is merely one possibility among many; it's also undesirable for beast-hunting purposes because only very long phrases can add up to as big a number as 666.
Scott Branson came up with the idea of taking a list of words and phrases he considered antichrist-related (such as mark of the beast, necromancy, vaccination, and Confucius) and using a computer program to find the gematria code that would maximize the number of such words adding up to 666. The Translucent Amoebae Consortium's Cabbage Codes are another example of the same thing. These systems "calibrate" the gematria code by testing it against a target set of words. If the code gives the "correct" value (666, in this case) for those words, it must be a good code.
The set of words used remains rather arbitrary, though. My own system, Calibrated Gematria, attempts to be more objective. The target set used to calibrate the code consists of English number words. People will inevitably have different ideas about which words ought to add up to 666, but everyone can agree that the best value for "one" is 1, the best value for "fourteen" is 14, and so on.
The Calibrated Gematria Code
The code used in Calibrated Gematria is as follows:
| A = 6 | H = 29 | O = -9 | V = 8 |
| B = 8 | I = -11 | P = -13 | W = 11 |
| C = -5 | J = -14 | Q = -29 | X = 28 |
| D = -24 | K = 24 | R = -26 | Y = -1 |
| E = 0 | L = -7 | S = -11 | Z = 35 |
| F = 8 | M = -12 | T = 0 | |
| G = -10 | N = 10 | U = 31 | |
The code itself may look arbitrary, but the results are impressive: zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, six = 6, seven = 7, eight = 8, nine = 9, ten = 10, eleven = 11, twelve = dozen = 12, fourteen = 14, fifteen = 15, sixteen = 16, seventeen = 17, eighteen = 18, nineteen = 19, twenty = 20, twenty-one = 21, twenty-two = 22, twenty-three = 23, twenty-four = 24, twenty-five = 25, twenty-six = 26, twenty-seven = 27, twenty-eight = 28, twenty-nine = 29. In addition, the names of playing cards have their correct values: ace = 1, deuce = 2, jack = 11, queen = 12, and king = 13.
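These values are easy to verify mechanically. Below is a minimal Python sketch that encodes the table and checks a few of the claims; the dictionary layout and the helper name `value` are mine, not part of the system:

```python
CODE = {
    "A": 6, "B": 8, "C": -5, "D": -24, "E": 0, "F": 8, "G": -10,
    "H": 29, "I": -11, "J": -14, "K": 24, "L": -7, "M": -12, "N": 10,
    "O": -9, "P": -13, "Q": -29, "R": -26, "S": -11, "T": 0, "U": 31,
    "V": 8, "W": 11, "X": 28, "Y": -1, "Z": 35,
}

def value(phrase: str) -> int:
    """Sum the letter values; spaces, hyphens, and case are ignored."""
    return sum(CODE[c] for c in phrase.upper() if c in CODE)

assert value("zero") == 0 and value("one") == 1 and value("twelve") == 12
assert value("dozen") == 12 and value("twenty-nine") == 29
assert value("six six six") == 18
assert value("thirteen") == 2   # the notable exception discussed below
```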
Larger numbers do not have the correct values, but that is to be expected. The gematria stops working at the point at which the number system begins to use multiplication as well as addition. The -teen suffix means "add ten," so it's easy to accommodate; -ty, though, means "multiply by ten" — something which gematria, being based solely on addition, cannot properly deal with. We can fudge it for twenty by choosing an appropriate value for the letter Y, but it's impossible to continue the pattern for thirty, forty, and so on. Larger numbers can be expressed indirectly, though, by using the word plus, which conveniently equals 0. Thus, two plus two = 4, twenty-nine plus eighteen = 47, and so on.
Again because gematria is addition-based, non-addition mathematical expressions do not reliably yield the correct number, but there are a few that do so by chance: thirteen minus two = 11, seven times negative three = -21, and minus six times three = 18.
The Devil's Number
Calibrated Gematria appears to have two defects. One is that thirteen, unlike every other number word below thirty, does not have the correct value — a flaw in an otherwise perfect system. The other is that many of the letters have negative values, making it practically impossible to get as large a number as 666 — and what good is gematria if you can't use it to accuse people of being the antichrist? But these two problems, as it turns out, solve each other.
Calibrated Gematria does not identify the number 13 directly because to say "thirteen = 13" in so many words would be to violate the taboo that always goes with deeply unlucky things. But when we look at indirect ways of pointing to that number, we find that Calibrated Gematria is hardly ignoring 13. Far from it: number thirteen = 13, after twelve and before fourteen = -13, thirteen plus zip = 13, baker's dozen = 13, and jinx = 13, for example. Perhaps most appropriately of all, triskaidekaphobia — the technical term for the superstitious fear of the number thirteen (i.e., a very negative view of the number) — adds up to -13.
Nor is Calibrated Gematria silent on the number of the beast. Actually, there are two such numbers to consider, because some New Testament manuscripts give the beast's number as 616 rather than the more familiar 666. In Calibrated Gematria, six one six = 13, and six six six = 18. Where can you find the number of the beast in your Bible? That's right, Revelation 13:18.
Calibrated Gematria repeatedly identifies 13 as a sort of stand-in for 666: six hundred sixty-six = 13; the beast's mark = 13; the great beast Satan = 13; a beast rising up out of the sea = 13; a great beast having seven heads and ten horns = 13; a fiend in human form = 13; atheist = 13; Baal = 13; death, hell, and the devil = 13. The conclusion is clear: 666 is an "evil" number from the Graeco-Hebrew world of the New Testament and is suitable for Greek and Hebrew gematria; but Calibrated Gematria is for English and uses the "evil" number from English culture, 13.
So whom does Calibrated Gematria peg as the antichrist? I've tried many, many names, but so far only one adds up to 13: none other than George W. Bush! There are several names that add up to -13, though — most notably, John Kerry, Bush's rival in the 2004 election. Note also: George Bush = U.S. President Bush = the great beast = thirteen = 2; George Walker Bush = the antichrist = 10; Shrub = 31 (13 backwards).
There are some interesting parallels here with Simple English Gematria (S:E:G:, the system where A=1, B=2, and so on down to Z=26). The following values are from S:E:G:, not Calibrated Gematria:
- six hundred threescore and six = 313 (13 combined with its mirror image).
- triple six = three sixes = 132 (13 plus 2, the value of thirteen in Calibrated Gematria).
- George W. Bush = John F. Kerry = 130.
- Note: In Calibrated Gematria the number thirteen is the number of the beast. = 666.
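For completeness, S:E:G: values are even simpler to compute; here is a short sketch in the same style (the helper name `seg` is mine):

```python
def seg(phrase: str) -> int:
    """Simple English Gematria: A=1, B=2, ..., Z=26; other characters ignored."""
    return sum(ord(c) - 64 for c in phrase.upper() if "A" <= c <= "Z")

assert seg("six hundred threescore and six") == 313
assert seg("triple six") == seg("three sixes") == 132
assert seg("George W. Bush") == seg("John F. Kerry") == 130
```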
The three canonical archangels match three of the four cardinal directions: Michael = west = 0; Raphael = east = -5; Gabriel = -40; south = 40. Who then is the angel of the north? Or maybe that's the wrong question. Raphael has called himself "one of the seven who stand before the throne" — and throne = north = 4. So maybe the throne is in the north and the seven angels stand at the other points of the compass. Naturally the big three take the three remaining cardinal directions, as indicated by the gematria, and the four other angels must be stationed at the northeast, northwest, southeast, and southwest, respectively. |
- Heritage Daily: “Australia’s Oldest Rock Art Discovered by USQ Researcher”
In the Arnhem Land plateau of Australia's Northern Territory, in a place so remote it is best reached by helicopter, is Narwala Gabarnmang, the "Sistine Chapel of rock art sites." Discovered during a 2006 survey sponsored by the Jawoyn Association Aboriginal Corporation, Narwala Gabarnmang is a huge open rock shelter. The cave's ceiling is adorned with hundreds of rock paintings.
“Rock art is notoriously difficult to date and although we know that people had occupied this site at least 45,000 years ago we did not know how old the art was,” says University of South Queensland’s Bryce Barker. Rock paintings are difficult to date because they are often painted with minerals and contain no organic material for carbon dating. Therefore, Barker’s discovery of a charcoal rock drawing last year was particularly exciting.
Carbon-14 analysis dates this piece of Narwala Gabarnmang art to 28,000 years. That beats the previous Australian rock art record held by the Bradshaw paintings in Kimberley. “The Bradshaws are often talked about as being the oldest rock art in Australia, but the oldest firm date for them is 16,000–17,000 years taken from a wasp nest covering the art,” Barker explains.1 “Some rock art has previously been dated older than 28,000 but there are problems with [those examples].”2 However, Barker adds, unlike the Bradshaws, the art he found at Narwala Gabarnmang has been “unequivocally dated,”3 assigning it an age in the same ballpark as the bears painted in France’s Chauvet cave (though those dates are also disputed4).
“It puts Aboriginal people up there as among the most advanced people in human evolution,” Barker explains. “Some of the earliest achievements by modern humans were happening in this country.”5
Evidence of a high level of cultural achievement early in Australian Aboriginal history is consistent with other findings from Narwala Gabarnmang. Jawoyn Homeland Project archaeologists have also found a stone axe with a ground edge. “Carbon dating around the entire piece returned a date of 35,500 years, making it the oldest of its type in the world,”6 according to the Jawoyn Association’s website. Barker points out that such “stone tool technology” elsewhere in the world is associated with much later dates. Furthermore, carbon dating of charcoal from the cave where Barker found the charcoal rock drawing has revealed “Aboriginal people were visiting the site more than 45,000 years ago.”7
Such dating methods are based on assumptions that cannot be proven. In the case of carbon-14 dating, one such assumption is that the plants buried in strata deep in the geologic record had the same ratio of carbon-14 to carbon-12 as those today. However, there is no way to know if the production and atmospheric concentration of carbon-14 were the same when those plants were buried. In fact, one factor that distinguished the world in which those plants were buried from the world today is the mass of the biosphere. Based on the vast quantity of fossil fuels buried in the earth, the world prior to the global Flood may be safely envisioned as having a far larger biosphere (many more plants and animals) than that of the post-Flood world. Massive destruction by the Flood would have radically altered that. Therefore, because scientists normally do not account for the much larger biosphere in the pre-Flood world and assume the proportion of carbon-14 to carbon-12 has always been the same, they calculate dates much higher than the dates calculated on the basis of the Bible’s chronology. (Be sure to read more about the strengths, weaknesses, and corrections of carbon-14 dates at A Creationist Puzzle.)
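To make the role of that assumption concrete, here is a minimal sketch of the standard decay-law calculation. The specific ratios below are hypothetical, chosen only to show how strongly the computed age depends on the assumed starting carbon-14/carbon-12 ratio:

```python
import math

HALF_LIFE = 5730.0  # Cambridge half-life of carbon-14, in years

def c14_age(measured_ratio: float, assumed_initial_ratio: float) -> float:
    """Age from the decay law N = N0 * (1/2)**(t / HALF_LIFE), solved for t."""
    return HALF_LIFE * math.log(assumed_initial_ratio / measured_ratio, 2)

# The same measurement yields very different "ages" depending on the
# assumed starting ratio (both ratios here are illustrative, not measured):
sample = 1.0e-13
print(round(c14_age(sample, 1.2e-12)))  # ~20,542 years
print(round(c14_age(sample, 0.6e-12)))  # ~14,812 years
```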
The shadow of an additional bit of fallacious reasoning haunts Barker’s “unequivocally dated” art. The charcoal was carbon-dated at 28,000 years, but the charcoal and the drawing are not necessarily the same age. The wood used to make the charcoal could have been substantially older than the artwork.
Thus, while we would discount the extreme absolute ages for the cave art, ground-edge stone axe, and charcoal, we certainly do not dispute the claim that Australian Aborigines were capable of advanced abstract thinking, genuine artwork, and technology comparable to or exceeding that of indigenous ancient people the world over.
All people, including Australian Aborigines, are descended from Noah’s family. After God confused the languages of people at the Tower of Babel, in the years following the global Flood of about 2350 BC, groups of people dispersed through the world. Each group surely possessed knowledge of some current technology in addition to their new language. Thus, we expect to see evidence of intelligent people scattered all over the world, including these paintings in Narwala Gabarnmang and cave paintings in Spain (like the record-breaking cave art described last week in Handprints in Northern Spain). Aborigines were not at the head of the human evolutionary pack but were, like groups of people all over the world, using their intelligence and their skills to rebuild civilizations after the Flood and migration from Babel.
Australia’s Aborigines, furthermore, have been historic victims of racially driven devastation. Such abuse began with European colonization. Later, in the 19th century, on the basis of Darwinian philosophy, Aborigines were hunted for their scientific value. Colonial policies eventually gave way to more protective paternalistic policies but continued for many years to treat the Australian Aborigines disrespectfully. Thankfully, those policies have now ended. Yet how much Aboriginal suffering would have been averted through the years, both pre- and post-Darwin, if those who interacted with the original indigenous people of Australia had acknowledged the wisdom spoken by the apostle Paul in Acts 17:24–31? God our Creator “has made from one blood every nation of men to dwell on all the face of the earth, and has determined their preappointed times and the boundaries of their dwellings, so that they should seek the Lord, in the hope that they might grope for Him and find Him, though He is not far from each one of us.” Had they viewed Aborigines with biblical wisdom, they would have known they owed them the love of Christ and the knowledge of His gospel.
- Handprints in Northern Spain
- The Human Kind
- News to Note, February 2, 2008
- Doesn’t Carbon-14 Dating Disprove the Bible?
- Carbon-14 Dating
- Carbon-14 in Fossils and Diamonds
- A Creationist Puzzle
- Is the Wood Recently Found on Mt. Ararat from the Ark?
For More Information: Get Answers
Remember, if you see a news story that might merit some attention, let us know about it! (Note: if the story originates from the Associated Press, FOX News, MSNBC, the New York Times, or another major national media outlet, we will most likely have already heard about it.) And thanks to all of our readers who have submitted great news tips to us. If you didn’t catch all the latest News to Know, why not take a look to see what you’ve missed? |
Japanese encephalitis is an infection of the brain caused by a virus. The virus is transmitted to humans by mosquitoes of the genus Culex. It is a severe viral disease that can affect the central nervous system (CNS) and cause serious complications and death.
The virus that causes Japanese encephalitis is an arbovirus, that is, an arthropod-borne virus; mosquitoes are a type of arthropod. Mosquitoes in a number of regions carry this virus and are responsible for passing it along to humans. Because the virus is carried by mosquitoes, the number of people infected increases during the seasons when mosquitoes are abundant, which tend to be the warmest, rainiest months. In addition to humans, other animals such as wild birds, pigs, and horses are susceptible to infection with this arbovirus. Because the specific type of mosquito carrying the Japanese encephalitis virus frequently breeds in rice paddies, ponds, pools, and ditches, the disease is considered a primarily rural problem. Only Culex mosquitoes can transmit the Japanese encephalitis virus, and they bite mainly between dusk and dawn. Culex mosquitoes prefer pig blood to human blood, and the JE virus multiplies in the pig’s body.
Viva Questions: Attenuation in Optical Fibers Experiment
Attenuation losses in Optical Fibers
An optical fiber is made of either glass or plastic. The refractive index of the material plays an important role in the transmission of the optical signals used in communications. The signal travels inside the core of the optical fiber at the speed of light in that medium (the vacuum speed of light divided by the refractive index). Total internal reflection (TIR) is the principle by which the light signal propagates along the core, undergoing multiple reflections at the core–cladding interface. For long distances, optical fibers are laid beneath the surface of the earth, so the fiber is inevitably bent at corners, which raises the possibility of signal loss.
The questions below offer hints for understanding how an optical fiber works and why signal losses arise.
1Q. What is the critical angle?
2Q. What is an optical fiber?
3Q. On the basis of refractive index profile, how many types of optical fiber are there?
4Q. What are guided modes in an optical fiber?
5Q. How many modes are possible in a step-index optical fiber?
6Q. How many modes are possible in a graded-index optical fiber?
7Q. Is the refractive index maximum or minimum at the central axis of a graded-index optical fiber core?
8Q. What do you mean by the acceptance angle and acceptance cone of an optical fiber?
9Q. Numerical aperture (NA) is dimensionless; what information do you get from it?
10Q. If there are two optical fibers, one with NA = 0.23 and the other with NA = 0.52, which has more capacity to carry a large number of optical signals?
11Q. In a step-index optical fiber, two extreme modes are available: one corresponds to the extreme ray that makes the maximum angle (equal to the acceptance angle) and the other to the ray along the central axis. Do both light rays reach the receiving end at the same time?
12Q. What do you understand by the bandwidth and normalized frequency of an optical fiber?
13Q. Can you explain dispersion in a prism, and how does it differ in optical fibers?
14Q. What are the two transmission characteristics in optical fibers?
15Q. In which unit is signal loss measured?
16Q. What is the formula for attenuation loss? (A worked sketch follows the note below.)
17Q. What are the main applications of the optical fibers?
NOTE: For calculating fiber loss and maximum-distance estimates, see http://www.samm.com/calculating-fiber-loss-and-maximum-distance-estimates
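As a hint for Q16: signal loss is normally quoted in decibels per kilometre. Here is a minimal sketch of that calculation (the power values below are made up purely for illustration):

```python
import math

def attenuation_db_per_km(p_in_mw: float, p_out_mw: float, length_km: float) -> float:
    """Attenuation = (10 / L) * log10(P_in / P_out), in dB/km."""
    return (10.0 / length_km) * math.log10(p_in_mw / p_out_mw)

# Example: 1.0 mW launched, 0.5 mW received over 2 km of fibre
print(round(attenuation_db_per_km(1.0, 0.5, 2.0), 3))  # ~1.505 dB/km
```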
In this lesson we discussed thermal contraction and expansion and completed a lab, while I did a demonstration immersing a balloon in cold air and then in warm water, with students observing and measuring the changes in the balloon's size. The due dates for the lab are below. The lab can be downloaded here: Thermal Expansion Demonstration. Students may wish to refer to this chapter to complete the follow-up questions if they can’t remember from class:
Students also presented their mini-lessons on conduction, convection and radiation. This information will be on the quiz next week. If you want to read about each of these in more detail, you can download the chapters below. You also may view the videos by going into YouTube and searching Science Geeks with the subject you want to view (e.g. Science Geeks Conduction).
Due dates for Expansion/Contraction Lab AND Quiz:
- Class 70: Monday, June 15th
- Class 71: Friday, June 12th
- Split 7s: Wednesday, June 10th
In Lesson 1 we learned that heat is the transfer of energy. Today students were given an assignment to study the three ways that heat is transferred: conduction, convection and radiation. Students researched their information in groups and prepared a poster and an information sheet in preparation for teaching a short mini-lesson on their topic next science class.
Assignment expectations were distributed in class to every student this lesson. I do not have this in electronic format (my computer erased it when it shut down unexpectedly!) so if you need another copy, please drop by the science room! Assessment rubric here: Science Assignments Rubric – Group Work
We introduced this lesson with an overview from Bill Nye the Science Guy! Students completed a T/F quiz.
In our introductory lesson we defined heat, temperature, kinetic energy and thermal energy. Students were also given a terminology page that they are responsible for completing as we progress through the unit.
You can download Lesson 1 here: Lesson 1 Explaining Hot and Cold
Science Vocabulary sheet: Heat Vocabulary Sheet
Here is Bill!:
Yes, we are moving on to our next unit! Unfortunately, without our computers we are unable to do our last period of research for the blog assignment, AND with last week's lessons lost to the year-end trip, we are getting short on time!
In this last unit, students will be evaluated on two assignments (time is given in class for research and writing, though any unfinished work must be completed before the next class!), a lab, two quizzes and, as always, a big emphasis on class participation! Follow the blog to make sure you stay on top of things!
Many students ask me during this lesson, “What does this have to do with science?” Science links with many other areas in life! Though we have been focusing on buildings and bridges in this unit, remember that a structure is something put together with parts for a purpose; therefore any product is a structure! Engineers must consider many factors when making new products, and this lesson briefly looks at the design, manufacturing and sales process.
No assignment this week (the class did a short group/sharing activity instead).
You can download the PowerPoint here: Lesson 5 Product Development
Student notes organizer: Lesson 5 Product Development Process Student Organizer
We also watched a video and had discussion about the manufacturing process:
This week students will be completing the testing of their bridges, submitting their lab package and writing their mid-unit quiz. Review has been done in class, but students are reminded to review all new vocabulary (listed under each lesson on this blog and in their notes).
Reminder that the Bridge Building Lab is due this week!
Students will have two periods to design and build a bridge. All bridges will be tested upon completion.
Please find attached the lab assignment, along with the rubrics. Please note: The design, response and reflection portion constitute a big part of the mark for this lab! It is important that all sections are completed!
Have fun! 🙂
Click to download: Straw Bridge Assignment |
What causes a rainbow?
See this page in: Dutch
The technical details of rainbow formation were first analyzed by Isaac Newton in 1665. His brilliant optics work concerning reflection and refraction certainly does not detract from the beauty and promise of the rainbow. On the contrary, Newton's scientific insights show the marvelous complexity of creation. The rainbow is a gracious pledge that God will not destroy the earth a second time with a worldwide flood (Genesis 9:11-17).
A rainbow occurs when raindrops and sunshine cross paths. Sunlight consists of all the colors of light, which add together to make white illumination. When sunlight enters water drops, it reflects off their inside surfaces. While passing through the droplets, the light also separates into its component colors, which is similar to the effect of a glass prism. Each falling water drop actually flashes its colors to the observer for just an instant, before another drop takes its place.
A rainbow is usually seen in the opposite direction in the sky from the sun. The rainbow light is reflected to the eye at an angle of 42 degrees to the original ray of sunlight. The bow shape is actually part of a cone of light that is cut off by the horizon. If you travel toward the end of a rainbow, it will move ahead of you, maintaining its shape. Thus, there is no real end to a rainbow, and no pot of gold waiting there. Because the 42 degree angle is measured from each individual observer's eye, no two people see exactly the same rainbow. Every person is at the center of his or her own particular cone of colored light. From the high vantage point of a mountaintop or an airplane a complete circle of rainbow light sometimes can be seen.
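For readers who want to see where the 42-degree figure comes from, here is a minimal sketch of the ray geometry for one internal reflection inside a droplet, assuming a refractive index for water of about 1.333 (an illustrative value; the exact index varies slightly with color, which is what spreads the bow into bands):

```python
import math

N_WATER = 1.333  # approximate refractive index of water for visible light

def deviation(theta_i_deg: float) -> float:
    """Total deviation (degrees) of a sunray after one internal reflection."""
    ti = math.radians(theta_i_deg)
    tr = math.asin(math.sin(ti) / N_WATER)        # Snell's law at entry
    return 180.0 + 2 * theta_i_deg - 4 * math.degrees(tr)

# Scan incidence angles; light piles up at the minimum deviation,
# which sets the "rainbow angle" seen by the observer.
min_dev = min(deviation(i / 10) for i in range(1, 900))
print(round(180.0 - min_dev, 1))  # ~42.1 degrees from the antisolar point
```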
The bright, primary rainbow has red on the outer edge and blue within. Higher in the sky there is always another, dimmer rainbow with the order of colors reversed. This secondary rainbow results from additional reflection of sunlight through the raindrops. It is most visible when there are dark clouds behind it. Look for the second bow high in the sky the next time rainbow colors appear. Some observers have even reported seeing third and fourth rainbows above the first two.
Motor disabilities can range in severity depending on the level of movement that the individual maintains. While some motor disabilities are due to a traumatic event such as a brain injury or the loss of a limb, others are the result of a disease or congenital condition, including muscular dystrophy, Parkinson's disease, ALS, and cerebral palsy.
Despite the wide range of motor disabilities that might impact an individual's ability to use a computer and access the Internet, there are a few assistive technologies and devices that are frequently employed. Which device is most useful depends on the severity of the individual's disability, as well as their personal preference and what they find most helpful.
For individuals with limited movement, tools like mouth sticks or head wands may be useful. Mouth sticks tend to be inexpensive and easy for most individuals to use, making them quite common.
These devices might be used in conjunction with adaptive keyboards, keyboard auto-completion software, or oversized ball mice.
If movement is even more limited, individuals may choose eye-tracking software, which lets them control mouse movement with their eyes, or voice recognition software.
To make sure that content can be navigated easily by people using these devices or others, it's important to make sure that a user's movement on a website can be controlled by using a computer keyboard (either the tab key, arrows, or both). |
Students read “Goodnight Moon” by Margaret Wise Brown and then identify household objects around their room. In this vocabulary lesson plan, students learn the names that go with items around their room and practice by reciting the words aloud.
Protein S is one of the many vital proteins in the human body. It works to control the blood clotting process. The blood’s ability to clot is very important. It helps prevent excessive blood loss when an injury occurs. However, a blood clot in an artery or a vein (called thrombosis) can be extremely dangerous.
The body contains coagulants and anticoagulants. Coagulants encourage clotting, while anticoagulants help prevent it. Protein S is an anticoagulant. If there is not enough of it, it increases the chances that a harmful type of blood clot will form. The correct amount of protein S is needed to ensure the blood clotting process functions properly.
A doctor will usually order a protein S measurement to find out whether the protein is present at the right levels.
The most common reasons that a doctor may order a protein S measurement are:
Family History
In some instances, a protein S deficiency is inherited. Some people are simply born with a shortage of this particular anticoagulant. A doctor may order testing if a patient has one or more close family members with a history of dangerous blood clots or a known deficiency of protein S.
Blood Clot Incidents
If a person develops a blood clot in a vein or an artery (thrombosis), a protein S measurement will often be performed. This can help doctors determine the cause of the thrombosis. Clots associated with a lack of protein S tend to form in veins.
Some causes of low protein S levels include:
- taking prescription anticoagulants such as warfarin and some other types of medicines
- liver disease
- vitamin K deficiency
For most people with a protein S deficiency, a potentially dangerous blood clot is often the first sign that something is wrong. The clot most often appears in the leg or chest area and there are usually no symptoms leading up to the event.
It is also important to note that a protein S deficiency does not always mean a person will develop thrombosis. People affected may go through their entire lives without any problems.
Your doctor will evaluate your medical history and your medication use before the test to decide when the test should be done and if you need to do anything to prepare. Two considerations before testing include:
- The test should not be done until at least 10 days after a thrombosis event.
- You will need to stop taking anticoagulants for a minimum of two weeks to ensure accurate results.
You will need to provide a blood sample for a protein S measurement. A needle will be inserted into a vein and blood will be collected in a vial. You may experience some minor pain as the needle is being inserted and some soreness afterward. Serious complications, however, are rare.
Your doctor will interpret your results and discuss with you any diagnosis or abnormalities, if present. Results are usually presented in terms of percent inhibition. These percentage values should usually fall between about 60 and 150. There might be some slight differences among testing facilities. High levels of protein S are not typically cause for concern, whereas low levels indicate a deficiency in protein S, which could signal an infection or a likelihood of blood clots.
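As a rough illustration only, the 60–150 range described above could be encoded as follows. The cutoffs come from the text, not from any clinical standard; reference ranges vary by laboratory, and interpretation belongs to your doctor:

```python
def interpret_protein_s(percent: float) -> str:
    """Illustrative only: classify a result against the ~60-150 range above."""
    if percent < 60:
        return "low - possible protein S deficiency; follow up with your doctor"
    if percent <= 150:
        return "within the typical range"
    return "high - not usually a cause for concern"

print(interpret_protein_s(45))  # low - possible protein S deficiency; ...
```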
If a protein S deficiency does exist, the follow-up will depend on the cause. Sometimes, there is another condition causing protein S levels to be lower than they should be. In these cases, addressing the underlying condition is the logical next step.
For those with an inherited deficiency, the focus will usually be on reducing or eliminating risk factors for clots. Simple lifestyle changes such as quitting smoking, exercising often, and avoiding the use of birth control pills are just some ways to lessen the chances that a lower than optimal amount of protein S will lead to a potentially dangerous clot. |
- Microcirculation: The circulation of the BLOOD through the MICROVASCULAR NETWORK.
- Arterioles: The smallest divisions of the arteries located between the muscular arteries and the capillaries.
- Mouth Floor
- Capillaries: The minute vessels that connect the arterioles and venules.
- Laser-Doppler Flowmetry: A method of non-invasive, continuous measurement of MICROCIRCULATION. The technique is based on the values of the DOPPLER EFFECT of low-power laser light scattered randomly by static structures and moving tissue particulates.
- Venules: The minute vessels that collect blood from the capillary plexuses and join together to form veins.
- Cheek: The part of the face that is below the eye and to the side of the nose and mouth.
- Microvessels: The finer blood vessels of the vasculature that are generally less than 100 microns in internal diameter.
- Microscopic Angioscopy: The noninvasive microscopic examination of the microcirculation, commonly done in the nailbed or conjunctiva. In addition to the capillaries themselves, observations can be made of passing blood cells or intravenously injected substances. This is not the same as endoscopic examination of blood vessels (ANGIOSCOPY).
- Microscopy, Video: Microscopy in which television cameras are used to brighten magnified images that are otherwise too dark to be seen with the naked eye. It is used frequently in TELEPATHOLOGY.
- Leukocytes: White blood cells. These include granular leukocytes (BASOPHILS; EOSINOPHILS; and NEUTROPHILS) as well as non-granular leukocytes (LYMPHOCYTES and MONOCYTES).
- Blood Flow Velocity: A value equal to the total volume flow divided by the cross-sectional area of the vascular bed.
- Liver Circulation: The circulation of BLOOD through the LIVER.
- Regional Blood Flow: The flow of BLOOD through or around an organ or region of the body.
- Vasodilation: The physiological widening of BLOOD VESSELS by relaxing the underlying VASCULAR SMOOTH MUSCLE.
- Splanchnic Circulation: The circulation of blood through the BLOOD VESSELS supplying the abdominal VISCERA.
- Coronary Circulation: The circulation of blood through the CORONARY VESSELS of the HEART.
- Acridine Orange: A cationic cytochemical stain specific for cell nuclei, especially DNA. It is used as a supravital stain and in fluorescence cytochemistry. It may cause mutations in microorganisms.
- Retinal Vessels: The blood vessels which supply and drain the RETINA.
- Endothelium, Vascular: Single pavement layer of cells which line the luminal surface of the entire vascular system and regulate the transport of macromolecules and blood components.
- Skin: The outer covering of the body that protects it from the environment. It is composed of the DERMIS and the EPIDERMIS.
- Vasodilator Agents: Drugs used to cause dilation of the blood vessels.
- Mesocricetus: A genus of the family Muridae having three species. The present domesticated strains were developed from individuals brought from Syria. They are widely used in biomedical research.
- Leukocyte Rolling: Movement of tethered, spherical LEUKOCYTES along the endothelial surface of the microvasculature. The tethering and rolling involves interaction with SELECTINS and other adhesion molecules in both the ENDOTHELIUM and leukocyte. The rolling leukocyte then becomes activated by CHEMOKINES, flattens out, and firmly adheres to the endothelial surface in preparation for transmigration through the interendothelial cell junction. (From Abbas, Cellular and Molecular Immunology, 3rd ed)
- Pia Mater: The innermost layer of the three meninges covering the brain and spinal cord. It is the fine vascular membrane that lies under the ARACHNOID and the DURA MATER.
- Vasoconstriction: The physiological narrowing of BLOOD VESSELS by contraction of the VASCULAR SMOOTH MUSCLE.
- Fluorescein-5-isothiocyanate: Fluorescent probe capable of being conjugated to tissue and proteins. It is used as a label in fluorescent antibody staining procedures as well as protein- and amino acid-binding techniques.
- Capillary Permeability: The property of blood capillary ENDOTHELIUM that allows for the selective exchange of substances between the blood and surrounding tissues and through membranous barriers such as the BLOOD-AIR BARRIER; BLOOD-AQUEOUS BARRIER; BLOOD-BRAIN BARRIER; BLOOD-NERVE BARRIER; BLOOD-RETINAL BARRIER; and BLOOD-TESTIS BARRIER. Small lipid-soluble molecules such as carbon dioxide and oxygen move freely by diffusion. Water and water-soluble molecules cannot pass through the endothelial walls and are dependent on microscopic pores. These pores show narrow areas (TIGHT JUNCTIONS) which may limit large molecule movement.
- Hemodynamics: The movement and the forces involved in the movement of the blood through the CARDIOVASCULAR SYSTEM.
- Cerebrovascular Circulation: The circulation of blood through the BLOOD VESSELS of the BRAIN.
- Hemorheology: The deformation and flow behavior of BLOOD and its elements i.e., PLASMA; ERYTHROCYTES; WHITE BLOOD CELLS; and BLOOD PLATELETS.
- Mesentery: A layer of the peritoneum which attaches the abdominal viscera to the ABDOMINAL WALL and conveys their blood vessels and nerves.
- Nails: The thin, horny plates that cover the dorsal surfaces of the distal phalanges of the fingers and toes of primates.
- Corrosion Casting: A tissue preparation technique that involves the injecting of plastic (acrylates) into blood vessels or other hollow viscera and treating the tissue with a caustic substance. This results in a negative copy or a solid replica of the enclosed space of the tissue that is ready for viewing under a scanning electron microscope.
- Reperfusion Injury: Adverse functional, metabolic, or structural changes in ischemic tissues resulting from the restoration of blood flow to the tissue (REPERFUSION), including swelling; HEMORRHAGE; NECROSIS; and damage from FREE RADICALS. The most common instance is MYOCARDIAL REPERFUSION INJURY.
- Diagnostic Techniques, Cardiovascular: Methods and procedures for the diagnosis of diseases or dysfunction of the cardiovascular system or its organs or demonstration of their physiological processes.
- Renal Circulation: The circulation of the BLOOD through the vessels of the KIDNEY.
- Skin Window Technique: A technique to study CELL MIGRATION in the INFLAMMATION process or during immune reactions. After an area on the skin is abraded, the movement of cells in the area is followed via microscopic observation of the exudate through a coverslip or tissue culture chamber placed over the area.
- Nitric Oxide: A free radical gas produced endogenously by a variety of mammalian cells, synthesized from ARGININE by NITRIC OXIDE SYNTHASE. Nitric oxide is one of the ENDOTHELIUM-DEPENDENT RELAXING FACTORS released by the vascular endothelium and mediates VASODILATION. It also inhibits platelet aggregation, induces disaggregation of aggregated platelets, and inhibits platelet adhesion to the vascular endothelium. Nitric oxide activates cytosolic GUANYLATE CYCLASE and thus elevates intracellular levels of CYCLIC GMP.
- Fluorophotometry: Measurement of light given off by fluorescein in order to assess the integrity of various ocular barriers. The method is used to investigate the blood-aqueous barrier, blood-retinal barrier, aqueous flow measurements, corneal endothelial permeability, and tear flow dynamics.
- Cell Adhesion: Adherence of cells to surfaces or to other cells.
- Vascular Resistance: The force that opposes the flow of BLOOD through a vascular bed. It is equal to the difference in BLOOD PRESSURE across the vascular bed divided by the CARDIAC OUTPUT.
- Rats, Wistar: A strain of albino rat developed at the Wistar Institute that has spread widely at other institutions. This has markedly diluted the original strain.
- Blood Pressure: PRESSURE of the BLOOD on the ARTERIES and other BLOOD VESSELS.
- Nitroprusside: A powerful vasodilator used in emergencies to lower blood pressure or to improve cardiac function. It is also an indicator for free sulfhydryl groups in proteins.
- Hyperemia: The presence of an increased amount of blood in a body part or an organ leading to congestion or engorgement of blood vessels. Hyperemia can be due to increase of blood flow into the area (active or arterial), or due to obstruction of outflow of blood from the area (passive or venous).
- Iontophoresis: Therapeutic introduction of ions of soluble salts into tissues by means of electric current. In medical literature it is commonly used to indicate the process of increasing the penetration of drugs into surface tissues by the application of electric current. It has nothing to do with ION EXCHANGE; AIR IONIZATION nor PHONOPHORESIS, none of which requires current.
- Oxygen: An element with atomic symbol O, atomic number 8, and atomic weight [15.99903; 15.99977]. It is the most abundant element on earth and essential for respiration.
- Vasomotor System: The neural systems which act on VASCULAR SMOOTH MUSCLE to control blood vessel diameter. The major neural control is through the sympathetic nervous system.
- Erythrocytes: Red blood cells. Mature erythrocytes are non-nucleated, biconcave disks containing HEMOGLOBIN whose function is to transport OXYGEN.
- Blood Viscosity: The internal resistance of the BLOOD to shear forces. The in vitro measure of whole blood viscosity is of limited clinical utility because it bears little relationship to the actual viscosity within the circulation, but an increase in the viscosity of circulating blood can contribute to morbidity in patients suffering from disorders such as SICKLE CELL ANEMIA and POLYCYTHEMIA.
- Microscopy: The use of instrumentation and techniques for visualizing material and details that cannot be seen by the unaided eye. It is usually done by enlarging images, transmitted by light or electron beams, with optical or magnetic lenses that magnify the entire image field. With scanning microscopy, images are generated by collecting output from the specimen in a point-by-point fashion, on a magnified scale, as it is scanned by a narrow beam of light or electrons, a laser, a conductive probe, or a topographical probe.
- Rheology: The study of the deformation and flow of matter, usually liquids or fluids, and of the plastic flow of solids. The concept covers consistency, dilatancy, liquefaction, resistance to flow, shearing, thixotrophy, and VISCOSITY.
- Models, Cardiovascular: Theoretical representations that simulate the behavior or activity of the cardiovascular system, processes, or phenomena; includes the use of mathematical equations, computers and other electronic equipment.
- Rats, Sprague-Dawley: A strain of albino rat used widely for experimental purposes because of its calmness and ease of handling. It was developed by the Sprague-Dawley Animal Company.
- Dextrans: A group of glucose polymers made by certain bacteria. Dextrans are used therapeutically as plasma volume expanders and anticoagulants. They are also commonly used in biological experimentation and in industry for a wide variety of purposes.
- Microscopy, Fluorescence: Microscopy of specimens stained with fluorescent dye (usually fluorescein isothiocyanate) or of naturally fluorescent materials, which emit light when exposed to ultraviolet or blue light. Immunofluorescence microscopy utilizes antibodies that are labeled with fluorescent dye.
- Vasoconstrictor Agents: Drugs used to cause constriction of the blood vessels.
- Acetylcholine: A neurotransmitter found at neuromuscular junctions, autonomic ganglia, parasympathetic effector junctions, a subset of sympathetic effector junctions, and at many sites in the central nervous system.
- Hydroxyethyl Starch Derivatives: Starches that have been chemically modified so that a percentage of OH groups are substituted with 2-hydroxyethyl ether groups.
- Endotoxemia: A condition characterized by the presence of ENDOTOXINS in the blood. On lysis, the outer cell wall of gram-negative bacteria enters the systemic circulation and initiates a pathophysiologic cascade of pro-inflammatory mediators.
- Coronary Vessels: The veins and arteries of the HEART.
- P-Selectin: Cell adhesion molecule and CD antigen that mediates the adhesion of neutrophils and monocytes to activated platelets and endothelial cells.
- Ischemia: A hypoperfusion of the BLOOD through an organ or tissue caused by a PATHOLOGIC CONSTRICTION or obstruction of its BLOOD VESSELS, or an absence of BLOOD CIRCULATION.
- Hematocrit: The volume of packed RED BLOOD CELLS in a blood specimen. The volume is measured by centrifugation in a tube with graduated markings, or with automated blood cell counters. It is an indicator of erythrocyte status in disease. For example, ANEMIA shows a low value; POLYCYTHEMIA, a high value.
- Microscopy, Polarization: Microscopy using polarized light in which phenomena due to the preferential orientation of optical properties with respect to the vibration plane of the polarized light are made visible and correlated parameters are made measurable.
- Erythrocyte Deformability: Ability of ERYTHROCYTES to change shape as they pass through narrow spaces, such as the microvasculature.
- Blood Gas Monitoring, Transcutaneous: The noninvasive measurement or determination of the partial pressure (tension) of oxygen and/or carbon dioxide locally in the capillaries of a tissue by the application to the skin of a special set of electrodes. These electrodes contain photoelectric sensors capable of picking up the specific wavelengths of radiation emitted by oxygenated versus reduced hemoglobin.
- Disease Models, Animal: Naturally occurring or experimentally induced animal diseases with pathological processes sufficiently similar to those of human diseases. They are used as study models for human diseases.
- Time Factors: Elements of limited time intervals, contributing to particular results or situations.
- Video Recording: The storing or preserving of video signals for television to be played back later via a transmitter or receiver. Recordings may be made on magnetic tape or discs (VIDEODISC RECORDING).
- Pancreatitis: INFLAMMATION of the PANCREAS. Pancreatitis is classified as acute unless there are computed tomographic or endoscopic retrograde cholangiopancreatographic findings of CHRONIC PANCREATITIS (International Symposium on Acute Pancreatitis, Atlanta, 1992). The two most common forms of acute pancreatitis are ALCOHOLIC PANCREATITIS and gallstone pancreatitis.
- Muscle, Skeletal: A subtype of striated muscle, attached by TENDONS to the SKELETON. Skeletal muscles are innervated and their movement can be consciously controlled. They are also called voluntary muscles.
- Tongue Diseases
- Forearm: Part of the arm in humans and primates extending from the ELBOW to the WRIST.
- Adenosine: A nucleoside that is composed of ADENINE and D-RIBOSE. Adenosine or adenosine derivatives play many important biological roles in addition to being components of DNA and RNA. Adenosine itself is a neurotransmitter.
- Bradykinin: A nonapeptide messenger that is enzymatically produced from KALLIDIN in the blood where it is a potent but short-lived agent of arteriolar dilation and increased capillary permeability. Bradykinin is also released from MAST CELLS during asthma attacks, from gut walls as a gastrointestinal vasodilator, from damaged tissues as a pain signal, and may be a neurotransmitter.
- Hemodilution: Reduction of blood viscosity usually by the addition of cell free solutions. Used clinically (1) in states of impaired microcirculation, (2) for replacement of intraoperative blood loss without homologous blood transfusion, and (3) in cardiopulmonary bypass and hypothermia.
- Cricetinae: A subfamily in the family MURIDAE, comprising the hamsters. Four of the more common genera are Cricetus, CRICETULUS; MESOCRICETUS; and PHODOPUS.
- Perfusion: Treatment process involving the injection of fluid into an organ or tissue.
- Mesenteric Veins: Veins which return blood from the intestines; the inferior mesenteric vein empties into the splenic vein, the superior mesenteric vein joins the splenic vein to form the portal vein.
- Pancreas: A nodular organ in the ABDOMEN that contains a mixture of ENDOCRINE GLANDS and EXOCRINE GLANDS. The small endocrine portion consists of the ISLETS OF LANGERHANS secreting a number of hormones into the blood stream. The large exocrine portion (EXOCRINE PANCREAS) is a compound acinar gland that secretes several digestive enzymes into the pancreatic ductal system that empties into the DUODENUM.
- Resuscitation: The restoration to life or consciousness of one apparently dead. (Dorland, 27th ed)
- Capillary Leak Syndrome: A condition characterized by recurring episodes of fluid leaking from capillaries into extra-vascular compartments causing hematocrit to rise precipitously. If not treated, generalized vascular leak can lead to generalized EDEMA; SHOCK; cardiovascular collapse; and MULTIPLE ORGAN FAILURE.
- Ophthalmoscopes: Devices for examining the interior of the eye, permitting the clear visualization of the structures of the eye at any depth. (UMDNS, 1999)
- Microspheres: Small uniformly-sized spherical particles, of micrometer dimensions, frequently labeled with radioisotopes or various reagents acting as tags or markers.
- Contrast Media: Substances used to allow enhanced visualization of tissues.
- Pulmonary Circulation: The circulation of the BLOOD through the LUNGS.
- Nitroarginine: An inhibitor of nitric oxide synthetase which has been shown to prevent glutamate toxicity. Nitroarginine has been experimentally tested for its ability to prevent ammonia toxicity and ammonia-induced alterations in brain energy and ammonia metabolites. (Neurochem Res 1995;20(4):451-6)
- Vivisection: The cutting of or surgical operation on a living animal, usually for physiological or pathological investigation. (from Merriam-Webster's Collegiate Dict, 10th ed)
- Transillumination: Passage of light through body tissues or cavities for examination of internal structures.
- Ear: The hearing and equilibrium system of the body. It consists of three parts: the EXTERNAL EAR, the MIDDLE EAR, and the INNER EAR. Sound waves are transmitted through this organ where vibration is transduced to nerve signals that pass through the ACOUSTIC NERVE to the CENTRAL NERVOUS SYSTEM. The inner ear also contains the vestibular organ that maintains equilibrium by transducing signals to the VESTIBULAR NERVE.
- Sepsis: Systemic inflammatory response syndrome with a proven or suspected infectious etiology. When sepsis is associated with organ dysfunction distant from the site of infection, it is called severe sepsis. When sepsis is accompanied by HYPOTENSION despite adequate fluid infusion, it is called SEPTIC SHOCK.
- Oxygen Consumption: The rate at which oxygen is used by a tissue; microliters of oxygen STPD used per milligram of tissue per hour; the rate at which oxygen enters the blood from alveolar gas, equal in the steady state to the consumption of oxygen by tissue metabolism throughout the body. (Stedman, 25th ed, p346)
- Biological Factors: Endogenously-synthesized compounds that influence biological processes not otherwise classified under ENZYMES; HORMONES or HORMONE ANTAGONISTS.
- Image Processing, Computer-Assisted: A technique of inputting two-dimensional images into a computer and then enhancing or analyzing the imagery into a form that is more useful to the human observer.
- Shock, Septic: Sepsis associated with HYPOTENSION or hypoperfusion despite adequate fluid resuscitation. Perfusion abnormalities may include, but are not limited to LACTIC ACIDOSIS; OLIGURIA; or acute alteration in mental status.
- Blood Circulation: The movement of the BLOOD as it is pumped through the CARDIOVASCULAR SYSTEM.
- Arteries: The vessels carrying blood away from the heart.
- Optical Processes: Behavior of LIGHT and its interactions with itself and materials.
- Ceruletide: A specific decapeptide obtained from the skin of Hyla caerulea, an Australian amphibian. Caerulein is similar in action and composition to CHOLECYSTOKININ. It stimulates gastric, biliary, and pancreatic secretion; and certain smooth muscle. It is used in paralytic ileus and as diagnostic aid in pancreatic malfunction.
- Mice, Inbred C57BL
- Endothelial Cells: Highly specialized EPITHELIAL CELLS that line the HEART; BLOOD VESSELS; and lymph vessels, forming the ENDOTHELIUM. They are polygonal in shape and joined together by TIGHT JUNCTIONS. The tight junctions allow for variable permeability to specific macromolecules that are transported across the endothelial layer.
- Papaverine: An alkaloid found in opium but not closely related to the other opium alkaloids in its structure or pharmacological actions. It is a direct-acting smooth muscle relaxant used in the treatment of impotence and as a vasodilator, especially for cerebral vasodilation. The mechanism of its pharmacological actions is not clear, but it apparently can inhibit phosphodiesterases and it may have direct actions on calcium channels.
- Plasma Substitutes: Any liquid used to replace blood plasma, usually a saline solution, often with serum albumins, dextrans or other preparations. These substances do not enhance the oxygen-carrying capacity of blood, but merely replace the volume. They are also used to treat dehydration.
- Swine: Any of various animals that constitute the family Suidae and comprise stout-bodied, short-legged omnivorous mammals with thick skin, usually covered with coarse bristles, a rather long mobile snout, and small tail. Included are the genera Babyrousa, Phacochoerus (wart hogs), and Sus, the latter containing the domestic pig (see SUS SCROFA).
- Nitric Oxide Synthase: An NADPH-dependent enzyme that catalyzes the conversion of L-ARGININE and OXYGEN to produce CITRULLINE and NITRIC OXIDE.
- Models, Animal: Non-human animals, selected because of specific characteristics, for use in experimental research, teaching, or testing.
- Blood Vessels: Any of the tubular vessels conveying the blood (arteries, arterioles, capillaries, venules, and veins).
- Liver: A large lobed glandular organ in the abdomen of vertebrates that is responsible for detoxification, metabolism, synthesis and storage of various substances.
- Blood Substitutes: Substances that are used in place of blood, for example, as an alternative to BLOOD TRANSFUSIONS after blood loss to restore BLOOD VOLUME and oxygen-carrying capacity to the blood circulation, or to perfuse isolated organs.
- Ophthalmoscopy: Examination of the interior of the eye with an ophthalmoscope.
- Erythrocyte Aggregation: The formation of clumps of RED BLOOD CELLS under low or non-flow conditions, resulting from the attraction forces between the red blood cells. The cells adhere to each other in rouleaux aggregates. Slight mechanical force, such as occurs in the circulation, is enough to disperse these aggregates. Stronger or weaker than normal aggregation may result from a variety of effects in the ERYTHROCYTE MEMBRANE or in BLOOD PLASMA. The degree of aggregation is affected by ERYTHROCYTE DEFORMABILITY, erythrocyte membrane sialylation, masking of negative surface charge by plasma proteins, etc. BLOOD VISCOSITY and the ERYTHROCYTE SEDIMENTATION RATE are affected by the amount of erythrocyte aggregation and are parameters used to measure the aggregation.
- Edema: Abnormal fluid accumulation in TISSUES or body cavities. Most cases of edema are present under the SKIN in SUBCUTANEOUS TISSUE.
- Diagnostic Imaging: Any visual display of structural or functional patterns of organs or tissues for diagnostic evaluation. It includes measuring physiologic and metabolic responses to physical and chemical stimuli, as well as ultramicroscopy.
- Enzyme Inhibitors: Compounds or agents that combine with an enzyme in such a manner as to prevent the normal substrate-enzyme combination and the catalytic reaction.
- Organ Preservation: The process by which organs are kept viable outside of the organism from which they were removed (i.e., kept from decay by means of a chemical agent, cooling, or a fluid substitute that mimics the natural state within the organism).
- Hydronephrosis: Abnormal enlargement or swelling of a KIDNEY due to dilation of the KIDNEY CALICES and the KIDNEY PELVIS. It is often associated with obstruction of the URETER or chronic kidney diseases that prevents normal drainage of urine into the URINARY BLADDER.
- Fluorescein Angiography: Visualization of a vascular system after intravenous injection of a fluorescein solution. The images may be photographed or televised. It is used especially in studying the retinal and uveal vasculature.
- Reperfusion: Restoration of blood supply to tissue which is ischemic due to decrease in normal blood supply. The decrease may result from any source including atherosclerotic obstruction, narrowing of the artery, or surgical clamping. It is primarily a procedure for treating infarction or other ischemia, by enabling viable ischemic tissue to recover, thus limiting further necrosis. However, it is thought that reperfusion can itself further damage the ischemic tissue, causing REPERFUSION INJURY.
- Lasers: An optical source that emits photons in a coherent beam. Light Amplification by Stimulated Emission of Radiation (LASER) is brought about using devices that transform light of varying frequencies into a single intense, nearly nondivergent beam of monochromatic radiation. Lasers operate in the infrared, visible, ultraviolet, or X-ray regions of the spectrum.
- Histamine: An amine derived by enzymatic decarboxylation of HISTIDINE. It is a powerful stimulant of gastric secretion, a constrictor of bronchial smooth muscle, a vasodilator, and also a centrally acting neurotransmitter.
- Inflammation: A pathological process characterized by injury or destruction of tissues caused by a variety of cytologic and chemical reactions. It is usually manifested by typical signs of pain, heat, redness, swelling, and loss of function.
- Skin Temperature: The TEMPERATURE at the outer surface of the body.
- Erythrocytes, Abnormal: Oxygen-carrying RED BLOOD CELLS in mammalian blood that are abnormal in structure or function.
- omega-N-Methylarginine: A competitive inhibitor of nitric oxide synthetase.
- Dose-Response Relationship, Drug: The relationship between the dose of an administered drug and the response of the organism to the drug.
- Partial Pressure: The pressure that would be exerted by one component of a mixture of gases if it were present alone in a container. (From McGraw-Hill Dictionary of Scientific and Technical Terms, 6th ed)
- Blood Volume: Volume of circulating BLOOD. It is the sum of the PLASMA VOLUME and ERYTHROCYTE VOLUME.
- Leukocyte Count: The number of WHITE BLOOD CELLS per unit volume in venous BLOOD. A differential leukocyte count measures the relative numbers of the different types of white cells.
- Endothelin-1: A 21-amino acid peptide produced in a variety of tissues including endothelial and vascular smooth-muscle cells, neurons and astrocytes in the central nervous system, and endometrial cells. It acts as a modulator of vasomotor tone, cell proliferation, and hormone production. (N Eng J Med 1995;333(6):356-63)
- NG-Nitroarginine Methyl Ester: A non-selective inhibitor of nitric oxide synthase. It has been used experimentally to induce hypertension.
- Colloids: Two-phase systems in which one is uniformly dispersed in another as particles small enough so they cannot be filtered or will not settle out. The dispersing or continuous phase or medium envelops the particles of the discontinuous phase. All three states of matter can form colloids among each other.
- Acepromazine: A phenothiazine that is used in the treatment of PSYCHOSES.
- Acute Disease: Disease having a short and relatively severe course.
- Rabbits: The species Oryctolagus cuniculus, in the family Leporidae, order LAGOMORPHA. Rabbits are born in burrows, furless, and with eyes and ears closed. In contrast with HARES, rabbits have 22 chromosome pairs.
- Blood-Brain Barrier: Specialized non-fenestrated tightly-joined ENDOTHELIAL CELLS with TIGHT JUNCTIONS that form a transport barrier for certain substances between the cerebral capillaries and the BRAIN tissue.
- Mouth Mucosa: Lining of the ORAL CAVITY, including mucosa on the GUMS; the PALATE; the LIP; the CHEEK; floor of the mouth; and other structures. The mucosa is generally a nonkeratinized stratified squamous EPITHELIUM covering muscle, bone, or glands but can show varying degree of keratinization at specific locations.
- Gastric Mucosa: Lining of the STOMACH, consisting of an inner EPITHELIUM, a middle LAMINA PROPRIA, and an outer MUSCULARIS MUCOSAE. The surface cells produce MUCUS that protects the stomach from attack by digestive acid and enzymes. When the epithelium invaginates into the LAMINA PROPRIA at various region of the stomach (CARDIA; GASTRIC FUNDUS; and PYLORUS), different tubular gastric glands are formed. These glands consist of cells that secrete mucus, enzymes, HYDROCHLORIC ACID, or hormones.
- Capillary Resistance: The vascular resistance to the flow of BLOOD through the CAPILLARIES portions of the peripheral vascular bed.
- Perfusion Imaging: The creation and display of functional images showing where the blood flow reaches by following the distribution of tracers injected into the blood stream.
- Hemoglobins: The oxygen-carrying proteins of ERYTHROCYTES. They are found in all vertebrates and some invertebrates. The number of globin subunits in the hemoglobin quaternary structure differs between species. Structures range from monomeric to a variety of multimeric arrangements.
- Albumins: Water-soluble proteins found in egg whites, blood, lymph, and other tissues and fluids. They coagulate upon heating.
- Annexin A1: Protein of the annexin family exhibiting lipid interaction and steroid-inducibility.
- Microtechnology: Manufacturing technology for making microscopic devices in the micrometer range (typically 1-100 micrometers), such as integrated circuits or MEMS. The process usually involves replication and parallel fabrication of hundreds or millions of identical structures using various thin film deposition techniques and carried out in environmentally-controlled clean rooms.
- Varicose Ulcer: Skin breakdown or ulceration caused by VARICOSE VEINS in which there is too much hydrostatic pressure in the superficial venous system of the leg. Venous hypertension leads to increased pressure in the capillary bed, transudation of fluid and proteins into the interstitial space, altering blood flow and supply of nutrients to the skin and subcutaneous tissues, and eventual ulceration.
- Shock, Hemorrhagic: Acute hemorrhage or excessive fluid loss resulting in HYPOVOLEMIA.
- Cerebral Veins: Veins draining the cerebrum.
- Muscle Tonus: The state of activity or tension of a muscle beyond that related to its physical properties, that is, its active resistance to stretch. In skeletal muscle, tonus is dependent upon efferent innervation. (Stedman, 25th ed)
- Brain: The part of CENTRAL NERVOUS SYSTEM that is contained within the skull (CRANIUM). Arising from the NEURAL TUBE, the embryonic brain is comprised of three major parts including PROSENCEPHALON (the forebrain); MESENCEPHALON (the midbrain); and RHOMBENCEPHALON (the hindbrain). The developed brain consists of CEREBRUM; CEREBELLUM; and other structures in the BRAIN STEM.
WELCOME BACK TO SCHOOL FOR THE WINTER/SPRING 2016!!
THE SMC STUDENT HEALTH SERVICES IS HERE TO ASSIST YOU!
Zika virus is spread through mosquito bites. The most common symptoms are rash, fever, joint pain and conjunctivitis.
For more information check out the CDC webpage: http://www.cdc.gov/zika/index.html
PLEASE VISIT http://www.cdc.gov/vaccines/schedules/hcp/adult.html
What is measles?
Measles is an infectious viral disease that occurs most often in the late winter and spring. It begins with a fever that lasts for a couple of days, followed by a cough, runny nose, and conjunctivitis (pink eye). A rash starts on the face and upper neck, spreads down the back and trunk, then extends to the arms and hands, as well as the legs and feet. After about 5 days, the rash fades in the same order in which it appeared.
How can I catch measles?
Measles is highly contagious. Infected people are usually contagious from about 4 days before their rash starts to 4 days afterwards. The measles virus resides in the mucus in the nose and throat of infected people. When they sneeze or cough, droplets spray into the air and the droplets remain active and contagious on infected surfaces for up to 2 hours.
How serious is the disease?
Measles itself is unpleasant, but the complications are dangerous. Six to 20 percent of the people who get the disease will get an ear infection, diarrhea, or even pneumonia. One out of 1000 people with measles will develop inflammation of the brain, and about one out of 1000 will die.
Why is vaccination necessary?
In the decade before the measles vaccination program began, an estimated 3–4 million persons in the United States were infected each year, of whom 400–500 died, 48,000 were hospitalized, and another 1,000 developed chronic disability from measles encephalitis. Widespread use of measles vaccine has led to a greater than 99% reduction in measles cases in the United States compared with the pre-vaccine era.
However, measles is still common in other countries. The virus is highly contagious and can spread rapidly in areas where vaccination is not widespread. It is estimated that in 2006 there were 242,000 measles deaths worldwide—that equals about 663 deaths every day or 27 deaths every hour. If vaccinations were stopped, measles cases would return to pre-vaccine levels and hundreds of people would die from measles-related illnesses.
Is measles still a problem in the United States?
We still see measles among visitors to the United States and among U.S. travelers returning from other countries. The measles viruses these travelers bring into our country sometimes cause outbreaks; however, because most people in the United States have been vaccinated, these outbreaks are usually small.
In the last decade, measles vaccination in the United States has decreased the number of cases to the lowest point ever reported. Widespread use of the measles vaccine has led to a greater than 99% reduction in measles compared with the decade before the measles vaccination program began, when an estimated 3–4 million persons in the United States were infected each year, 400–500 died, 48,000 were hospitalized, and another 1,000 developed chronic disability from measles encephalitis.
If the chance of the diseases is so low, why do I need the vaccine?
It is true that vaccination has enabled us to reduce measles and most other vaccine-preventable diseases to very low levels in the United States. However, measles is still very common—even epidemic—in other parts of the world. Visitors to our country and unvaccinated U.S. travelers returning from other countries can unknowingly bring (import) measles into the United States. Since the virus is highly contagious, such imported cases can quickly spread, causing outbreaks or epidemics among unvaccinated people and under-vaccinated communities.
To protect your children, yourself, and others in the community, it is important to be vaccinated against measles. You may think your chance of getting measles is small, but the disease still exists and can still infect anyone who is not protected.
What kind of vaccine is given to prevent measles?
The MMR vaccine prevents measles and two other viral diseases, mumps and rubella. These three vaccines are safe when given together. MMR is an attenuated (weakened) live virus vaccine. This means that after injection, the weakened viruses grow and cause a harmless infection in the vaccinated person with very few, if any, symptoms. The person's immune system fights the infection caused by these weakened viruses, and immunity develops that lasts throughout the person's life.
How effective is MMR vaccine?
More than 95% of the people who receive a single dose of MMR will develop immunity to all 3 viruses. A second vaccine dose gives immunity to almost all of those who did not respond to the first dose.
Why is MMR vaccine given after the first birthday?
Most infants born in the United States will receive passive protection against measles, mumps, and rubella in the form of antibodies from their mothers. These antibodies can destroy the vaccine virus if they are present when the vaccine is given and, thus, can cause the vaccine to be ineffective. By 12 months of age, almost all infants have lost this passive protection.
What is the best age to give the second dose of MMR vaccine?
The second dose of MMR can be given at any time, as long as the child is at least 12 months old and it has been at least 28 days since the first dose. However, the second dose is usually administered before the child begins kindergarten or first grade (4-5 years of age) or before entry to middle school (11-12 years of age). The age at which the second dose is required is generally mandated by state school entry requirements.
As an adult, do I need the MMR vaccine?
You do not need the MMR vaccine if you
- had blood tests that show you are immune to measles, mumps, and rubella
- were born before 1957
- already had two doses of MMR or one dose of MMR plus a second dose of measles vaccine
- already had one dose of MMR and are not at high risk of measles exposure
You should get the measles vaccine if you are not among the categories listed above, and
- are a college student, trade school student, or other student beyond high school
- work in a hospital or other medical facility
- travel internationally, or are a passenger on a cruise ship
- are a woman of childbearing age
Do people who received MMR in the 1960s need to have their dose repeated?
Not necessarily. People who have documentation of receiving LIVE measles vaccine in the 1960s do not need to be revaccinated. People who were vaccinated prior to 1968 with either inactivated (killed) measles vaccine or measles vaccine of unknown type should be revaccinated with at least one dose of live attenuated measles vaccine. This recommendation is intended to protect those who may have received killed measles vaccine, which was available from 1963 to 1967 and was not effective.
Why are people born before 1957 exempt from receiving MMR vaccine?
People born before 1957 lived through several years of epidemic measles before the first measles vaccine was licensed. As a result, these people are very likely to have had the measles disease. Surveys suggest that 95% to 98% of those born before 1957 are immune to measles. Note: The "1957 rule" applies only to measles and mumps—it does NOT apply to rubella.
Precautions and Possible Reactions
I am 2 months pregnant. Is it safe for me to have my 15-month-old child vaccinated with the MMR vaccine?
Yes. Measles, mumps, and rubella vaccine viruses are not transmitted from the vaccinated person, so MMR does not pose a risk to a pregnant household member.
I am breast feeding my 2-month-old baby. Is it safe for me to receive the MMR vaccine?
Yes. Breast feeding does not interfere with the response to MMR vaccine, and your baby will not be affected by the vaccine through your breast milk.
My 15-month-old child was exposed to chickenpox yesterday. Is it safe for him to receive the MMR vaccine today?
Yes. Disease exposure, including chickenpox, should not delay anyone from receiving the benefits of the MMR or any other vaccine.
What is the most common reaction following MMR vaccine?
Most people have no reaction. However, 5-10 percent of the people receiving the MMR vaccine experience a low-grade fever and a mild rash.
AFFORDABLE CARE ACT
HEALTH INSURANCE MARKETPLACE:
- Available 24/7 by phone at 1-800-318-2596
- Live chat available on website or sign up to receive information via text messages.
- Covered California
WEST NILE VIRUS: FAQS
What is West Nile Virus (WNV)?
West Nile Virus is transmitted to humans and pets through the bite of an infected mosquito. Mosquitoes become WNV carriers after they feed on infected birds. WNV is not spread by casual contact such as touching or kissing, or by breathing the same air as an infected person.
Symptoms can develop 3-14 days after being bitten by an infected mosquito. Symptoms vary and may include fever, body aches, nausea, vomiting, headache, sometimes swollen lymph glands, and a skin rash on the chest, stomach, and back. Symptoms may last for a few days. Some infected people may not develop any symptoms at all and can go undiagnosed. On rare occasions, severe cases may cause central nervous system symptoms.
Mild symptoms do not require any treatment. More severe symptoms require medical intervention, including hospitalization. To reduce your risk of infection:
- Avoid areas that might be mosquito-prone at dawn and dusk.
- Wear long sleeved shirts and long pants when outdoors.
- Use insect repellent (should contain DEET) before going outdoors.
- Empty all outside standing-water containers, e.g., neglected swimming pools, ponds, pet water dishes, pots, birdbaths, etc.
- Repair or replace window and door screens to keep mosquitoes out.
- Report dead birds and squirrels to the West Nile Virus and Dead Bird Hotline at 1-877-968-2473
- Get more information from the Centers for Disease Control and Prevention at www.cdc.gov/westnile and, for California, at www.westnile.ca.gov
EBOLA VIRUS INFORMATION
EVD (Ebola Virus Disease) is transmitted through direct contact with infected secretions such as the saliva, vomit, diarrhea, and blood of an infected individual. Transmission can also occur via contact with tears, sweat, or urine; by touching contaminated objects like needles; or by touching or feeding on contaminated meat products, infected animals, or their blood or other body fluids. It is not airborne and is not spread by droplets (e.g., sneezing or coughing). Only patients who are sick with the disease or show signs of being ill with the disease (symptomatic) are contagious and can transmit the virus to others via their secretions. The EVD incubation period is 2-21 days, with most individuals becoming symptomatic in 8-10 days. Symptoms include:
- Sudden fever, often as high as 103°F-105°F
- Intense weakness, sore throat and headache
- Profuse vomiting and diarrhea, usually occurring 1-2 days after the above-mentioned symptoms.
- More severe symptoms can occur within 24-48 hours.
- Bleeding from the nasal and/or oral cavities, with hemorrhagic skin blisters; renal failure and multisystem organ failure can progress aggressively within 3-5 days.
Supportive care and isolation are the only available treatment at this time. For more information, check out the Centers for Disease Control and Prevention website at http://www.cdc.gov/ebola. The World Health Organization's Ebola information can be found at http://www.who.int/csr/disease/ebola. To reduce the risk of transmission:
- Frequent hand washing and hand sanitizing.
- Avoid contact with blood and body fluids/secretions, especially those of infected individuals.
- Avoid contact with bats and non-human primates, with infected animals and their blood or other body fluids, and do not consume raw meat or meat products from infected animals.
- Do not handle items that may have come in contact with an infected individual's blood or body fluids/secretions.
- Do not touch the body of an individual who has died from EVD.
- Seek medical care immediately if you develop the symptoms described above, and limit your contact with others until you are seen by a medical doctor.
- Avoid nonessential travel to areas where EVD outbreaks are occurring. Get up-to-date information on the affected countries at http://wwwnc.cdc.gov/travel/notices
HEALTH SERVICES WILL BE CLOSED ON THESE DATES:
For the year 2016
- 01-18-2016 FOR MARTIN LUTHER KING DAY
- 02-12-2016 AND 02-15-2016 FOR PRESIDENTS DAY
- 04-13, 04-14, 04-15, 04-16, 04-17 FOR SPRING BREAK
- 05-30-2016 FOR MEMORIAL DAY
- 06-15, 06-16, 06-17 CLASSES NOT IN SESSION
- 07-04-2016 FOR INDEPENDENCE DAY
FOR SUMMER AND WINTER HOURS, PLEASE CALL THE HEALTH OFFICE AT 310-434-4262
FALL SESSION HOURS TENTATIVELY WILL BE AS FOLLOWS:
- MONDAY THROUGH THURSDAY 8AM TO 4PM.
- FRIDAY 8AM TO 1PM
- STUDENT HEALTH SERVICES CLOSED ON SATURDAY AND SUNDAY.
- FOR TB (MANTOUX) TEST HOURS CALL US AT 310-434-4262 DURING BUSINESS HOURS
FOR ANY QUESTIONS OR CONCERNS, PLEASE CALL US AT 310-434-4262 DURING BUSINESS HOURS
FOR ANY EMERGENCY, CALL CAMPUS POLICE AT 310-434-4300
Sedimentary Rocks are formed by the accumulation and subsequent consolidation of sediments into various types of rock. The key is the sediments. Sediments are unconsolidated material and have different origins. Ultimately, the origin of these sediments is the weathering, erosion and/or the chemical breakdown of other rocks. These "other" rocks could be igneous, metamorphic or even other sedimentary rocks. The type and size of the sediments and how they are formed will lead to the classification of the different sedimentary rocks listed below.
Biochemical sedimentary rocks are formed from organic processes in which living organisms produce the sediments. These living organisms can be snails and clams, whose discarded calcium carbonate shells can form limestone; they also include swamp plants, whose organic debris can produce coal if conditions are right. Although the origin of the sediments is organic, most of the chemicals that the living organisms used to produce their shells or body parts ultimately come from previous rocks. Thus these are sedimentary rocks, but with a biogenic intermediate, so to speak.
Clastic sedimentary rocks are sometimes considered the true sedimentary rocks, as they are composed directly of the sediments or fragments of other rocks. These fragments are called clasts, hence the term clastic. Clastic rocks are generally further defined by the size of their clasts, which ranges from nearly boulder size in the coarsest conglomerates and breccias to very fine grain size (<0.004 mm) in shale. Often the clasts of a clastic rock can be analyzed to determine the original source rocks, provided the clasts have not been moved too far from the source or been worked and reworked into sedimentary rock after sedimentary rock. For example, glacial till from a large continental glacier can be analyzed to determine the origin of the glacier's debris and, from that, the possible path of the glacier.
Evaporative or chemical sedimentary rocks are formed from the generally inorganic deposition of chemicals, usually through evaporation of a chemical rich solution. These chemicals generally have their origin from the chemical weathering of other rocks or other sediments. The sodium and chlorine in halite (table salt) comes from the sodium and chlorine in other rocks that have dissolved over the years. Unlike clastic sedimentary rocks, the direct origin of the chemicals is rarely easy to identify. The chemicals could come from magma deep in the crust of the Earth, rocks that dissolved in the ocean billions of years ago or from an outcrop in the hillside next to a playa lake. Sometimes the origin can be figured out and sometimes there is no way to know the originating source.
No matter what type of sedimentary rock, water is almost always a key component. The only real exception is desert wind-blown sediment. All other rock types involve water in some way, and generally in an important way. The biochemical rocks come from waterborne organisms. Clastic rocks are usually water transported, sorted, and deposited. Evaporative rocks are of course derived from chemicals dissolved in water. Where there is water, there are sedimentary rocks being formed. Sedimentary geologists must have an understanding of hydrology in order to understand their subjects.
Another common factor to sedimentary rocks is that they originate on the surface of the Earth, unlike most igneous and metamorphic rocks which originate in the interior of the Earth's crust. Geologists can actually see many sedimentary rocks form or at least see the sediments that will become sedimentary rocks prior to their lithification (literally "turning into stone").
The lithification of the sediments is usually accomplished by a cementing agent. Once the sediments are no longer loose sediments, but cemented together grains or crystals, they become a rock. What happens to sediments from this stage on is called diagenesis.
Diagenesis is important to study in sedimentary rocks. It includes the study of the compaction of the rock, physical conditions, chemical alterations, and biological interactions. Cementation is usually the first aspect of diagenesis, but cementation can be episodic or reversed, and recementation can occur. Compression of the rocks can change the banding and increase chemical alterations. Chemicals can leave or enter a rock through pore-space waters, and minerals can crystallize, dissolve, or become hydrated, oxidized, or chemically changed in other ways. Temperature increases can alter the characteristics of the rock as well. If too much heat and pressure occur during diagenesis, then the rock may wander into the regime of metamorphism. At times geologists will argue over the boundary between a highly diagenetically altered sedimentary rock and a low-grade metamorphic rock. Diagenesis is extremely important in understanding the history and the resulting characteristics of sedimentary rocks.
Geodes are another type of sedimentary rock. They form when minerals in aqueous solution crystallize on the interiors of cavities. This may occur in typical sedimentary formations, or as ground water penetrates igneous rock.
Introduction to Moment Magnitude
Cal Poly, San Luis Obispo
This activity was selected for the On the Cutting Edge Reviewed Teaching Collection
This activity has received positive reviews in a peer review process involving five review categories. The five categories included in the process are
- Scientific Accuracy
- Alignment of Learning Goals, Activities, and Assessments
- Pedagogic Effectiveness
- Robustness (usability and dependability of all components)
- Completeness of the Activity Sheet web page
For more information about the peer review process itself, please see http://serc.carleton.edu/NAGTWorkshops/review.html.
This page first made public: Oct 26, 2011
This activity reviews body-wave magnitude, and takes a closer look at its merits. Then moment magnitude is defined and contrasted with body-wave magnitude. The 2004 Parkfield earthquake is used to illustrate moment magnitude.
The figures are adapted from other sources.
Undergraduate, lower-division general education level course on earthquakes. No geology prerequisite.
Skills and concepts that students must have mastered
Fault types, elastic rebound theory, local magnitude, duration magnitude, body-wave magnitude
How the activity is situated in the course
Usually the activity is done in a Discussion meeting, which is a 1-hr meeting of half the class size at a time. Students work in small groups and answers by each group are summarized by the instructor and discussed towards the end of the meeting.
Content/concepts goals for this activity
Goal: to understand that different magnitude scales use different parts of the seismogram, and that only seismic moment measures the energy released and ties the earthquake to the faulting itself rather than to the seismogram.
Higher order thinking skills goals for this activity
- Synthesis of magnitude scales for comparison
- Understanding the effect of different parameter values on magnitude
Other skills goals for this activity
Working in groups.
Clear written answers to questions using appropriate terminology developed in the class.
Description of the activity/assignment
To prepare for this assignment students will have already done magnitude calculations, particularly body-wave magnitude where the concept of reading amplitude and P-wave period will have been practiced. Prior to this activity the students will have had a quiz on calculating body-wave magnitude given a seismogram.
The primary outcome of this activity is to understand the difference between what body-wave magnitude measures and what moment magnitude measures, and the merits of each.
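For reference, the contrast can be made concrete with the standard textbook formulas (not the activity's own worksheet): body-wave magnitude is read from the seismogram as mb = log10(A/T) + Q, while moment magnitude comes from the faulting itself through the seismic moment M0 = rigidity x fault area x average slip and the Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1), with M0 in newton-meters. A minimal Python sketch, with illustrative values chosen to land near the 2004 Parkfield earthquake's reported Mw of about 6.0:

    import math

    def body_wave_magnitude(amplitude_microns, period_s, q_correction):
        # mb = log10(A/T) + Q(distance, depth): A is ground amplitude in microns,
        # T is the P-wave period in seconds, Q comes from published tables.
        return math.log10(amplitude_microns / period_s) + q_correction

    def seismic_moment(rigidity_pa, fault_area_m2, average_slip_m):
        # M0 = mu * A * d, in newton-meters: a measure of the faulting itself.
        return rigidity_pa * fault_area_m2 * average_slip_m

    def moment_magnitude(m0_newton_meters):
        # Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1)
        return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

    # Assumed Parkfield-like values: 3e10 Pa rigidity, a 20 km x 10 km rupture,
    # and 0.2 m of average slip (illustrative, not the activity's data).
    m0 = seismic_moment(3.0e10, 20_000 * 10_000, 0.2)
    print(round(moment_magnitude(m0), 1))  # about 6.0

Note that nothing in the Mw computation references seismogram amplitude; that independence from the seismogram is exactly the contrast the activity asks students to articulate.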
Determining whether students have met the goals
Answers by each group are presented to the class and the instructor leads a discussion of the merits and/or inaccuracies of these answers. A "good" answer to each question is agreed upon, and each student then has a record of these for future review.
Noise-Induced Hearing Loss
Warning: Your Deafness May Be Your Fault
Hearing loss, usually associated with aging, can happen at any age. Men in particular are vulnerable. Findings from the American Academy of Otolaryngology-Head and Neck Surgery revealed that 13% of men between the ages of 20 and 69 suffer from noise-induced hearing loss (NIHL).
Because noise-induced hearing loss is related to volume and length of exposure time, there has been much focus on hearing loss from iPods and other MP3 players. Even some computer games exceed 110 decibels. Included among the hazards are noisy toys that children typically hold close to their susceptible ears.
According to statistics from the National Institute on Deafness and Other Communication Disorders, the number of Americans with some form of hearing disorder, over age 3, has doubled since 1971.
The American Hearing Research Foundation explains the risk of noise exposure:
“Habitual exposure to noise above 85 decibels will cause a gradual hearing loss in a significant number of individuals, and louder noises will accelerate this damage. For unprotected ears, the allowed exposure time decreases by one half for each 5 decibels increase in the average noise level. For instance, exposure is limited to eight hours per day at 90 decibels, four hours per day at 95 decibels, and two hours per day at 100 decibels. The highest permissible noise exposure for the unprotected ear is 115 decibels for 15 minutes per day. Any noise above 140 decibels is not permitted.”
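The quoted guideline encodes a simple halving rule that can be written as a formula. A minimal sketch in Python (the cutoff at 140 decibels mirrors the quote's "not permitted" ceiling; values between 115 and 140 decibels extrapolate the same rule):

    def permissible_hours(noise_db):
        # 8 hours are allowed at 90 dB; the allowed time halves for every
        # 5 dB above that, per the 5-dB exchange rate quoted above.
        if noise_db >= 140:
            return 0.0  # no unprotected exposure permitted above 140 dB
        return 8.0 / 2 ** ((noise_db - 90) / 5.0)

    for level in (90, 95, 100, 115):
        print(level, "dB:", permissible_hours(level), "hours")
    # 90 dB: 8.0, 95 dB: 4.0, 100 dB: 2.0, 115 dB: 0.25 (15 minutes)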
To give examples of the magnitude of noise exposure in America, a lawnmower produces 90 decibels and so do most shop tools. A loud rock concert exposes the delicate ear structures to 115 decibels and the noise of a jet engine is 140 decibels.
A newer study found it's not just noise that places a large population of men at risk for hearing problems. Spanish scientists discovered that it is the interaction between noise and chemicals, which together traumatize the ear structures, that can lead to sensory deficits and difficulty with hearing. The study, published in Anales del Sistema Sanitario de Navarra, found a prevalence of hearing disorders among young men working in welding factories.
Noise-induced hearing loss up in all age groups
Statistics from the American Speech-Language-Hearing Association indicate that hearing loss in the U.S. has doubled in the past 30 years. Protecting your hearing is important for quality of life, for individual safety, and for remaining socially engaged, and that's important for a long and happy life.
According to Ron Eavey, MD, director of the Vanderbilt Bill Wilkerson Center and the Guy M. Maness Professor in Otolaryngology, "You already are looking at 1 in 20 adolescents who has a notable hearing loss and 1 in 5 is showing signs that they are on the route to having hearing loss." Eavey conducted a study, published September 2010 in the Journal of the American Medical Association, citing a 5% jump in hearing loss among 12- to 19-year-olds in the past 15 years.
Thirty million Americans are exposed to hazardous noise levels on a regular basis. Noise is a known risk factor for loss of hearing from occupational exposure, airplanes, lawnmowers, concerts, fireworks, gunfire, and power tools — but the list doesn’t end there. For younger adults, iPods with cranked volume — inadvertent or otherwise — have also been implicated for hearing disorders.
Common causes of hearing loss in men
Juan Carlos Conte, lead author of the study and a researcher at the University of Zaragoza, explains the hazards to metal workers: “Workers exposed to noise in the presence of metalworking fluids exhibit a delay in hearing alteration in comparison with those exposed only to noise at the same intensity. A problem we detected with respect to welding fumes in the presence of noise was that the protection used is effective for reducing the intensity of noise, but not for reducing the effects of the chemical contaminant.”
We take a look at what can be done about noise-induced hearing loss…
The standards within the Reading Foundational Skills strand are directed toward fostering students’ understanding and working knowledge of concepts of print, the alphabetic principle, and other basic conventions of the English writing system. These foundational skills are not an end in and of themselves; rather, they are necessary and important components of an effective, comprehensive reading program designed to develop proficient readers with the capacity to comprehend texts across a range of types and disciplines.
Fluency is defined as being able to read orally with a reasonable rate of speed, with a high degree of accuracy, and with the proper expression (prosody). Fluency is one of several critical factors necessary for reading comprehension.
The word automatic is often used synonymously with fluency, but they are not the same. Automaticity refers only to accurate, speedy word recognition, not to reading with expression. Therefore, automaticity (or automatic word recognition) is necessary, but not sufficient, for fluency.
Fluency changes, depending on what readers are reading, their familiarity with the words, and the amount of their practice with reading text. Even very skilled readers may read in a slow, labored manner when reading texts with many unfamiliar words or topics. For example, readers who are usually fluent may not be able to read technical material fluently, such as a textbook about nuclear physics or an article in a medical journal. (Partnership for Reading, 2001)
The National Reading Panel concluded that repeated and monitored oral reading procedures improve reading fluency and overall reading achievement.
Kindergarteners develop fluency by reading decodable and other emergent reader text at their level.
First and second graders continue to grow in their ability to read on-level text orally with accuracy, appropriate rate, and expression on successive readings. They must be able to read those on-level texts with purpose and understanding. Children in these grades are expected to use context to confirm or self-correct word recognition and understanding.
Third grade students should be able to read on-level prose and poetry with accuracy, appropriate rate, and expression on successive readings. They, too, must demonstrate they can read on-level text with purpose and understanding and use context to confirm or self-correct word recognition and understanding.
Watch this Video: Pacing
Teacher illustrates pacing of reading text to students through clapping to sounds of nursery rhymes.
Associated Standards (kindergarten, first grade, and second grade)
CCSS RF.K.4: Read emergent-reader texts with purpose and understanding.
CCSS RF.1.4, CCSS RF.2.4 and CCSS RF.3.4: Read with sufficient accuracy and fluency to support comprehension.
- Fluency develops over time and with practice.
- The ability to decode accurately and quickly (automaticity) is necessary for fluency development.
- Students who are not fluent are unable to efficiently read texts across a wide range of difficulty and disciplines.
- The inability to read fluently is a serious barrier to understanding what is read (comprehension) in both narrative and expository texts of increasing complexity and sophistication.
- Lack of fluency leads to frustration and negatively impacts the quality and quantity of what is read. This, in turn, limits vocabulary and concept development.
Questions to Focus Instruction
- What roles do phonics and vocabulary play in the development of fluency?
- Can students who struggle with basic phonics (decoding) skills develop fluency?
- What strategies work best to support students in becoming fluent readers?
- Can fluency be improved by having children devote more time to independent reading?
Watch this Video
Teacher uses modeling, specific examples, pre-reading, and questions to show students how a fluent reader reads.
Essay Topic 1
Discuss the parallels between the characters and the political context of China. What do the characters represent about the evolving political organization of China? What role do Chinese politics play in the lives of the characters?
Essay Topic 2
Write an essay about honesty and dishonesty answering the following questions:
- How do characters lie to each other?
- How do characters lie to themselves?
- What effect does honesty have on their lives?
- Which holds characters and relationships together best: lies or honesty?
Essay Topic 3
Discuss the relevance of women versus men in "Waiting." Argue the statement Ha Jin is making about gender, using the following ideas:
- Emotional tendencies of men or women in relationships.
- Self awareness, idealism, control, impressionability, vulnerability.
- Selfishness and selflessness.
Essay Topic 4
Discuss how Lin's children become some of the most relevant characters in his life. Respond to...
Transcoding is the direct analog-to-analog or digital-to-digital conversion of one encoding to another, such as for movie data files (e.g., PAL, SECAM, NTSC), audio files (e.g., MP3, WAV), or character encoding (e.g., UTF-8, ISO/IEC 8859). This is usually done in cases where a target device (or workflow) does not support the format or has limited storage capacity that mandates a reduced file size, or to convert incompatible or obsolete data to a better-supported or modern format.
In the analog video world, transcoding can be performed just while files are being searched, as well as for presentation. For example, Cineon and DPX files have been widely used as a common format for digital cinema, but the data size of a two-hour movie is about 8 terabytes (TB). That large size can increase the cost and difficulty of handling movie files. However, transcoding into a JPEG2000 lossless format has better compression performance than other lossless coding technologies, and in many cases, JPEG2000 can compress images to half-size.
Transcoding is commonly a lossy process, introducing generation loss; however, transcoding can be lossless if the output is either losslessly compressed or uncompressed. Transcoding into a lossy format introduces varying degrees of generation loss, while transcoding from lossy to lossless or uncompressed is technically a lossless conversion because no information is lost; however, the process is irreversible and is more correctly known as destructive.
In contrast to the broader term conversion, the prefix "trans" emphasizes a conversion from a source format to a different destination format.
The most popular definition of transcoding refers to a two-step process in which the original data/file is decoded to an intermediate uncompressed format (e.g., PCM for audio or YUV for video), which is then encoded into the target format.
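As a concrete illustration, the widely used ffmpeg command-line tool performs exactly this decode-then-re-encode sequence internally. Here is a minimal sketch driving it from Python; the file names are hypothetical, and ffmpeg must be installed separately:

    import subprocess

    # Decode a lossless FLAC file and re-encode it as MP3. ffmpeg handles the
    # decode-to-PCM intermediate step and the target encoding internally.
    subprocess.run(
        ["ffmpeg", "-i", "input.flac",   # source file to decode
         "-codec:a", "libmp3lame",       # target encoder (MP3)
         "-b:a", "192k",                 # target bitrate
         "output.mp3"],
        check=True,
    )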
One may also re-encode data in the same format, for a number of reasons:
- If one wishes to edit data in a compressed format (for instance, perform image editing on a JPEG image), one will generally decode it, edit it, then re-encode it. This re-encoding causes digital generation loss; thus if one wishes to edit a file repeatedly, one should only decode it once, and make all edits on that copy, rather than repeatedly re-encoding it. Similarly, if encoding to a lossy format is required, it should be deferred until the data is finalised, e.g. after mastering.
- Lower bitrate
- Transrating is a process similar to transcoding in which files are coded to a lower bitrate without changing video formats; this can include sample rate conversion, but may use an identical sampling rate with higher compression. This allows one to fit given media into smaller storage space (for instance, fitting a DVD onto a Video CD) or to send it over a lower-bandwidth channel.
- Image scaling
- Changing the picture size of video is known as transsizing, and is used if the output resolution differs from the resolution of the media. On a powerful enough device, image scaling can be done on playback, but it can also be done by re-encoding, particularly as part of transrating (such as a downsampled image requiring a lower bitrate).
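Both operations can be sketched with the same ffmpeg tool in a single pass; the file names, target bitrate, and frame width below are illustrative assumptions rather than recommendations:

    import subprocess

    # Transrating plus transsizing in one pass: lower the video bitrate and
    # scale the picture down, leaving the audio stream untouched.
    subprocess.run(
        ["ffmpeg", "-i", "input.mp4",
         "-vf", "scale=640:-2",   # transsizing: 640 px wide, height proportional
         "-b:v", "800k",          # transrating: reduced video bitrate
         "-c:a", "copy",          # copy audio without re-encoding
         "output_small.mp4"],
        check=True,
    )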
One can also use formats with bitrate peeling, which allow one to lower the bitrate easily without re-encoding, but the quality is often lower than that of a re-encode. For example, as of 2008, Vorbis bitrate peeling produced quality inferior to re-encoding.
The key drawback of transcoding in lossy formats is decreased quality. Compression artifacts are cumulative, so transcoding causes a progressive loss of quality with each successive generation, known as digital generation loss. For this reason, transcoding is generally discouraged unless unavoidable.
For users who want to be able to re-encode audio into any format, and for digital audio editing, it is best to retain a master copy in a lossless format (such as FLAC, ALAC, TTA, or WavPack). These take around half the storage space of the original uncompressed PCM formats (such as WAV and AIFF), and usually have the added benefit of metadata options, which are either completely missing or very limited in PCM formats. Lossless formats can be transcoded to PCM formats, or directly from one lossless format to another, without any loss in quality. They can also be transcoded into a lossy format, but such lossy copies cannot then be transcoded into another format of any kind (PCM, lossless, or lossy) without a further loss of quality.
For image editing, users are advised to capture or save images in a raw or uncompressed format and then edit a copy of that master version, converting to lossy formats only if smaller files are needed for final distribution. As with audio, transcoding from a lossy format to another format of any type will result in a loss of quality.
For video editing and converting, images are normally compressed directly during the recording process because of the huge file sizes that would otherwise be created, and because the storage demands would be too cumbersome for the user. However, the amount of compression used at the recording stage can be highly variable, and depends on a number of factors, including the quality of the images being recorded (e.g., analog or digital, standard definition or high definition) and the type of equipment available to the user, which is often related to budget constraints, as the highest-quality digital video equipment and storage can be expensive. Effectively this means that any transcoding will involve some cumulative image loss, so the best practicable solution is to treat the original recording format as the master copy and to edit subsequent finished versions (often in another format and/or at a much smaller file size) from copies of that master.
Although transcoding can be found in many areas of content adaptation, it is commonly used in the area of mobile phone content adaptation. In this case, transcoding is a must, due to the diversity of mobile devices and their capabilities. This diversity requires an intermediate state of content adaptation in order to make sure that the source content will adequately function on the target device to which it is sent.
One of the most popular technologies in which transcoding is used is the Multimedia Messaging Service (MMS), which is the technology used to send or receive messages with media (image, sound, text and video) between mobile phones. For example, when a camera phone is used to take a digital picture, a high-quality image of usually at least 640x480 pixels is created. When sending the image to another phone, this high resolution image might be transcoded to a lower resolution image with fewer colors in order to better fit the target device's screen size and color limitations. This size and color reduction improves the user experience on the target device, and is sometimes the only way for content to be sent between different mobile devices.
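A rough sketch of that kind of image adaptation, using the Pillow imaging library for Python; the file names and the 320x240, 64-color targets are assumptions chosen for illustration:

    from PIL import Image  # Pillow library

    # Shrink a camera photo to fit a small handset screen and reduce its
    # color palette, roughly what an MMS gateway might do.
    image = Image.open("photo.jpg")
    image.thumbnail((320, 240))  # fit within 320x240, preserving aspect ratio
    image = image.convert("P", palette=Image.ADAPTIVE, colors=64)  # fewer colors
    image.save("photo_small.gif")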
Transcoding is extensively used by home theatre PC software to reduce the usage of disk space by video files. The most common operation in this application is the transcoding of MPEG-2 files to the MPEG-4 or H.264 format.
Real-time transcoding in a many-to-many way (any input format to any output format) is becoming a necessity to provide true search capability for any multimedia content on any mobile device, with over 500 million videos on the web and a plethora of mobile devices.
- "Advancements in Compression and Transcoding: 2008 and Beyond", Society of Motion Picture and Television Engineers (SMPTE), 2008, webpage: SMPTE-spm.
- Federal Standard 1037C
- List of Portable Multimedia Software
- Huifang Sun, Xuemin Chen, and Tihao Chiang, Digital Video Transcoding for Transmission and Storage, New York, CRC Press, 2005.
- Comparison of video converters
- Comparison of DVD ripper software
- Comparison of programming languages (basic instructions) § Type conversions |
Paleoclimatic Data and the Ice Ages
The past 1.8 million years make up the Quaternary period. The Quaternary is the most recent geologic period and the one in which Homo sapiens developed. It is characterized by the presence of a large amount of land ice, which has varied from the amount we have today to much larger amounts during periods of glacier advance. By looking at the jagged valleys in the graph of reconstructed temperatures during the Pleistocene, or glacial epoch, we can see repeated Ice Ages.
In this graph the warm peaks are interglacial periods, which include the Eemian and, at the far right, the present Holocene. From this graph we can see how rare times as warm as the present are. In addition, we can also see that a large global interglacial-glacial-interglacial climate oscillation has been recurring on approximately a 100,000-year periodicity for about the last 900,000 years, though each individual cycle has had its own idiosyncrasies in terms of the timing and magnitude of changes.
The temperatures in this representation of the last 800,000 years were not obtained directly, but are based on the fluctuations of global ice volume, scaled to what is known of conditions during the last glacial maximum. They are meant to represent estimates of the mean surface temperature of the Earth.
The temperatures represented in this graph are the outcome of paleoclimatic data. Analyses of geologic sediment and other layered materials extend what is known of surface temperature, precipitation, and other meteorological parameters many thousands and even millions of years into the past. This information is categorized in several types of "natural" recording systems.
Tree Rings: The width and structure of tree rings give some information on the climatic conditions when each ring was formed. Annual growth rings allow precise dating of climatic events going back 5,000 years. The amount of the carbon-14 isotope in wood indicates the intensity of solar activity during the tree's growth and the amount of carbon dioxide in the atmosphere. Narrow growth rings indicate periods of climatic stress, while the absence of growth rings indicates a warm tropical climate with no seasons.
Pollen/Spores/Insects: Found in lake sediments and bogs, these may date back more than 20,000 years and can be used to reconstruct the presence of forests and other vegetation, which are indicators of climatic conditions.
Ocean Sediment Cores: Ocean cores record the history of ocean ecology as it is laid down in the sediment. The sediment includes organic matter and the shells of various tiny sea creatures. Each of these sea creatures has its own particular niche in the ecology of the ocean. The relative abundance of some species is related to the sea surface temperature, so that relative abundance can be used to estimate sea surface temperature in the past.
Ice Cores: Careful analyses of the air in trapped bubbles and of trace elements in the ice give evidence of atmospheric and climatic changes, like those caused by volcanic eruptions. As in trees, the ice is made up of annual layers. In the upper parts of the ice sheets, these annual layers can be observed. The farther down you go in the ice, the further back in time you travel; the ice is compacted and stretched out as it flows away from the accumulation regions toward the regions where the ice sheet either melts or breaks off into icebergs near a marine body.
I used Paleoclimatic Data to look at the existence of the Eemian Period, Interstadials, Heinrich Events, the Younger Dryas, the Holocene, and the Little Ice Age.
130,000 Y.A. = Eemian Period
The last 130,000 years have produced the most detailed paleoclimatic data from the land, the oceans, and ice cores, and so this most recent climate oscillation has been the subject of most study.
The interglacial period that began around 130,000 years ago is called the "Eemian." It appears to have begun with rapid global warming (of uncertain duration) that took the Earth out of an extreme glacial phase into conditions warmer than today. Regional temperatures were sometimes 1 to 2 degrees higher than those of the Holocene interglacial (the one we are still in). There are indications of climate instability during the Eemian; however, they are controversial. In addition, there is less evidence that the temperature changes were globally synchronous, so in terms of global temperature change, conditions during the Eemian may not have been much different from today.
Paleoclimatic data does show us that there is evidence of a single sudden cool event during the Eemian. Jonathan Adams states that "Evidence for a single sudden cool event during the Eemian is clearly present in a more solidly dated pollen record from a lake in central Europe studied by Field et al. (1994), from loess sedimentology in central China (Zhisheng & Porter 1997), and from certain ocean sediment records in the northern Atlantic (ODP site 658). These three sources of evidence each suggest a single cold and dry event (causing a several-degree decline in Atlantic surface temperature, and on land opening up the west European forests to give a mixture of steppe and trees) near the middle of the Eemian, about 121,000 y.a. It was followed by a return to warm conditions similar to the present." Paleoclimatic data recorded from ice cores, ocean sediment cores, and pollen records across Eurasia also show that the Eemian interglacial seems to have ended in a sudden cooling event around 110,000 years ago.
Following the Eemian there were numerous sudden changes and short-lived warm and cold events. The extremes of these are the warm Interstadials and the cold Heinrich Events.
12,900-11,500 Y.A. = The Younger Dryas
The Younger Dryas was a return to cold conditions from 12,900 to 11,500 years ago. Paleoclimatic data show this through pollen records indicating that forests which had recently developed in Europe, during the aborted warming following the ice ages, were suddenly replaced again by arctic shrubs, herbs, and grasses; Greenland ice cores indicate a local cooling of about 6 degrees C during this event.
10,000 Y.A. = Holocene
Following the Younger Dryas is the present warm epoch, known geologically as the Holocene Interglacial.
The Holocene started suddenly around 11,500 years ago. Greenland ice cores record a striking sudden cooling event about 8,200 years ago. This event brought cool, dry conditions that lasted about 200 years, before a rapid return to conditions warmer and moister than today. The cooling also shows up in records from North Africa across southern Asia as a phase of arid conditions due to the failure of the summer monsoon rains. Furthermore, the cold and aridity appear to have hit northernmost South America, eastern North America, and parts of northwestern Europe.
1250-1850 AD = The Little Ice Age
Later in the Holocene there was a "Medieval Warm Period," which was followed by a longer span of considerably colder climates, often termed the "Little Ice Age," during which the global mean temperature may have been 0.5-1.0 degrees C colder than today.
Paleoclimatic evidence shows that Alpine glaciers moved into lower elevations, rivers that rarely freeze today were often completely ice-covered in the wintertime, and precipitation patterns also changed in many regions.
conflict: a disagreement or divergence of interests which may result in one party taking action against another. Conflict can occur at the inter-personal, group or societal level and may involve collective or individual action. It may arise out of simple dislike of another person or out of opposed collective interests. Marxists argue that conflict is endemic to capitalist society. In their view capitalism has created two classes of people, the proletariat (i.e. paid employees) and the bourgeoisie (i.e. entrepreneurs and their supporters), whose interests are diametrically opposed. This opposition of interests in the employment sphere leads to various forms of conflict including sabotage and STRIKES. In Marx's view this conflict would lead to the overthrow of capitalism. That this has not happened in most advanced industrial societies has been attributed to various factors, including rising living standards and the institutionalization of conflict. This is the development of DISPUTES PROCEDURES and mechanisms for COLLECTIVE BARGAINING which have provided TRADE UNIONS and managers with the means to resolve many manifestations of conflict. Putting a grievance into procedure (i.e. passing it to a joint management-union committee for discussion and resolution) tends to take the heat out of an issue, thereby lowering overt conflict. Although industrial conflict has not led to revolution in countries such as the UK, radical observers argue that there is nevertheless still a fundamental conflict of interests at work and that this is manifested in less overt or more indirect forms of conflict, such as ABSENTEEISM and LABOUR TURNOVER, which do not necessarily appear to be explicitly directed against the other party.
As against the Marxist view of two diametrically opposed interests in society PLURALISM suggests that there is a plurality of interests, possibly organized in interest groups, in any society or organization. Although on occasions these interests may conflict, pluralists would dispute that such conflicts are an expression of a fundamental cleavage. Instead conflict tends to arise over specific distributional issues, such as the size of an annual pay increase, and the composition of interest groups varies according to the issue at stake. Indeed some pluralists would go further and assert that there is a basic identity of interests underneath these specific differences. Pluralists argue that conflict can be beneficial in so far as its expression (‘giving voice’) can both reduce the intensity of conflict and provide the impetus to design procedures for resolving differences.
Pluralism has been an influential approach in political science, in the study of INDUSTRIAL RELATIONS, and in ORGANIZATIONAL ANALYSIS. In industrial relations pluralists argue that TRADE UNIONS are the expression of distinct employee interests and that recognition of them by managers enables the creation of mechanisms for conflict resolution and hence for managers to regain or maintain control of work. Pluralism has been a less explicit approach in the study of organizations but has nevertheless informed much of the recent work in this area.
For instance, writers have shown that whilst all in the organization may subscribe to the organization's broad goals, various departments may acquire specific and divergent interests relating to their contribution to these goals. These interests are expressed in the decision-making process, making it as much a political as a rational or technical process. Although an influential approach, pluralism has been criticized for its assumption that the power of interest groups is more or less equal and that there are no fundamental structural bases to power differences in organizations and society. See INDUSTRIAL DISPUTE, INDUSTRIAL ACTION, MANAGEMENT STYLE.
Curriculum Focal Points
NCTM's Curriculum Focal Points are the most important mathematical topics for each grade level. They comprise related ideas, concepts, skills, and procedures that form the foundation for understanding and lasting learning.
Principles and Standards for School Mathematics
The mathematical understanding, knowledge, and skills that students should acquire from pre-K through grade 12.
NCTM Presidents' Messages on Curriculum
Should High-Stakes Tests Drive the Curriculum? A Perspective from Michigan (from Mathematics Education Dialogues)
High-Stakes Tests: Should They Drive Mathematics Curriculum and Instruction? (from Mathematics Education Dialogues)
Standards-Based Teaching and Test Preparation Are Not Mutually Exclusive (from NCTM News Bulletin)
Technology: The Unused Possibilities (from NCTM News Bulletin)
The displays of American glass show the history of its production in the American colonies and the United States from the 18th century until about 1920. The objects range from a few very rare pieces hand-blown in the earliest factories to mass-produced canning jars and bottles made in the second half of the 19th century, and art glass and cut glass pieces made in the late 19th and early 20th centuries.
Glassmaking was America’s first industry. A glass workshop was established at Jamestown, Virginia, in 1608. Severe weather and unfavorable economic factors soon forced it to close, however, and until the early 1700s, the colonists imported glass windows and table glass, as well as bottles, mostly from England.
In 1739, Caspar Wistar founded the Colonies’ first successful glass company, which was located in southern New Jersey. A rare wine bottle made by the Wistar factory is on view in the gallery [86.4.196].
Glassmaking in America increased during every decade of the 19th century. In the 1820s, the American invention of pressing made glass tableware, which had previously been purchased solely by the most prosperous citizens, affordable to middle-class households. The gallery includes a hand-operated press from the late 19th century, as well as examples of pressed glass from the earliest period until about 1900.
In the mid-1800s, as American industry and prosperity increased, a taste developed for ornate styles and complex decorations. The gallery features fine examples of 19th- and early 20th-century blown and decorated pieces, Art Glass, glass for lighting (candlesticks, lamps, and light bulbs), and cut and engraved glasses.
Many of the works on display highlight the wares of great American glass manufacturers, including the New England Glass Company (East Cambridge, Massachusetts), the Libbey Glass Company (Toledo, Ohio), and the Bakewell company (Pittsburgh, Pennsylvania). |
Consider the following reaction: 2H2S(g)+3O2(g) yields 2SO2(g) +2H2O(g). If O2 was the...
High School Teacher
To solve a problem like this, look at the coefficients of the balanced chemical equation. In this case, they tell you that for every two moles of H2S you react, you will produce two moles of H2O. That assumes a 100% yield.
In the given problem, since you started with 8.3 moles of H2S, you would expect 8.3 moles of H2O at 100% yield.
In your case, you produced 137.1 g of water. Convert this to moles by dividing 137.1 by the molar mass of water (18.016 g/mol):
137.1 g / 18.016 g/mol = 7.61 moles of water.
To find % yield, divide the actual yield by the theoretical yield and multiply by 100.
% yield = 7.61/8.3 * 100 = 91.69%
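The same calculation, written as a short Python sketch for checking the arithmetic:

    def percent_yield(actual_grams, molar_mass, theoretical_moles):
        # actual moles = mass / molar mass; % yield = actual / theoretical * 100
        actual_moles = actual_grams / molar_mass
        return actual_moles / theoretical_moles * 100

    # 137.1 g of water at 18.016 g/mol, against 8.3 mol theoretical:
    print(round(percent_yield(137.1, 18.016, 8.3), 2))  # 91.69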
Posted by ndnordic on February 4, 2011 at 3:28 AM (Answer #1)
How Chandra Does What It Does
NASA: We have booster ignition and liftoff of Columbia, reaching new heights for women and X-ray Astronomy.
Martin Elvis: The main thing Chandra does is take these superb, sharp images.
Cady Coleman: Nothing as beautiful as Chandra trailing off on its way to work
Narrator: When the Chandra X-ray Observatory was launched aboard the Space Shuttle Columbia on July 23rd, 1999, it began a new era of X-ray Astronomy.
NASA: 5, 4, 3, we have a go for engine start, zero, we have booster ignition and liftoff of Columbia, reaching new heights for women and X-ray Astronomy.
Narrator: But what do X-rays from space tell us, and why do we build telescopes to study them? Most people think of light as what we can see with our human eyes. But in truth our eyes can detect just a small fraction of the radiation emitted by objects in space. Martin Elvis, a senior scientist at the Chandra X-ray Center in Cambridge, Massachusetts, discusses why astronomers want to study the sky in X-ray light.
Martin Elvis: So what's X-ray astronomy? You look up at the night sky and you see stars all over the place, stars like our own sun, big ones, small ones, old ones, young ones, but basically all stars shining with fusion power, nuclear fusion. We only see stars as dominating the night sky because we look in this really narrow band of the whole electromagnetic spectrum called the optical range. If you look well outside this range in either long wavelengths or short wavelengths, you don't see the same stars at all, so we tend to talk about X-ray sources. And they're really not stars. The sky in X-rays is not dominated by stars. Instead, it's dominated by something entirely different, that's powered not by nuclear fusion but by another process which is many times more efficient at converting mass into radiation than nuclear fusion is. Since nuclear fusion is the most efficient process we have around us on earth, that's pretty impressive. So X-ray astronomy in one sense is just trying to answer the question, "What is this stuff? When we look at the sky in X-rays, what do we see?" And it's something entirely new, something you would never have guessed just by looking up at the night sky.
Narrator: One of the most exciting things about X-ray astronomy is that it's really in its infancy. The first X-ray satellites weren't launched until the 1960's. Why the late start? The answer is because X-rays from space are absorbed by the Earth's atmosphere. Therefore, X-ray astronomy had to wait for the Space Age and the development of rockets to get its detectors into orbit.
Narrator: And what does Chandra see in the X-ray universe? When matter is heated to really high temperatures, as in millions of degrees, X-rays are produced. Here's more from Martin:
Martin Elvis: The energy of each photon of light in X-rays is about a thousand times that in optical light, visible light. So it also follows that the temperatures that we deal with in X-rays are about a thousand times hotter, so you're looking for processes that are a thousand times more energetic.
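As a quick back-of-the-envelope check of that factor of a thousand, one can compare the temperatures at which thermal emission corresponds to optical versus X-ray photon energies using the rough relation E ~ kT; the 2 eV and 2 keV photon energies in this sketch are assumed representative values:

    BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

    def characteristic_temperature(photon_energy_ev):
        # Rough temperature whose thermal radiation is dominated by photons
        # of this energy, from E ~ kT (an order-of-magnitude estimate).
        return photon_energy_ev / BOLTZMANN_EV_PER_K

    optical = characteristic_temperature(2.0)     # ~2 eV visible-light photon
    xray = characteristic_temperature(2000.0)     # ~2 keV X-ray photon
    print(f"{optical:.0f} K vs {xray:.0f} K (ratio {xray / optical:.0f})")
    # roughly 23,000 K versus 23,000,000 K: the factor of a thousand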
Narrator: So in order to understand what is happening in the hot, turbulent regions of space, we need to have X-ray telescopes. Chandra is using its powerful telescope to study such things as material around black holes, the debris from exploded stars, and hot gas that pervades the space between the galaxies. These are phenomena and objects that would simply be invisible in other wavelengths.
Narrator: For more information about the Chandra X-ray Observatory, visit our website at chandra.harvard.edu.
This was a production of the Chandra X-ray Center, Cambridge, Massachusetts. NASA's Marshall Space Flight Center manages the Chandra program. The Smithsonian Astrophysical Observatory controls science and flight operations from the CXC. |
In mechanical equipment, the motor produces a rotational force called torque, which must be transferred to the receiving mechanism. This transfer is done primarily through splined shafts: the splines of one shaft interlock with those of a mating part so that one turns the other, passing the torque along.
Splined shafts are the ideal transferring mechanism for high torque. The load is distributed equally across the length of the spline, allowing for a longer service life for the splined shaft.
Creating splined shafts is not easy. The following are the design pitfalls in manufacturing splined shafts.
a. Material. For optimum service life, splined shafts must be made of high-grade, durable materials. The metal also needs to be properly finished and properly cured, as any imperfection can result in stress cracks during operation; in the long run, the splined shaft would break due to metal fatigue.
b. Measurements. Splined shafts interlock to turn each other about the axis of motion, and this movement transfers the torque along the splined shaft. The splines, the gear, and the shaft therefore have to be measured and cut precisely. The female and male splines also need to align and interlock properly to provide maximum transfer of torque from motor to hub.
c. Load Capacity. One of the most overlooked aspects of splined shafts is load capacity, the force the splined shaft is rated to carry about its axis. If the load is too great for the splined shafts to turn, the consequences can be disastrous. If the load is too light, torque is wasted, since the oversized splined shafts become dead weight rather than an efficient carrier.
Splined shaft basics are simple. Understanding how splined shafts work is key to the proper use, operation, and maintenance of mechanical equipment that uses them.
Cold War - 2
Once upon a time a battle between civilizations erupted. After the brutal war of World War II, the United States and the Soviet Union became two fierce rivals. Their rivalry became known as the Cold War: an ongoing state of military and political tension between the Western world, led by the US, and the communist world, led by the USSR. It lasted from 1945 to 1991. During the war, neither side took direct military action against the other, since both possessed nuclear weapons whose use would have led to mutual destruction. Tensions between the US and USSR became evident when Franklin Roosevelt, Winston Churchill, and Josef Stalin met to coordinate a strategy against Nazi Germany. Roosevelt and Churchill agreed that Poland, invaded by Nazi Germany, should have the right to choose its own government, while Stalin was determined to control the communist state. As the war continued, both sides sought global supremacy, and the Cold War became the most politically heated conflict known to history.
During the Cold War, the major goals of the United States included the restoration of powers and the containment of the Soviet Union. The first goal was to restore and rebuild the countries of Europe threatened by communism after World War II; the Western world's major aim was to stop communism from spreading. The second goal, containment, was to reduce the ability of the USSR to develop power and to restrict its reliance on foreign communist parties. The US tried to accomplish this goal by fostering tensions among communist governments.
Goals of the Soviet Union included expanding communism and eliminating the threat of world war. By expanding communism, the USSR wanted to spread communist control throughout all European countries. The Soviet Union feared nuclear war with the United States and therefore tried to reduce weapons of mass destruction....
Directions: Complete the Formatting Features Study Guide as you read the directions below and try out some of the Word features in this practice document.
1. Choose the View tab, Print Layout in the Document Views grouping. This
allows you to see the formatting the way the document will print.
2. The horizontal alignment tools are located in the Home tab, Paragraph grouping – Align Text Left, Center, Align Text Right, and Justify. Click the mouse anywhere in the paragraph below. Click on Align Text Left, then Center, then Align Text Right, and then Justify to see the effect of each type of alignment.
Text alignment refers to how the text is positioned on the page. The text alignment buttons are in the
Paragraph grouping on the Home tab. To change the alignment of text you only need to make sure your
insertion point (the line that indicates where you are typing) is somewhere in the paragraph that you want
to change, and then click on the appropriate text alignment button.
3. When a page is vertically aligned—or vertically centered—there is the same
amount of white space at the top of the page as there is at the bottom. To
align text vertically on a page, open the Page Layout tab and click the
launcher button in the lower right of the Page Setup grouping. This will
open the Page Setup dialog box. Select the Layout tab. Vertical alignment
settings are located in the Page section. Click the drop-down arrow and
select Center to change the layout. View the examples below to identify
the vertical alignment options.
(Examples shown: Align Top, Align Center, Align Bottom)
4. A font is a set of characters of the same
typeface or design. Examples of fonts are
Calibri, Arial, and Times New Roman. There
are many fonts available for use to give a
document a certain design and image. For
greater impact, font sizes can be changed. Font size is measured in points.
When working with fonts, it is wise to limit the number of fonts within a
document to two or three. The font tools are located on the Home tab,
Font grouping. Select the text below by triple clicking in the line. Change
the font of the text to Arial, point size 10.
Change the font design and size of this text.
5. Select the word paragraph in the text below by double-clicking on the
word. Click on Bold and Underline in the Font grouping. Change the font
to Arial, 16 point.
This is a practice paragraph so that I can work on formatting. This is a practice paragraph so that I can work on
formatting. This is a practice paragraph so that I can work on formatting. This is a practice paragraph so that I
can work on formatting. This is a practice paragraph so that I can work on formatting.
6. To create a page border, choose the Page Layout tab. Under the Page
Background grouping, click on Page Borders. Choose the Page Borders
tab. Make sure Box is selected in the “Setting” section. In the “Style”
section, choose a style. Notice the other options in the box, including the
Apply to: box. Click OK.
STOP! Complete Assignment #1 – Invitation
7. Page orientation refers to the way text appears on the page. Portrait
orientation means the height of the page is greater than the width.
Landscape orientation means the width is greater than the height. To
change the orientation of a page, open the Page Layout tab and select the
Orientation button in the Page Setup grouping. Choose the desired page orientation.
8. Position the insertion point anywhere in the paragraph below. Click on the
Line Spacing button ( ) in the Paragraph grouping on the Home tab.
The default line spacing in Word 2007 is 1.15. Word has also set the
default setting to Add Space After Paragraph. Change the line spacing to
1.5. Watch the spacing change. Now change the line spacing to 2 and
observe the change.
Line spacing refers to the amount of space between lines of text. By default, Word spaces text at 1.15.
Single-spacing has no extra space between each line. To make text more readable, you can add space
between lines of text. The 1.5 option adds half a line of space between lines. Double-spaced text has a
full blank line between each line of text.
STOP! Complete Assignment #2 – Short Report 1
9. Indent markers are used to set temporary margins in a document. An indent will keep text at the temporary margin until the indent is reset or removed. The indent marker is located on the ruler bar and has several parts (shown on the ruler): First Line Indent, Hanging Indent, and Left Indent.
Left Indent: the left edge of a paragraph is moved in from the left margin.
Hanging Indent: subsequent lines of a paragraph are indented more than the first line (used in a Works Cited page).
First Line Indent: the first line of a paragraph is indented more than the subsequent lines.
Set the desired indent by pointing to the correct tool on the indent marker on
the ruler bar and sliding it to the desired location. Change the paragraph
below to a hanging indent by clicking inside the paragraph and sliding the
hanging indent marker to the 1-inch mark on the ruler.
Dragging an indent marker to a new location on the ruler is one way to change the indentation of a paragraph. Another way is to use the Increase Indent and Decrease Indent buttons in the Paragraph grouping on the Home tab.
10.The Format Painter tool allows all the format settings (font design, size,
color, style, etc.) to be copied and applied to other text that should be
formatted the same way. This saves time by eliminating the need to change
the format settings multiple times on each piece of text. The Format Painter
button ( ) is found on the Home tab in the Clipboard grouping. To use
the Format Painter to apply format settings, click inside the text previously
formatted as desired. Click once on the Format Painter tool and “paint” over
the unformatted text. All format settings will now apply to the text. To use
the Format Painter tool multiple times, double-click on the tool. It will remain
“on” and allow multiple pieces of text to be re-formatted using the format
settings. Click again on the Format Painter tool to turn it “off”.
Use the Format Painter tool to copy the format settings of the first line of text
below to the second line of text.
Using the Format Painter can save time!
Format Painter can spice up the appearance of text!
11.To create a custom bullet, select the lines below. You can turn on the default bullet or number using the Bullets button in the Paragraph grouping.
List of things to do.
List of things to do.
List of things to do.
List of things to do.
(Icons shown: Bullets button, Numbering button)
To change the bullet style, click on the arrow to the right of the Bullet
button. Select Define New Bullet. Click the Symbol button and change the
bullet to another style of your choosing. Click OK to exit the Symbols Dialog
Box. Click OK again to exit the Define New Bullets Dialog Box.
12.To find synonyms (words that mean the same) for words in a document,
click inside the word and then select the Review tab. Choose the
Thesaurus button in the Proofing grouping. Synonyms for the word will
appear in a task pane on the right side of the document window. Hover
over the word you would like to insert and click the arrow. Choose Insert
to replace the original word with the synonym. Use these steps to replace
the words below with synonyms:
13.Now spel chek this document. If the Spelling & Grammar button is not
visible on the Quick Access toolbar, click the arrow to the right of the
toolbar and select Spelling & Grammar to add it. With this tool added to
the toolbar, click on it to check the document for spelling and grammar
errors. You can also access the Spelling & Grammar tool under the Review
tab, Proofing grouping, or by pressing F7.
14.Text can be emphasized by highlighting, or adding a background color
behind the text. This should not be confused with changing text color,
which actually changes the color of the font. To highlight text, select the
text to be emphasized and click the Text Highlight Color button in the Font
grouping on the Home tab. Simply clicking the button will change the
highlight to the color preselected. If you wish to change the color, click the
arrow to the right of the Text Highlight Color button and choose another
color. Select all instances of Text Highlight Color in this paragraph and add
a blue highlight to the text.
15.Page numbers can be inserted to help keep documents in order. To insert
page numbers, open the Insert tab and click the Page Number arrow in
the Header & Footer grouping. This menu contains a variety of options for
placement and design of page numbers. In the “Format Page Number”
selection, the format of the numbers used can be changed. Once the page
number placement has been selected, the page number will appear on
every page of the document.
Complete Assignment #3 – Short Report 2
Complete Assignment #4 – Interview
16.Formatting a list with numbers can help organize the ideas in a document.
Numbers illustrate sequences and priorities. To format a list with numbers,
use the Numbering button ( ) in the Paragraph grouping on the
Home tab. You may turn the numbering on before typing the numbered
list to have Word automatically number as you type, or you may select the
text you wish to number and click on the Numbering button to
automatically number the items. The numbering style can be changed by
clicking on the arrow to the right of the Numbering button. Select the four
items below and use the right arrow next to the Numbering button. In the
“Numbering Library”, choose the Roman Numerals style to number the
17.Select the words below. In the Paragraph grouping, click the arrow next
to the Borders button ( ). Choose Borders and Shading to open the
Borders and Shading dialog box. Choose an option in the “Setting” section;
select a border under “Style”. Notice the choices in the Apply to: box.
Leave paragraph selected in the Apply to: box. Click OK. The border
stretches from margin to margin. The border size can be changed by
dragging the markers on the ruler or changing the margins in Page Layout
tab, Page Setup grouping.
You’ll be an expert soon!
18.Click inside the bordered box above. To shade inside a border, open the
Borders and Shading dialog box by choosing Borders and Shading from the
Borders button. Select the Shading tab. Choose a color in the “Fill”
section and check to make sure “Paragraph” is selected in the Apply to:
section. Click OK.
19.Formatting text in columns can make it easier to read. The Columns button is located under the Page Layout tab in the Page Setup grouping. This allows
you to easily format text into columns. You may also define other column
options in the Columns dialog box under “More Columns” in the selection
list. Select the text below and click the Columns button. Choose “More
Columns” from the selection list. Choose “Two” in the “Presets” section
and put a check in the “Line between” box. Click OK to see the changes.
A newsletter is an example of a document that is often formatted in columns. When
columns are inserted, they will be equal width unless you specify otherwise. You can
use a line to separate columns, and you may convert existing text into columns or create the columns before you key the text.
STOP! Complete Assignment #5 – Race Track
20.Drag your mouse over the schedule below to select it. Select the launcher
in the Paragraph grouping, then click the Tabs button in the lower left
corner to open the Tabs dialog box. Click on the Clear All button. Set a tab
at 2.5 in the Tab stop position, choose Left in the Alignment section, and
click Set. Set a tab at 5.0 in the Tab stop position, choose Right in the
Alignment section, and click on Set. Click OK.
Wake Up 6:00
Eat Breakfast 7:00
Tabs can also be set by clicking with the mouse on the ruler. To change the
type of tab (left, right, decimal, center), click on the tab indicator box on the
far left of the ruler. Choose the desired tab type and click on the ruler bar at
the location you wish to set the tab.
(Tab indicator icons: Left Tab, Right Tab, Center Tab, Decimal Tab)
21.Leaders can also be set in the Tabs dialog box. A leader is a line of periods
or dashes that leads the eye to the next entry. To insert leaders with tabs,
simply select the style of leader desired in the “Leader” section of the Tabs
dialog box. Select the Table of Contents below. Open the Tabs dialog box
by clicking on the launcher in the Paragraph grouping. Click the Tabs
button. First, click on the Clear All button then type 5.5 in the tab stop
position box. Choose Right in the Alignment section, and choose style 2 in
the Leader section. Click OK to close the dialog box.
Chapter 1 2
Chapter 2 8
Chapter 3 14
22.You can convert text you have already typed to a table by opening the
Insert tab and clicking on the Table button in the Tables grouping. Select
the Table of Contents you created in Step 21. (Note: It is important to
select only the lines that should be included in the table. Selecting lines
above or below the desired table text will result in errors.) Choose the
Insert tab and click on the Table button. Choose “Convert Text to Table”
from the selection list. The Convert Text to Table dialog box opens. In the
“Table size” section, key “2” as the Number of columns; in the “AutoFit
behavior” section, choose “AutoFit to content”; and in the “Separate text
at” section, click on “Tabs”. Click OK and view the changes. Now click the
Undo button on the Quick Access toolbar to reverse the changes and show
the Table of Contents with leaders again.
Complete Assignment #6 – State Statistics
Complete Assignment #7 – Shipping
Task-based language learning
Task-based language teaching (TBLT), also known as task-based instruction (TBI), focuses on the use of authentic language and on asking students to do meaningful tasks using the target language. Such tasks can include visiting a doctor, conducting an interview, or calling customer service for help. Assessment is primarily based on task outcome (in other words, the appropriate completion of real-world tasks) rather than on accuracy of prescribed language forms. This makes TBLT especially popular for developing target-language fluency and student confidence. As such, TBLT can be considered a branch of communicative language teaching (CLT).
TBLT was popularized by N. Prabhu while working in Bangalore, India. Prabhu noticed that his students could learn language just as easily with a non-linguistic problem as when they were concentrating on linguistic questions. Major scholars who have done research in this area include Teresa P. Pica, Martin East and Michael Long.
Task-based language learning has its origins in communicative language teaching, and is a subcategory of it. Educators adopted task-based language learning for a variety of reasons. Some moved to a task-based syllabus in an attempt to make classroom language truly communicative, rather than the pseudo-communication that results from classroom activities with no direct connection to real-life situations. Others, like Prabhu in the Bangalore Project, thought that tasks were a way of tapping into learners' natural mechanisms for second-language acquisition, and weren't concerned with real-life communication per se.
Definition of a task
- A task involves a primary focus on (pragmatic) meaning.
- A task has some kind of ‘gap’ (Prabhu identified the three main types as information gap, reasoning gap, and opinion gap).
- The participants choose the linguistic resources needed to complete the task.
- A task has a clearly defined, non-linguistic outcome.
The core of the lesson or project is, as the name suggests, the task. Teachers and curriculum developers should bear in mind that any attention to form, i.e., grammar or vocabulary, increases the likelihood that learners may be distracted from the task itself and become preoccupied with detecting and correcting errors and/or looking up language in dictionaries and grammar references. Although there may be several effective frameworks for creating a task-based learning lesson, here is a basic outline:
In the pre-task, the teacher will present what will be expected of the students in the task phase. Additionally, in the "weak" form of TBLL, the teacher may prime the students with key vocabulary or grammatical constructs, although this can mean that the activity is, in effect, more similar to the more traditional present-practice-produce (PPP) paradigm. In "strong" task-based learning lessons, learners are responsible for selecting the appropriate language for any given context themselves. The instructors may also present a model of the task by performing it themselves or by presenting a picture, audio, or video demonstrating the task.
During the task phase, the students perform the task, typically in small groups, although this depends on the type of activity. Unless the teacher plays a particular role in the task, the teacher's role is typically limited to one of an observer or counselor—thereby making it a more student-centered methodology.
If learners have created tangible linguistic products, e.g. text, montage, presentation, audio or video recording, learners can review each other's work and offer constructive feedback. If a task is set to extend over longer periods of time, e.g. weeks, and includes iterative cycles of constructive activity followed by review, TBLL can be seen as analogous to Project-based learning.
Types of task
According to N. S. Prabhu, there are three main categories of task: information-gap, reasoning-gap, and opinion-gap.
An information-gap activity involves a transfer of given information from one person to another – or from one form to another, or from one place to another – generally calling for the decoding or encoding of information from or into language. One example is pair work in which each member of the pair has a part of the total information (for example an incomplete picture) and attempts to convey it verbally to the other. Another example is completing a tabular representation with information available in a given piece of text. The activity often involves selection of relevant information as well, and learners may have to meet criteria of completeness and correctness in making the transfer.
A reasoning-gap activity involves deriving some new information from given information through processes of inference, deduction, practical reasoning, or a perception of relationships or patterns. One example is working out a teacher's timetable on the basis of given class timetables. Another is deciding what course of action is best (for example cheapest or quickest) for a given purpose and within given constraints. The activity necessarily involves comprehending and conveying information, as in an information-gap activity, but the information to be conveyed is not identical with that initially comprehended. There is a piece of reasoning which connects the two.
An opinion-gap activity involves identifying and articulating a personal preference, feeling, or attitude in response to a given situation. One example is story completion; another is taking part in the discussion of a social issue. The activity may involve using factual information and formulating arguments to justify one's opinion, but there is no objective procedure for demonstrating outcomes as right or wrong, and no reason to expect the same outcome from different individuals or on different occasions.
According to Jon Larsson, in considering problem-based learning (PBL) for language learning, i.e., task-based language learning:
- ...one of the main virtues of PBL is that it displays a significant advantage over traditional methods in how the communicative skills of the students are improved. The general ability of social interaction is also positively affected. These are, most will agree, two central factors in language learning. By building a language course around assignments that require students to act, interact and communicate it is hopefully possible to mimic some of the aspects of learning a language “on site”, i.e. in a country where it is actually spoken. Seeing how learning a language in such an environment is generally much more effective than teaching the language exclusively as a foreign language, this is something that would hopefully be beneficial.
Larsson goes on to say:
- Another large advantage of PBL is that it encourages students to gain a deeper sense of understanding. Superficial learning is often a problem in language education, for example when students, instead of acquiring a sense of when and how to use which vocabulary, learn all the words they will need for the exam next week and then promptly forget them.
- In a PBL classroom this is combatted by always introducing the vocabulary in a real-world situation, rather than as words on a list, and by activating the student; students are not passive receivers of knowledge, but are instead required to actively acquire the knowledge. The feeling of being an integral part of their group also motivates students to learn in a way that the prospect of a final examination rarely manages to do.
Task-based learning benefits students because it is more student-centered, allows for more meaningful communication, and often provides for practical extra-linguistic skill building. As the tasks are likely to be familiar to the students (e.g.: visiting the doctor), students are more likely to be engaged, which may further motivate them in their language learning.
According to Jeremy Harmer, tasks promote language acquisition through the types of language and interaction they require. Harmer says that although the teacher may present language in the pre-task, the students are ultimately free to use what grammar constructs and vocabulary they want. This allows them, he says, to use all the language they know and are learning, rather than just the 'target language' of the lesson. On the other hand, according to Loschky and Bley-Vroman, tasks can also be designed to make certain target forms 'task-essential,' thus making it communicatively necessary for students to practice using them. In terms of interaction, information gap tasks in particular have been shown to promote negotiation of meaning and output modification.
According to Plews and Zhao, task-based language learning can suffer in practice from poorly informed implementation and adaptations that alter its fundamental nature. They say that lessons are frequently changed to be more like traditional teacher-led presentation-practice-production lessons than task-based lessons.
Professional conferences and organizations
As an outgrowth of the widespread interest in task-based teaching, the Biennial International Conference on Task-Based Language Teaching has occurred every other year since 2005. Past conferences have been held in Belgium, the United States, England, New Zealand, and Canada, with the 2017 conference scheduled to take place in Barcelona, Spain. These events promote theoretical and practical research on TBLT. In addition, the Japan Association for Language Teaching has a special interest group devoted to task-based learning, which has also hosted its own conference in Japan.
Related approaches to language teaching
- Problem-based learning is a student-centered pedagogy in which students learn about a subject in the context of complex, multifaceted, and realistic problems.
- Content-based instruction incorporates authentic materials and tasks to drive language instruction.
- Content and language integrated learning (CLIL) is an approach for learning content through an additional language (foreign or second), thus teaching both the subject and the language. The idea of its proponents was to create an "umbrella term" which encompasses different forms of using language as medium of instruction.
See also
- Communicative language teaching
- Content-based instruction
- Content and language integrated learning
- English as a second or foreign language
- Input hypothesis
- Problem-based learning
- Project-based learning
- Second-language acquisition
References
- Harmer 2001, p. 86.
- Leaver & Willis 2004, pp. 7–8.
- Ellis 2003.
- Frost & unknown.
- Larsson 2001.
- Prabhu 1987.
- Harmer 2001, pp. 79–80.
- Loschky & Bley-Vroman 1993.
- Doughty & Pica 1986.
- Pica, Kang & Sauro 2006.
- Plews & Zhao 2010.
- "Welcome to TBLT". Tblt.org. Retrieved 2016-05-07.
- "TBLT2007 About TBLT". Hawaii.edu. 2009-09-16. Retrieved 2016-05-07.
- "TBLT 2009: 3rd Biennial International Conference on Task-Based Language Teaching". Lancs.ac.uk. 2009-09-16. Retrieved 2016-05-07.
- "4th Biennial International Conference on Task-Based Language Teaching". Conferencealerts.com. Retrieved 2016-05-07.
- "TBLT 2013 - International Conference on Task-Based Language Teaching". Educ.ualberta.ca. 2013-10-05. Retrieved 2016-05-07.
- "Task-based Learning Special Interest Group". Tblsig.org. Retrieved 2016-05-07.
- "Content and language integrated learning". European Commission. Retrieved 26 January 2013.
- Doughty, Catherine; Pica, Teresa (1986). ""Information Gap" Tasks: Do They Facilitate Second Language Acquisition?". TESOL Quarterly. 20 (2): 305–325. doi:10.2307/3586546.
- Ellis, Rod (2003). Task-based Language Learning and Teaching. Oxford, New York: Oxford Applied Linguistics. ISBN 0-19-442159-7.
- Frost, Richard. "A Task-based Approach". British Council Teaching English. Retrieved September 21, 2015.
- Harmer, Jeremy (2001). The Practice of English Language Teaching (3rd ed.). Essex: Pearson Education.
- Larsson, Jon (2001). "Problem-Based Learning: A possible approach to language education?" (PDF). Polonia Institute, Jagiellonian University. Retrieved 27 January 2013.
- Leaver, Betty Lou; Willis, Jane Rosemary (2004). Task-Based Instruction In Foreign Language Education: Practices and Programs. Georgetown University Press. ISBN 978-1-58901-028-4.
- Loschky, L.; Bley-Vroman, R. (1993). "Grammar and Task-Based Methodology". In Crookes, G.; Gass, S. Tasks and Language Learning: Integrating Theory and Practice. Philadelphia: Multilingual Matters. ISBN 978-058524356-6.
- Pica, Teresa; Kang, Hyun-Sook; Sauro, Shannon (2006). "Information gap tasks: Their multiple roles and contributions to interaction research methodology". Studies in Second Language Acquisition. 28: 301–338. doi:10.1017/s027226310606013x.
- Plews, John L.; Zhao, Kangxian (2010). "Tinkering with tasks knows no bounds: ESL Teachers’ Adaptations of Task-Based Language-Teaching". TESL Canada Journal. Retrieved 26 January 2013.
- Prabhu, N. S. (1987). "Second Language Pedagogy". Oxford University Press. Retrieved 18 January 2013.
- Willis, Jane (1996). A Framework for Task-Based Learning. Longman.
Sitting with my grade 10s and a scrap piece of paper this week, I sketched out how keys and their key changes work. This was part of their "Elements of Music" unit, and our current lesson focused on harmony and, more specifically, on keys. The students understood very well and the drawing was very effective - so much so that after class, students were taking photos of my sketch. That's when I realised that this was a resource that needed to be shared!
Now here is the technical information:
This is an annotated infographic created to explain related major and minor keys. In the centre, you have a chord ladder built on C Major, labelled with Roman numerals showing chord tonalities. With it comes an image of its key signature. From there, four arrows point to related keys, including the subdominant, dominant, relative minor, and parallel minor. These examples also contain chord ladders and their respective key changes. Down the left side of the paper, there is an explanation of related keys, borrowed keys, modulations, pivot chords, etc.
This theory sheet is most appropriate for theory learners in upper years, including grades 10–12 or the Diploma Programme.
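For readers who want to experiment beyond the printed sheet, here is a small Python sketch (not part of the handout) that derives the same four related keys from any major tonic using pitch-class arithmetic. The sharp-only note spellings are a simplifying assumption, so the output is illustrative rather than notation-perfect.

    # Derive the four related keys shown on the sheet for a major tonic.
    # Pitch classes are numbered 0-11 with C = 0; sharp spellings only.
    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def related_keys(tonic):
        pc = NOTES.index(tonic)
        return {
            "subdominant":    NOTES[(pc + 5) % 12] + " major",  # IV: up a perfect fourth
            "dominant":       NOTES[(pc + 7) % 12] + " major",  # V: up a perfect fifth
            "relative minor": NOTES[(pc + 9) % 12] + " minor",  # vi: shares the key signature
            "parallel minor": tonic + " minor",                 # same tonic, minor mode
        }

    print(related_keys("C"))
    # {'subdominant': 'F major', 'dominant': 'G major',
    #  'relative minor': 'A minor', 'parallel minor': 'C minor'}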
Do we live in a holographic universe? How green is your coffee? And could drinking too much water actually kill you?
Before you click those links you might consider how your knowledge-hungry brain is preparing for the answers. A new study from the University of California, Davis, suggests that when our curiosity is piqued, changes in the brain ready us to learn not only about the subject at hand, but incidental information, too.
Neuroscientist Charan Ranganath and his fellow researchers asked 19 participants to review more than 100 questions, rating each in terms of how curious they were about the answer. Next, each subject revisited 112 of the questions - half of which strongly intrigued them whereas the rest they found uninteresting - while the researchers scanned their brain activity using functional magnetic resonance imaging (fMRI).
During the scanning session participants would view a question then wait 14 seconds and view a photograph of a face totally unrelated to the trivia before seeing the answer. Afterward the researchers tested participants to see how well they could recall and retain both the trivia answers and the faces they had seen.
Ranganath and his colleagues discovered that greater interest in a question would predict not only better memory for the answer but also for the unrelated face that had preceded it. A follow-up test one day later found the same results - people could better remember a face if it had been preceded by an intriguing question. Somehow curiosity could prepare the brain for learning and long-term memory more broadly.
The findings are somewhat reminiscent of the work of U.C. Irvine neuroscientist James McGaugh, who has found that emotional arousal can bolster certain memories. But, as the researchers reveal in the October 2 Neuron, curiosity involves very different pathways.
To understand what exactly had occurred in the brain the researchers turned to their imaging data. They discovered that brain activity during the waiting period before an answer appeared could predict later memory performance. Several changes occurred during this time.
First, brain activity ramped up in two regions in the midbrain, the ventral tegmental area and nucleus accumbens. These regions transmit the molecule dopamine, which helps regulate the sensation of pleasure and reward. This suggests that before the answer had appeared the brain's eager interest was already engaging the reward system. "This anticipation was really important," says Ranganath's co-author, U.C. Davis cognitive neuroscientist Matthias Gruber. The more curious a subject was, the more his or her brain engaged this anticipatory network.
In addition, the researchers found that curious minds showed increased activity in the hippocampus, which is involved in the creation of memories. In fact, the degree to which the hippocampus and reward pathways interacted could predict an individual's ability to remember the incidentally introduced faces. The brain's reward system seemed to prepare the hippocampus for learning.
The implications are manifold. For one, Ranganath suspects the findings could help explain memory and learning deficits in people with conditions that involve low dopamine, such as Parkinson's disease.
Piquing curiosity could also help educators, advertisers and storytellers find ways to help students or audiences better retain messages. "This research advances our understanding of the brain structures that are involved in learning processes," says Goldsmiths, University of London psychologist Sophie von Stumm, unconnected to the study. She hopes other researchers will replicate the work with variations that can clarify the kinds of information curious people can retain and whether results differ for subjects who have broad 'trait' curiosity as opposed to a temporarily induced specific interest.
Ranganath's findings also hint at the nature of curiosity itself. Neuroscientist Marieke Jepma at the University of Colorado Boulder, who also did not participate in this study, has previously found that curiosity can be an unpleasant experience, and the brain's reward circuitry might not kick in until there is resolution. She suspects, however, that her findings and Ranganath's results are two sides of the same coin. To explain this, she refers to the experience of reading a detective novel. "Being uncertain about the identity of the murderer may be a pleasant reward-anticipating feeling when you know this will be revealed," she says. "But this will turn into frustration if the last chapter is missing."
Ranganath agrees that the hunger for knowledge is not always an agreeable experience. "It's like an itch that you have to scratch," he says. "It's not really pleasant."
Source: Scientific American
Embers of War
'Embers of War' is an essential read on the tragedy of the Vietnam War.
Question: who started the Vietnam War? Answer: the French.
Americans can be forgiven for remembering the war as a contest between the US and the Vietnamese, but in his new book, Embers of War, Fredrik Logevall vividly shows how incomplete such a recollection would be.
Logevall is an historian at Cornell University whose resume boasts several books on the Vietnam War. Here he shows how the French colonization of Southeast Asia that began in the late 19th century deeply influenced the Americans’ failures in that region.
"Embers of War" begins its story with future Vietnamese leader Ho Chi Minh attending the post-World War I peace negotiations. Ho, as he was universally known, believed American president Woodrow Wilson was sincere in his declaration that all nations had the right to self-determination. It would be the first of Ho’s many disappointments as American leaders failed to live up to their words.
Indeed, one of the many ironies Logevall highlights is that Ho was often more faithful to American ideals and pronouncements about national freedom than were US presidents, from Harry Truman to Richard Nixon. Having spent time in the United States in his 20s, Ho was a lifelong admirer of American principles and actions, from the country's 1770s revolt against the British to the distance that it kept from European imperialism in Asia.
But Ho was mystified by the way that those values were simply disregarded in Vietnam. Though President Franklin Roosevelt opposed European imperialism, the Cold War soon led US leaders to support French control over Vietnam. Initially, America reluctantly acceded to France’s desires in the region, hesitant to disrupt France’s precarious postwar stability by encouraging the country to dissolve its overseas empire. By the mid-1950s, the positions had switched: France wanted to depart its costly occupation of Vietnam, while the US was terrified that a victory for Ho would lead to communist control of all of Asia.
This argument was called the "domino theory," and every Cold War president believed in it. They were all wrong.
As Logevall explains, the domino theory “posited that the countries of East and Southeast Asia had no individuality, no history of their own, no unique circumstances in social, political, and economic life that differentiated them from their neighbors.” More recently, a 2007 study of over 130 countries in the 20th century found that states are only rarely influenced by changes in their neighbors’ internal structures.
"Embers of War" details the tragic history of America’s assumption of the French burden in Vietnam. Not only did the US gradually become enmeshed in Southeast Asia, it did so with willful disregard for what the French experience could have taught them. Said General William Westmoreland, who led the military from 1964 to 1972: “Why should I study the lessons of the French? They haven’t won a war since Napoleon.”
Had American leaders studied the French failures in the region, they might have realized that they were repeating the mistakes of their ally, including the biggest mistake of all: failure to understand that the Vietnamese were willing to do anything to achieve independence from foreign rule. The country lost approximately 3 million soldiers and civilians fighting the Americans, who lost 58,000 men. Any people willing to endure such horrible, lopsided losses could simply outlast its adversary. The Vietnamese succeeded.
"Embers of War" is simply an essential work for those seeking to understand the worst foreign-policy adventure in American history. Logevall has tapped new resources, including extensive archives in France and what is available in Vietnam. He has a complete grasp of the vast literature on what the Vietnamese call The American War, and even though readers known how the story ends – as with "The Iliad" – they will be as riveted by the tale as if they were hearing it for the first time.
The only misstep in this fantastic book is Logevall’s description of the writer Graham Greene’s time in Southeast Asia. Greene’s "The Quiet American" is the best English-language novel on the Vietnam War (Vietnamese writer Bao Ninh’s "Sorrow of War" wins the prize for best novel in any language), but it deserves only a line or two in the history of France and America in Vietnam. An entire chapter in "Embers of War" is devoted to Greene, unnecessarily disrupting the book’s focus on international diplomatic history.
Such a mistake aside, Logevall makes good on his attempt to write the “full-fledged international account of how the whole saga began, a book that takes us from the end of World War I, when the future of European colonial empires still seemed secure, through World War II and then the Franco-Viet Minh War and its dramatic climax, to the fateful American decision to build up and defend South Vietnam.”
Jordan Michael Smith is a contributing writer at Salon and the Christian Science Monitor, and a contributing editor at The American Conservative.
The Parliament of Canada established December 6 as the National Day of Remembrance and Action on Violence Against Women in Canada in 1991. It marks the anniversary of the day in 1989 when 14 young women at l’École Polytechnique de Montréal were murdered. This violent, gender-based act shocked the nation.
- This day commemorates the lives of the 14 young women
- This day gives Canadians the opportunity to reflect on the phenomenon of violence against women in our society
- This day is a day to remember those who have died as a result of gender-based violence
- This day is a day to take action to eliminate all forms of violence against women and girls
When a child is born with an eye condition it is known as a congenital eye defect. The condition can affect the development of the eyes and impair vision. Depending on the specific defect it may or may not be hereditary. In some cases it can occur as a result of infection, medications or because of an illness or disease suffered by the birth mother while pregnant. Leber's congenital amaurosis, congenital cataracts and primary congenital glaucoma are all common congenital defects.
Leber's congenital amaurosis is an inherited congenital eye defect in which a child is born without vision or with extremely poor eyesight. There are certain symptoms that can accompany this defect, such as rapid eye movement, crossed eyes or eyes that appear cloudy. In some cases the child may also have some form of mental retardation.
Congenital cataracts are another type of eye defect that occurs at birth. A cataract is defined as a clouding of the natural lens that is located in the eye. It may develop for a number of reasons, such as the mother having an infection while pregnant or as a result of her taking certain types of medications. Congenital cataracts may also be the result of other conditions, such as Down's Syndrome. In many cases, it is difficult to pinpoint the actual cause of the condition.
Primary congenital glaucoma is a defect in which the eye's drainage system malfunctions or does not properly form. This often causes increased pressure in the eye, because the fluid that would normally use the drainage system cannot drain and builds up as a result. When this occurs for a long period of time, it may damage the nerve that carries information from the eye to the brain. If this nerve, called the optic nerve, is damaged, blindness or some loss of eyesight may result. Some of the common signs that a baby has this condition are light sensitivity, hazy corneas, tearing, and enlarged eyes.
If a congenital eye defect causes a loss of eyesight, this loss may occur immediately or after months or even years. It is important to discover congenital eye defects as quickly as possible. While not all defects can be improved, early detection and treatment may save the child's vision.
Not every congenital eye defect leads to blindness. Heterochromia is a congenital defect in which one eye is a different color than the other. While it may occur as a result of other conditions, it is frequently non-problematic and the eyes often function properly. Congenital ptosis, or drooping eyelids, is also a condition that does not always have a negative effect on eyesight, although it may indicate other problems with the eyes or health.
Rights of Nature is the recognition and honoring of the fact that Nature has rights. It is the recognition that our ecosystems – including trees, oceans, animals, mountains – have rights just as human beings have rights. Rights of Nature is about balancing what is good for human beings against what is good for other species and for the planet as a whole. It is the holistic recognition that all life and all ecosystems on our planet are deeply intertwined.
Rather than treating nature as property under the law, rights of nature acknowledges that nature in all its life forms has the right to exist, persist, maintain and regenerate its vital cycles.
And we – the people – have the legal authority and responsibility to enforce these rights on behalf of ecosystems. The ecosystem itself can be named as the defendant.
For indigenous cultures around the world, recognizing rights of nature is simply a statement of what is so, consistent with their traditions of living in harmony with nature. All life, including human life, is deeply connected. Decisions and values are based on what is good for the whole.
Nonetheless, for millennia legal systems around the world have treated land and nature as “property”. Laws and contracts are written to protect the property rights of individuals, corporations, and other legal entities. As such, environmental protection laws actually legalize environmental harm by regulating how much pollution or destruction of nature can occur within the law. Under such law, nature and all of its non-human elements have no standing.
By recognizing rights of nature in its constitution, Ecuador – and a growing number of communities in the United States – are basing their environmental protection systems on the premise that nature has inalienable rights, just as humans do. This premise is a radical but natural departure from the assumption that nature is property under the law.
Exploring Ratios and Proportions Study Guide
Introduction to Exploring Ratios and Proportions
Mathematics is a language.
—JOSIAH WILLARD GIBBS, theoretical physicist (1839–1903)
This lesson begins by exploring ratios, using familiar examples to explain the mathematics behind the ratio concept. It concludes with the related notion of proportions, again illustrating the math with everyday examples.
A ratio is a comparison of two numbers. For example, let's say that there are 3 men for every 5 women in a particular club. That means that the ratio of men to women is 3 to 5. It doesn't necessarily mean that there are exactly 3 men and 5 women in the club, but it does mean that for every group of 3 men, there is a corresponding group of 5 women. The table below shows some of the possible sizes of this club.
|Number of groups|Men|Women|
|1|3|5|
|2|6|10|
|3|9|15|
|4|12|20|
In other words, the number of men is 3 times the number of groups, and the number of women is 5 times that same number of groups.
A ratio can be expressed in several ways:
- using "to" (3 to 5)
- using "out of" (3 out of 5)
- using a colon (3:5)
- as a fraction (3/5)
- as a decimal (0.6)
Like a fraction, a ratio should always be reduced to lowest terms. For example, the ratio of 6 to 10 should be reduced to 3 to 5 (because the fraction 6/10 reduces to 3/5).
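For readers who like to verify such reductions programmatically, here is a minimal Python sketch using the greatest common divisor; the helper name is purely illustrative.

    # Reduce a ratio to lowest terms, mirroring how 6:10 reduces to 3:5.
    from math import gcd

    def reduce_ratio(a, b):
        g = gcd(a, b)
        return a // g, b // g

    print(reduce_ratio(6, 10))   # (3, 5)
    print(reduce_ratio(13, 52))  # (1, 4) -- the snowy-weekends example below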
Here are some examples of ratios in familiar contexts:
- Last year, it snowed 13 out of 52 weekends in New York City. The ratio 13 out of 52 can be reduced to lowest terms (1 out of 4) and expressed as 1 to 4, 1:4, the fraction 1/4, or the decimal 0.25. Reducing to lowest terms tells you that it snowed 1 out of 4 weekends (1/4, or 25%, of the weekends).
- Lloyd drove 140 miles on 3.5 gallons of gas, for a ratio (or gas consumption rate) of 140 miles ÷ 3.5 gallons = 40 miles per gallon.
- The student-teacher ratio at Clarksdale High School is 7 to 1. That means for every 7 students in the school, there is 1 teacher. For example, if Clarksdale has 140 students, then it has 20 teachers. (There are 20 groups, each with 7 students and 1 teacher.)
- Pearl's Pub has 5 chairs for every table. If it has 100 chairs, then it has 20 tables.
- The Pirates won 27 games and lost 18, for a ratio of 3 wins to 2 losses. Their win rate was 60%, because they won 27 of the 45 games they played.
In word problems, the word per translates to division. For example, 30 miles per hour is equivalent to 30 miles/1 hour. Phrases with the word per are ratios with a bottom number of 1, like these:
- 24 miles per gallon
- $12 per hour
- 3 meals per day
- 4 cups per quart
Ratios can also be used to relate more than two items, but then they are not written as fractions. Example: If the ratio of infants to teens to adults at a school event is 2 to 7 to 5, it is written as 2:7:5.
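As a quick worked example of a multi-part ratio, the hypothetical sketch below scales 2:7:5 up to a known total of 140 attendees. It assumes the total divides evenly by the sum of the parts (2 + 7 + 5 = 14).

    # Scale a multi-part ratio such as 2:7:5 up to a known total.
    def scale_ratio(parts, total):
        unit = total // sum(parts)          # size of one "group"
        return [p * unit for p in parts]

    print(scale_ratio([2, 7, 5], 140))  # [20, 70, 50] infants, teens, adults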
Charles Darwin's evolutionary basis for morality
Charles Darwin (1809-1882) was the first to consider an evolutionary basis for moral development. When Darwin wrote The Origin of Species (1859), he withheld his thoughts on the origins of human morality.
By 1874, his book The Descent of Man had proposed that the study of moral systems take place as a branch of natural history, which meant that ethics should be studied within the framework of evolutionary theory. While he theorized that man evolved from a more primitive species, he sought to understand how and why man developed a moral sense. The theory of human evolution could only be plausible if it could explain how morality, a distinctively human trait, originated.
When studying ethics from the perspective of behavior, Darwin recognized the problem posed by natural selection and the human ethic of altruism: how can selfless behavior exist amidst the selfishness inherent in, and promoted by, natural selection?
Had Darwin studied ethics only from the perspective of behaviors and actions, this problem would have been highlighted. To avoid this conundrum, he approached ethics from two other perspectives: a) conscience and b) as a system established by a group.
A) In examining ethics from the perspective of conscience, ethics was not about what one did but why. His focus was thus on the psychology of behavior - the reasons and motives for one’s actions. Ethical behavior arose from ethical intentions, or “moral sense,” as he would say.
Darwin felt that human morality was based on the integration of three factors: innate social instincts, intelligence, and conscience. He believed that social instincts, such as sympathy, kindness, and sociability, were rooted in human nature and limited aggression. He saw intelligence as involved with the evaluation of actions and their consequences. And he believed that the conscience was responsible for the motives for behavior.
B) In examining ethics from the perspective of a system established by a group, Darwin sought to answer the selfishness of natural selection by focusing on the ethics of the "greater good" for the survival of a group. Darwin felt that social groups had an implicit social contract, which governed behavior and promoted mutual interaction. Moral standards were determined by society and reinforced with a social system of rewards and punishments. From this perspective, Darwin proposed the idea of natural selection for group evolution, with the belief that the selected traits of individuals and small sub-groups would confer an advantage on the larger group over other groups.
According to Darwin, the answers to the two essential questions of ethics were: 1) the standard for judging good from evil was whether the greatest number of people gained pleasure, or suffered the least pain, from one’s action; and 2) human beings were innately altruistic and moral, because these traits were an advantage in the natural selection of groups.
Tularemia is a bacterial infection common in wild rodents. The bacteria are passed to humans through contact with tissue from an infected animal. The bacteria can also be passed by ticks, biting flies, and mosquitoes.
Deerfly fever; Rabbit fever; Pahvant Valley plague; Ohara disease; Yato-byo (Japan); Lemming fever
Tularemia is caused by the bacterium Francisella tularensis.
Humans can get the disease through:
- A bite from an infected tick, horsefly, or mosquito
- Breathing in infected dirt or plant material
- Direct contact, through a break in the skin, with an infected animal or its dead body (most often a rabbit, muskrat, beaver, or squirrel)
- Eating infected meat (rare)
The disorder most commonly occurs in North America and parts of Europe and Asia. In the US, this disease is found more often in Missouri, South Dakota, Oklahoma, and Arkansas. Although outbreaks can occur in the United States, they are rare.
Some people may develop pneumonia after breathing in infected dirt or plant material. This infection has been known to occur on Martha's Vineyard, where bacteria are present in rabbits, raccoons, and skunks.
Symptoms develop 3 to 5 days after exposure. The illness usually starts suddenly and may continue for several weeks after symptoms begin. Symptoms include:
- Eye irritation (conjunctivitis, if the infection began in the eye)
- Joint stiffness
- Muscle pains
- Red spot on the skin, growing to become a sore (ulcer)
- Shortness of breath
- Weight loss
Exams and Tests
Tests for the condition include:
- Blood culture for the bacteria
- Blood test measuring the body's immune response to the infection (serology for tularemia)
- Chest x-ray
- Polymerase chain reaction (PCR) test of a sample from an ulcer
The goal of treatment is to cure the infection with antibiotics.
The antibiotics streptomycin and tetracycline are commonly used to treat this infection. Another antibiotic, gentamicin, has been tried as an alternative to streptomycin. Gentamicin seems to be very effective, but it has been studied in only a small number of people because this is a rare disease. The antibiotics tetracycline and chloramphenicol can be used alone, but are not usually a first choice.
Tularemia is fatal in about 5% of untreated cases, and in less than 1% of treated cases.
Tularemia may lead to these complications:
- Bone infection (osteomyelitis)
- Infection of the sac around the heart (pericarditis)
When to Contact a Medical Professional
Call your health care provider if symptoms develop after a rodent bite, tick bite, or exposure to the flesh of a wild animal.
Preventive measures include wearing gloves when skinning or dressing wild animals, and staying away from sick or dead animals.
Penn RL. Francisella tularensis (Tularemia). In: Bennett JE, Dolin R, Blaser MJ, eds. Mandell, Douglas, and Bennett's Principles and Practice of Infectious Disease. 8th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 229.
Schaffner W. Tularemia and other Francisella infections. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2016:chap 311.
Objective reasoning means reasoning according to a set of logical and objective standards, while subjective thinking refers to reasoning without such standards. Objective reasoning is independent of the specific subjective context; it is not influenced by the personal characteristics, feelings, or opinions of the subject. An idea can be said to be objective when it is not conditioned by the subject stating it - when it expresses a reality without subjectively modifying it. In reasoning, one has to define what is of value and relevance, and in so doing assign weight to the different factors involved. This assignment of importance then defines much of the context for the subsequent process of reasoning and its outcomes. When it is done subjectively, an overemphasis is placed on some factors while others are diminished, depending on the character of the subject doing the reasoning. Objective thinking, by contrast, implies an impartial, balanced inquiry that applies the relevant weight to the different factors involved. This objective view of the world does not come naturally to human beings; quite the contrary, the biological, evolutionary context of our condition gives us a very subjective perspective, one which leads to an imbalanced view of the world.
Subjective thinking, in the form of egocentric thinking for example, comes naturally to humans: we do not have to train people to believe what they want to believe, for what one wants to believe is what one will naturally believe. On the contrary, people need to be trained, or to train themselves, if they want to believe something other than this. Whereas the subjective thinking of altruism and egoism both lead to an imbalanced valorization and emphasis - either on the individual or on others - objective thinking strives to overcome this in order to achieve a balanced judgment.
Objective thinking represents the capacity to experience phenomena as in some way independent from our subjective condition; to be able to regard other entities as existing in and for themselves, independent of our own will. As this is not an innate feature of human cognition, it requires developing a framework based on some objective logic that can define the value of things independently of their significance in relation to one’s own agendas and desires. It is only in being able to do this that one can ascribe the appropriate significance to things and thus make the balanced inquiry that is a central part of critical thinking.
Objective thinking is only really achieved by creating standards, because the results of one’s thinking cannot be any better than the quality of the process by which conclusions are reached. With the use of standards one can develop objective thinking: the placing of an objective value on phenomena. Unlike altruism, which is inclined to place an overvalue on other people and things, or egoism, which is inclined to place an overemphasis on oneself, objective thinking involves making an assessment to derive the balanced value and significance of both.
Differentiation of the individual is an important part of achieving balanced, objective reasoning. Differentiation during the individual’s development means the separation of different spheres - in particular, the separation of self from others and other things - so as not to be as psychologically and emotionally attached to them, an attachment which results in one placing an overvaluation on them.
This distinction between subjective and objective thinking can be understood as a relationship between a system and its environment. When something is subjective it is a point of view; it is not based upon some logic within the environment. When thinking is objective it is true regardless of points of view, as it is based upon the logical set of relationships within the environment. If one’s ultimate aim is not to adapt and conform one’s reasoning to some larger logic within the environment, then the reasoning will invariably be flawed.
Objective thinking is about following standards that enable us to develop knowledge reflecting the logic of our environment. It does not matter if one makes a mistake during reasoning; what is important is that we have frameworks in place that make reasoning responsive to some broader environmental context, so that we can identify when our reasoning is misaligned and our conceptual system can adapt and change in response. The logic of one’s thinking must conform to the logic of the environment for it to be successful, and the objective standards of reasoning create the framework for this conformity.
A central question in the distinction between objective and subjective thinking is then whether the individual adapts reality to their thinking, or accommodates and adapts their thinking to reality. Critical thinking and science aim to create standards so that we accommodate our thinking to the logic of the world around us, not the other way round, because this is the only effective solution in the long term. Trying to shape reality to fit our misconstrued conception of it will only last for a certain period of time, as the conceptual system will at some stage be forced to deal with the reality of its environment.
This can be understood with reference to systems theory. All systems are dependent on their environment in some way. When the individual misconstrues the environment, they degrade its state, and that degradation over time leads to a reduction in the inputs the system requires, which ultimately means that it will not have the resources needed to sustain itself; it will eventually disintegrate as it is faced with the degraded state of its environment. In order to avoid this, it is necessary to focus on maintaining standards that ensure we conform our beliefs and conceptual system to the logic within our broader environment. In so doing we show respect for something greater than ourselves and we also sustain that environment, making possible long-term cumulative development.
Subjective thinking leads the individual to conclusions that are in some way desirable to that subject. The world, though, does not always turn out to be how people would like it to be; in fact, it has no regard whatsoever for how people would like it to be, and thus often turns out to be something different. To believe something whose logic is undesirable requires discipline and living by the standards of reasoning.
All thinking is by its nature subjective, but by adhering to standards we try to achieve greater objectivity in our thinking. Our minds tend to take the path of least resistance unless we make a specific, high-energy effort to step out of these processes and think in a clearer and more logical manner. The cognitive biases of subjective thinking lead us into invalid or fallacious thinking rather than into formally logical ways of thinking. These biases are numerous and pervasive, and they can have a very powerful influence on how we think; overcoming them requires a concerted effort in the form of applying cognitive standards to our thinking.
If one wants to be the best high jumper one has to be disciplined in the use and training of one's body in order to attain a certain level of achievement. The higher one wants to jump, the more disciplined one will have to be in the training of the body. The same standard applies to thinking, but this time it is not about jumping higher; it is about being able to use higher levels of abstraction effectively. One can only do this by structuring the more basic, elementary concepts; once one has these basic building blocks it is then possible to move up to building more complex and abstract patterns out of them. However, one cannot move on to higher-level reasoning until the basic concepts have been formed; if one tries to do so, the reasoning will be flawed, just like a house built out of weak building blocks that will fall apart if we build too high.
In order to make the basic building blocks solid one has to have discipline in their construction so that they are well structured and balanced. What this means in more practical terms is that one has to define things properly. Until a concept has been properly defined it is not possible to use it properly in the building of higher-level, more abstract structures. Thus we need to employ discipline in our reasoning in order to define things properly. The better we can define things, the more solid our reasoning will be, and we will be able to build larger, more abstract conceptual systems, which in turn means that we can reason effectively about more complex systems. When we properly define things we try to get at the most elementary features that make them distinct from other entities. Every word is distinct in some way or else it would not exist; it may be very similar to others but never exactly the same in all contexts. If one does not use objective standards in reasoning and define things properly, then it will not be possible to see what is the same and what is different, and we will form false categories that do not correspond to those within the environment, creating tension and conflict.
For our Stone Age entry point in Year 3, we used our outdoor learning space to create a fire! We learnt that in the Stone Age humans did not have an easy way to make fire like we have today. In the Stone Age they used fire for cooking and warmth. Fire could be created using wood and stone tools, or by rubbing two rocks together. Fire in the Stone Age was very important; indeed, it could be said that the discovery of fire was one of the most important technological discoveries in human evolution!
Plotting Tips: Vertical Lines and Axis Labels
Plotting the Graph of a Vertical Line
Controlling the Labeling of an Axis
Graphing Efficiently: Return a Command You Can Reuse
Certain graphing techniques are essential in the classroom but are not covered by basic plotting examples in Maple. Here, you will learn three interactive plotting strategies:
plot vertical lines
control axis labels (for instance, labels with multiples of π)
create a graph interactively and then efficiently reuse that graph
These are not complicated techniques, and once you have discovered them you will be able to employ them to make better graphs for classroom presentations and handouts as well as save yourself time and effort.
Commands, context-sensitive operations, and the Plot Builder are discussed. The Plot Builder is Maple's most robust interactive plotting tool. It provides access to a variety of plot types and to the plot options that enable you to customize your graph. As you change the options in the Plot Builder, preview your graph. When you are satisfied, return a plotting command that can be reused and modified. With the Plot Builder, your plots can be easier to read and convey more useful information.
In the xy-plane, the graph of the solution set of the equation x = 1 is a vertical line. This line cannot be obtained as the graph of a function, so Maple's plot command will not graph the vertical line without user intervention. Figure 1 shows a portion of this line drawn with the plot command as a line segment. Figure 2 draws this same line segment as the parametric curve x(t) = 1, y(t) = t.
Figure 1 Graph of the vertical line segment by specifying its endpoints
Figure 2 Parametric plot of the vertical line segment
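If you prefer to work from commands, both approaches can be written in one line each. In the sketch below, the viewing window -5 .. 5 is an assumed range chosen for illustration, not one dictated by the figures:

plot([[1, -5], [1, 5]])      # Figure 1 approach: the segment joining the endpoints (1, -5) and (1, 5)

plot([1, t, t = -5 .. 5])    # Figure 2 approach: the parametric curve x(t) = 1, y(t) = t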
Clearly, graphing a vertical line segment via the plot command requires significant user intervention and knowledge of Maple syntax. Figure 3 provides a graph of the vertical line segment drawn as an implicit plot with the implicitplot command from the plots package.
Figure 3 Graph of the vertical line x=1 drawn as an implicit plot
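A minimal command for this approach, again with an assumed viewing window, might look like:

plots:-implicitplot(x = 1, x = 0 .. 2, y = -5 .. 5)    # the vertical line x = 1 as an implicit plot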
For more information on these methods, see plot details and parametric plot
Figures 1 - 3 all require some mathematical insight on the part of the user, who must correspondingly implement some appropriate Maple construct that reflects that view of the vertical line. Now what happens when the user naively tries to plot x=1 interactively via the Context Panel? Select the equation x=1 and from the context-sensitive operations in the Context Panel, select Plots>2-D Implicit Plot>x,?.
Figure 4 Use of a context-sensitive operation to plot the vertical line x=1
The result is shown in Figure 4. You can obtain the graph through the Context Panel as well as by using a command. A final way to graph a vertical line is using the Plot Builder.
First, launch the Plot Builder from the Tools>Assistants menu. In the Expressions section, click the Add button, and enter x=1.
Pressing the Accept button enters the equation x=1 into the Expression box and adds the variable x to the Variables box. See Figure 5.
Figure 5 Specify the expressions and variables
Press OK. The Plot Builder panel appears. For plot type, select 2-D implicit plot. Again, you get the graph of the vertical line. You can use the Plot Builder to further customize the plot, if desired.
Figure 6 The Plot Builder graphs the line as a 2-D implicit plot
Using the Context Panel
Consider the graph of sin(x). On the horizontal axis, the default range (x = −2π .. 2π for a trigonometric graph), default tickmarks, and default labels are shown.
Figure 7 The default plot of sin(x)
If you want to change the tickmarks and labels on the x-axis to whole-number multiples of π, you can do so after the graph has been drawn.
Click on the graph to select it, then in the Context Panel choose Axes>Properties.... (You can also access this from the Plot menu.) The Axis Properties box is shown in Figure 8.
Under the Horizontal Axis tab, clear the box for Let renderer choose tickmarks, and instead select Custom Spacing (1.0) and Multiply by Pi. The modified graph is shown in Figure 9.
Figure 8 The Axis Properties
Figure 9 Modified graph with π in the labels
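You can also request this labeling directly in a plotting command. The sketch below relies on the spacing form of the tickmarks option (see the plot/tickmarks help page); the exact option names should be checked against your Maple version:

plot(sin(x), x = -2*Pi .. 2*Pi, tickmarks = [spacing(Pi), default])    # x-axis ticks at whole-number multiples of Pi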
Although creating one plot through the Plot Builder is convenient, if you are creating many plots, this can become a tedious way to make the modifications you want. One tip for more efficient graphing is to use the Plot Builder once, and then extract a command with all the settings you selected.
In the Plot Builder, you can opt to show the command used to create the plot. This provides a way to learn how to specify these options directly to the plot command. If you intend to graph numerous similar functions with the same plot settings, this is an efficient way to do it.
For example, here we plot an ellipse using the Plot Builder, and return the command. We then modify the command to plot two additional graphs.
Use the Context Panel to launch the Plot Builder for your expression. In this example, we will graph x^2/49 + y^2/9 = 1.
In the Plot Builder panel, select 2-D implicit plot for the plot type. Select show command to see the plotting command. To force the axes to use the same scale, under 2-D Options, for scaling select constrained.
The result is shown in Figure 11.
Figure 11 The plot and command returned from Plot Builder
Now, you can copy and paste this command on a new line (ensure you are in 2-D math mode when you paste it) to create a graph. You can modify the command to create variations. For instance, you can graph x^2/9 + y^2/9 = 1 or x^2/49 − y^2/9 = 1 without having to go through the Plot Builder steps again.
1. Graph of x^2/9 + y^2/9 = 1, a circle.
plots:-implicitplot(x^2/9 + y^2/9 = 1, x = -10.0 .. 10.0, y = -10.0 .. 10.0, scaling = constrained)
2. Graph of x^2/49 − y^2/9 = 1, a hyperbola.
plots:-implicitplot(x^2/49 - y^2/9 = 1, x = -10.0 .. 10.0, y = -10.0 .. 10.0, scaling = constrained)
See also: plot, plot/options, plot/parametric, plot/tickmarks, plots[implicitplot], plottools[line], trig
What Is Heavy Industry?
Heavy industry relates to a type of business that typically carries a high capital cost (capital-intensive), high barriers to entry, and low transportability. The term "heavy" refers to the fact that the items produced by heavy industry used to be products such as iron, coal, oil, and ships. Today, the term also refers to industries that disrupt the environment in the form of pollution, deforestation, etc.
- Heavy industry is a type of business that involves large-scale undertakings, big equipment, large areas of land, high cost, and high barriers to entry.
- It contrasts with light industry, or production that is small-scale, can be completed in factories or small facilities, costs less, and has lower barriers to entry.
- Heavy industry tends to be cyclical, benefiting from the start of an economic upturn as investments are made into more expensive, longer-term projects, such as buildings, aerospace, and defense products.
- Heavy industry tends to sell what it produces to other industrial customers, versus the end customer, making it a part of the supply chain of other products.
Understanding Heavy Industry
Heavy industry typically involves large and heavy products or large and heavy equipment and facilities (such as heavy equipment, large machine tools, and huge buildings); or complex or numerous processes. Because of those factors, heavy industry involves higher capital intensity than light industry does. Heavy industry is also often more heavily cyclical in investment and employment.
Products that result from heavy industry tend to be large in size and low in terms of transportability.
How Heavy Industry Works
Transportation and construction, along with their upstream manufacturing supply businesses, comprised most heavy industry throughout the industrial age, along with some capital-intensive manufacturing. Traditional examples from the Industrial Revolution through the early 20th century included steelmaking, artillery production, locomotive erection, machine tool building, and the heavier types of mining.
When the chemical industry and electrical industry developed, they involved elements of both heavy industry and light industry, which was soon also true for the automotive industry and the aircraft industry. Shipbuilding became a heavy industry as steel replaced wood in modern shipbuilding. Large systems are often characteristic of heavy industry, such as the construction of skyscrapers and large dams during the post-World War II era, and the manufacture/deployment of large rockets and giant wind turbines through the 21st century.
Another trait of heavy industry is that it most often sells its goods to other industrial customers, rather than to the end consumer. Heavy industries tend to be a part of the supply chain of other products. As a result, their stocks will often rally at the beginning of an economic upturn and are often the first to benefit from an increase in demand.
Heavy Industry in Asia
The economies of many East Asian countries are based on heavy industry. Among such Japanese and Korean firms, many are manufacturers of aerospace products and defense contractors. Examples include Japan's Fuji Heavy Industries and Korea's Hyundai Rotem, a joint project of Hyundai Heavy Industries and Daewoo Heavy Industries.
In the 20th century, Asian communist states often focused on heavy industry as an area for large investments in their planned economies. This decision was motivated by fears of failing to maintain military parity with foreign powers. For example, the Soviet Union's manic industrialization in the 1930s, with heavy industry as the favored emphasis, sought to bring its ability to produce trucks, tanks, artillery, aircraft, and warships up to a level that would make the country a great power.
Special Relativity/Principle of Relativity
The principle of relativity
Principles of relativity address the relationship between observations made at different places. This problem has been a difficult theoretical challenge since the earliest times and involves physical questions such as how the velocities of objects can be combined and how influences are transmitted between moving objects.
One of the most fruitful approaches to this problem was the investigation of how observations are affected by the velocity of the observer. This problem had been tackled by classical philosophers but it was the work of Galileo that produced a real breakthrough. Galileo (1632), in his Dialogues Concerning the Two Chief World Systems, considered observations of motion made by people inside a ship who could not see the outside:
- "have the ship proceed with any speed you like, so long as the motion is uniform and not fluctuating this way and that. You will discover not the least change in all the effects named, nor could you tell from any of them whether the ship was moving or standing still. "
According to Galileo, if the ship moved smoothly then someone inside it would be unable to determine whether they were moving. If people in Galileo's moving ship were eating dinner they would see their peas fall from their fork straight down to their plate in the same way as they might if they were at home on dry land. The peas move along with the people and do not appear to the diners to fall diagonally. This means that the peas continue in a state of uniform motion unless someone intercepts them or otherwise acts on them. It also means that simple experiments that the people on the ship might perform would give the same results on the ship or at home. This concept led to “Galilean Relativity” in which it was held that things continue in a state of motion unless acted upon and that the laws of physics are independent of the velocity of the laboratory.
This simple idea challenged the previous ideas of Aristotle. Aristotle had argued in his Physics that objects must either be moved or be at rest. According to Aristotle, on the basis of complex and interesting arguments about the possibility of a 'void', objects cannot remain in a state of motion without something moving them. As a result Aristotle proposed that objects would stop entirely in empty space. If Aristotle were right the peas that you dropped whilst dining aboard a moving ship would fall in your lap rather than falling straight down on to your plate. Aristotle's idea had been believed by everyone so Galileo's new proposal was extraordinary and, because it was nearly right, became the foundation of physics.
Galilean Relativity contains two important principles: firstly it is impossible to determine who is actually at rest and secondly things continue in uniform motion unless acted upon. The second principle is known as Galileo’s Law of Inertia or Newton's First Law of Motion.
- Aristotle (350BC). Physics. http://classics.mit.edu/Aristotle/physics.html
Until the nineteenth century it appeared that Galilean relativity treated all observers as equivalent no matter how fast they were moving. If you throw a ball straight up in the air at the North Pole it falls straight back down again and this also happens at the equator even though the equator is moving at almost a thousand miles an hour faster than the pole. Galilean velocities are additive so that the ball continues moving at a thousand miles an hour when it is thrown upwards at the equator and continues with this motion until it is acted on by an external agency.
This simple scheme came into question in 1865, when James Clerk Maxwell discovered the equations that describe the propagation of electromagnetic waves such as light. His equations showed that the speed of light depended upon constants that were thought to be simple properties of a physical medium or “aether” that pervaded all space. If this were the case then, according to Galilean relativity, it should be possible to add your own velocity to the velocity of incoming light so that if you were travelling at a half the speed of light then any light approaching you would be observed to be travelling at 1.5 times the speed of light in the aether. Similarly, any light approaching you from behind would strike you at 0.5 times the speed of light in the aether. Light itself would always go at the same speed in the aether so if you shone a light from a torch whilst travelling at high speed the light would plop into the aether and slow right down to its normal speed. This would spoil Galileo's Relativity because all you would need to do to discover whether you were in a moving ship or on dry land would be to measure the speed of light in different directions. The light would go slower in your direction of travel through the aether and faster in the opposite direction.
If the Maxwell equations are valid and the simple classical addition of velocities applies then there should be a preferred reference frame, the frame of the stationary aether. The preferred reference frame would be considered the true zero point to which all velocity measurements could be referred.
Special relativity restored a principle of relativity in physics by maintaining that Maxwell's equations are correct but that classical velocity addition is wrong: there is no preferred reference frame. Special relativity brought back the interpretation that in all inertial reference frames the same physics is going on and there is no phenomenon that would allow an observer to pinpoint a zero point of velocity. Einstein preserved the principle of relativity by proposing that the laws of physics are the same regardless of the velocity of the observer. According to Einstein, whether you are in the hold of Galileo's ship or in the cargo bay of a space ship going at a large fraction of the speed of light the laws of physics will be the same.
Einstein's idea shared the same philosophy as Galileo's idea, both men believed that the laws of physics would be unaffected by motion at a constant velocity. In the years between Galileo and Einstein it was believed that it was the way velocities simply add to each other that preserved the laws of physics but Einstein adapted this simple concept to allow for Maxwell's equations.
Frames of reference, events and transformations
Before proceeding further with the analysis of relative motion the concepts of reference frames, events and transformations need to be defined more closely.
Physical observers are considered to be surrounded by a reference frame which is a set of coordinate axes in terms of which position or movement may be specified or with reference to which physical laws may be mathematically stated.
An event is something that happens independently of the reference frame that might be used to describe it. Turning on a light or the collision of two objects would constitute an event.
Suppose there is a small event, such as a light being turned on, that is at coordinates (x, y, z, t) in one reference frame. What coordinates would another observer, in another reference frame moving relative to the first at velocity v along the x axis, assign to the event?
What we are seeking is the relationship between the second observer's coordinates for the event, (x′, y′, z′, t′), and the first observer's coordinates for the event, (x, y, z, t). The coordinates refer to the positions and timings of the event that are measured by each observer and, for simplicity, the observers are arranged so that they are coincident at t = t′ = 0. According to Galilean Relativity:
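x′ = x − vt
y′ = y
z′ = z
t′ = t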
This set of equations is known as a Galilean coordinate transformation or Galilean transformation.
These equations show how the position of an event in one reference frame is related to the position of an event in another reference frame. But what happens if the event is something that is moving? How do velocities transform from one frame to another?
The calculation of velocities depends on Newton's formula v = dx/dt. The use of Newtonian physics to calculate velocities and other physical variables has led to Galilean Relativity being called Newtonian Relativity in the case where conclusions are drawn beyond simple changes in coordinates. The velocity transformations for the velocities in the three directions in space are, according to Galilean relativity:
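u′_x = u_x − v
u′_y = u_y
u′_z = u_z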
where u′_x, u′_y and u′_z are the velocities of a moving object in the three directions in space recorded by the second observer, u_x, u_y and u_z are the velocities recorded by the first observer, and v is the relative velocity of the observers. The minus sign in front of the v means the moving object is moving away from both observers.
This result is known as the classical velocity addition theorem and summarises the transformation of velocities between two Galilean frames of reference. It means that the velocities of projectiles must be determined relative to the velocity of the source and destination of the projectile. For example, if a sailor throws a stone at 10 km/hr from Galileo's ship which is moving towards shore at 5 km/hr then the stone will be moving at 15 km/hr when it hits the shore.
In Newtonian Relativity the geometry of space is assumed to be Euclidean and the measurement of time is assumed to be the same for all observers.
The derivation of the classical velocity addition theorem is as follows.
If the Galilean transformations are differentiated with respect to time:
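dx′/dt = dx/dt − v
dy′/dt = dy/dt
dz′/dt = dz/dt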
But in Galilean relativity t′ = t, so dt′ = dt, and therefore:
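dx′/dt′ = dx/dt − v
dy′/dt′ = dy/dt
dz′/dt′ = dz/dt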
If we write u′_x = dx′/dt′, u_x = dx/dt, etc., then:
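u′_x = u_x − v
u′_y = u_y
u′_z = u_z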
The postulates of special relativity
The previous section described transformations from one frame of reference to another, using the simple addition of velocities that was introduced in Galileo's time. These transformations are consistent with Galileo's main postulate, which was that the laws of physics would be the same for all inertial observers, so that no-one could tell who was at rest. Aether theories had threatened Galileo's postulate, because the aether would be at rest and observers could determine that they were at rest simply by measuring the speed of light in the direction of motion. Einstein preserved Galileo's fundamental postulate, that the laws of physics are the same in all inertial frames of reference. But to do so, he had to introduce a new postulate, that the speed of light would be the same for all observers. These postulates are listed below:
1. First postulate: the principle of relativity
Formally: the laws of physics are the same in all inertial frames of reference.
Informally: every physical theory should look the same mathematically to every inertial observer. Experiments in a physics laboratory in a spaceship or planet orbiting the sun and galaxy will give the same results, no matter how fast the laboratory is moving.
2. Second postulate: the invariance of the speed of light
Formally: the speed of light in free space is a constant in all inertial frames of reference.
Informally: the speed of light in a vacuum, commonly denoted c, is the same for all inertial observers; it is the same in all directions; and it does not depend on the velocity of the object emitting the light.
Using these postulates, Einstein was able to calculate how the observation of events depends upon the relative velocity of observers. He was then able to construct a theory of physics that led to predictions such as the equivalence of mass and energy and early quantum theory.
Einstein's formulation of the axioms of relativity is known as the electrodynamic approach to relativity. It has been superseded in most advanced textbooks by the space-time approach, in which the laws of physics themselves are due to symmetries in space-time and the constancy of the speed of light is a natural consequence of the existence of space-time. However, Einstein's approach is equally valid and represents a tour de force of deductive reasoning; one that provided the insights required for the modern treatment of the subject.
Einstein's Relativity - the electrodynamic approach
Einstein asked how the lengths and times that are measured by the observers might need to vary if both observers found that the speed of light was constant. He looked at the formulae for the velocity of light that would be used by the two observers, c = x/t and c = x′/t′, and asked what constants would need to be introduced to keep the measurement of the speed of light at the same value even though the relative motion of the observers meant that the x′ axis was continually advancing. His working is shown in detail in the appendix. The result of this calculation is the Lorentz Transformation Equations:
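x′ = γ(x − vt)
y′ = y
z′ = z
t′ = γ(t − vx/c²)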
where the constant γ = 1/√(1 − v²/c²). These equations apply to any two observers in relative motion but note that the sign within the brackets changes according to the direction of the velocity - see the appendix.
The Lorentz Transformation is the equivalent of the Galilean Transformation with the added assumption that everyone measures the same velocity for the speed of light no matter how fast they are travelling. The speed of light is a ratio of distance to time (ie: metres per second) so for everyone to measure the same value for the speed of light the length of measuring rods, the length of space between light sources and receivers and the number of ticks of clocks must dynamically differ between the observers. So long as lengths and time intervals vary with the relative velocity of two observers (v) as described by the Lorentz Transformation the observers can both calculate the speed of light as the ratio of the distance travelled by a light ray divided by the time taken to travel this distance and get the same value.
Einstein's approach is "electrodynamic" because it assumes, on the basis of Maxwell's equations, that light travels at a constant velocity. As mentioned above, the idea of a universal constant velocity is strange because velocity is a ratio of distance to time. Do the Lorentz Transformation Equations hide a deeper truth about space and time? Einstein himself gives one of the clearest descriptions of how the Lorentz Transformation equations are actually describing properties of space and time itself. His general reasoning is given below.
If the equations are combined they satisfy the relation:
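x′² − c²t′² = x² − c²t²     (1)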
Einstein (1920) describes how this can be extended to describe movement in any direction in space:
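x′² + y′² + z′² − c²t′² = x² + y² + z² − c²t²     (2)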
Equation (2) is a geometrical postulate about the relationship between lengths and times in the universe. It suggests that there is a constant s such that:
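s² = x² + y² + z² − c²t²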
This equation was recognised by Minkowski as an extension of Pythagoras' Theorem (ie: h² = x² + y²), such extensions being well known in early twentieth century mathematics. What the Lorentz Transformation is telling us is that the universe is a four dimensional spacetime and as a result there is no need for any "aether". (See Einstein 1920, appendices, for Einstein's discussion of how the Lorentz Transformation suggests a four dimensional universe but be cautioned that "imaginary time" has now been replaced by the use of "metric tensors").
Einstein's analysis shows that the x-axis and time axis of two observers in relative motion do not overlie each other. The equation relating one observer's time to the other observer's time shows that this relationship changes with distance along the x-axis, ie:
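t′ = γ(t − vx/c²)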
This means that the whole idea of "frames of reference" needs to be re-visited to allow for the way that axes no longer overlie each other.
- Einstein, A. (1920). Relativity. The Special and General Theory. Methuen & Co Ltd 1920. Written December, 1916. Robert W. Lawson (Authorised translation). http://www.bartleby.com/173/
Inertial reference frames
The Lorentz Transformation for time involves a component (vx/c²) which results in time measurements being different along the x-axis of relatively moving observers. This means that the old idea of a frame of reference that simply involves three space dimensions, with a time that is in common between all of the observers, no longer applies. To compare measurements between observers the concept of a "reference frame" must be extended to include the observer's clocks.
An inertial reference frame is a conceptual, three-dimensional latticework of measuring rods set at right angles to each other with clocks at every point that are synchronised with each other (see below for a full definition). An object that is part of, or attached to, an inertial frame of reference is defined as an object which does not disturb the synchronisation of the clocks and remains at a constant spatial position within the reference frame. The inertial frame of reference that has a moving, non-rotating body attached to it is known as the inertial rest frame for that body. An inertial reference frame that is a rest frame for a particular body moves with the body when observed by observers in relative motion.
This type of reference frame became known as an "inertial" frame of reference because, as will be seen later in this book, each system of objects that are co-moving according to Newton's law of inertia (without rotation, gravitational fields or forces acting) have a common rest frame, with clocks that differ in synchronisation and rods that differ in length, from those in other, relatively moving, rest frames.
There are many other definitions of an "inertial reference frame" but most of these, such as "an inertial reference frame is a reference frame in which Newton's First Law is valid" do not provide essential details about how the coordinates are arranged and/or represent deductions from more fundamental definitions.
The following definition by Blandford and Thorne(2004) is a fairly complete summary of what working physicists mean by an inertial frame of reference:
"An inertial reference frame is a (conceptual) three-dimensional latticework of measuring rods and clocks with the following properties: (i ) The latticework moves freely through spacetime (i.e., no forces act on it), and is attached to gyroscopes so it does not rotate with respect to distant, celestial objects. (ii ) The measuring rods form an orthogonal lattice and the length intervals marked on them are uniform when compared to, e.g., the wavelength of light emitted by some standard type of atom or molecule; and therefore the rods form an orthonormal, Cartesian coordinate system with the coordinate x measured along one axis, y along another, and z along the third. (iii ) The clocks are densely packed throughout the latticework so that, ideally, there is a separate clock at every lattice point. (iv ) The clocks tick uniformly when compared, e.g., to the period of the light emitted by some standard type of atom or molecule; i.e., they are ideal clocks. (v) The clocks are synchronized by the Einstein synchronization process: If a pulse of light, emitted by one of the clocks, bounces off a mirror attached to another and then returns, the time of bounce as measured by the clock that does the bouncing is the average of the times of emission and reception as measured by the emitting and receiving clock: .¹
¹For a deeper discussion of the nature of ideal clocks and ideal measuring rods see, e.g., pp. 23-29 and 395-399 of Misner, Thorne, and Wheeler (1973)."
Special Relativity demonstrates that the inertial rest frames of objects that are moving relative to each other do not overlay one another. Each observer sees the other, moving observer's, inertial frame of reference as distorted. This discovery is the essence of Special Relativity and means that the transformation of coordinates and other measurements between moving observers is complicated. It will be discussed in depth below.
- Blandford, R.D. and Thorne, K.S.(2004). Applications of Classical Physics. California Institute of Technology. See: http://www.pma.caltech.edu/Courses/ph136/yr2004/
09/30 – 10/04
Solar System: Properties of the Planets- Students will explore the different planets’ profiles by taking a close look at the different properties of each of the planets, including mass, distance from the sun, temperature, revolution period, etc. and determine where there are patterns and correlations between the different planets. Students will research the different types of technology being used to learn about space.
Homework due on Friday, the 4th.
Assessment: We’ll have the unit 3 assessment on Friday, the 4th.
The importance of Religion in Ancient Civilization Culture, part I: Students will be able to identify general beliefs from the religions of Buddhism, Hinduism, Christianity, Judaism and Islam and give examples of how they are represented through traditions, holidays, sacred texts and places.
Finish Reading: “El universo y la creación del mundo” (The universe and the creation of the world).
Unit Question: What do we learn when we observe nature?
Vocabulary: Astrónomos (astronomers), ciencia ficción (science fiction), colapsan (they collapse), colisionar (to collide), compacto (compact), galaxia (galaxy).
Language Arts: Nouns and their plurals.
Verbs: ‘Ser’ and ‘Ir’.
The writing process at Warden Park Primary Academy is planned around writing for a real purpose and knowing your audience. The children will be taught how to write by thinking about their audience and how we want them to feel. They will use real-life contexts; their own experiences; topical issues; high quality texts; immersive experiences; trips and visitors and also their own passions and interests to drive their writing.
The children will have the opportunity to use technology to enhance their digital literacy skills and to provide a range of ways of sharing their words with their audience. They will be taught to use aspirational and engaging vocabulary as well as the mechanics of handwriting and spelling to aid fluency. Writing goes hand in hand with reading; your child’s classroom will be filled with high-quality fiction, non-fiction and poetry and they will also have the opportunity to visit the library regularly too.
Writing at primary school
Learning to write is one of the most important things that a child at primary school will learn. Children use their writing in almost all other subjects of the curriculum. Good writing also gives children a voice to share their ideas with the world.
For a child, learning to write can be a tricky business, not least because good writing involves handwriting, spelling, grammar and punctuation, not to mention deciding what we want to write and who we are writing for.
Writing in the National Curriculum in England
Early Years Foundation Stage (EYFS)
In the Nursery, children are given lots of opportunities to develop their mark-making inside and outside. Adults will model writing for a purpose. Following a 'stage not age' approach, children will start to write their names and match the spoken sound with the written letter. In Reception, children will continue to be taught correct letter formation. They will be encouraged to use their knowledge of phonics to write words in ways which match their spoken sounds. By the end of the year, they will be expected to write simple sentences which can be read by themselves and others.
Key Stage 1 (Years 1 and 2)
In Year 1, children will be taught to write sentences by saying out loud what they are going to write about, putting several sentences together, and re-reading their writing to check it makes sense. They will also be expected to discuss what they have written and to read it aloud.
In Year 2, children learn to write for a range of purposes, including stories, information texts and poetry. Children are encouraged to plan what they are going to write and to read through their writing to make corrections and improvements.
Key stage 2 (Years 3 to 6)
In Years 3 and 4, children are encouraged to draft and write by talking about their writing. They will continue to learn how to organise paragraphs and, if they are writing non-fiction, to use headings. When they are writing stories, they will learn to use settings, characters and plots. Children in Years 3 and 4 will be expected to use what they know about grammar in their writing and to read through what they have written, to find ways to improve it.
In Years 5 and 6, children will continue to develop their skills in planning, drafting and reviewing what they have written. Children learn to identify the audience for and purpose of their writing. They will be expected to use grammar appropriately. In non-fiction writing, children will use headings, bullet points and other ways to organise their writing. They will be expected to describe settings, characters and to use dialogue in their stories.
Encouraging writing at home
7 Ideas to Encourage Writing
- Build up the strength in your child's hands and fingers, e.g. playdough, cutting, messy play
- Ditch the pen and paper - go outside and make writing big and messy!
- Play with magnet letters, e.g. find the same letter, be a letter detective and find letters in your story books.
- Encourage your child to write their own books.
- Ask for help with the shopping list.
- Make an alphabet book.
- Write to friends and relatives.
A new skeleton discovered in the submerged caves at Tulum sheds new light on the earliest settlers of Mexico, according to a study published in the open-access journal PLOS ONE by Wolfgang Stinnesbeck from Universität Heidelberg, Germany.
Humans have been living in Mexico’s Yucatán Peninsula since at least the Late Pleistocene (126,000-11,700 years ago). Much of what we know about these earliest settlers of Mexico comes from nine well-preserved human skeletons found in the submerged caves and sinkholes near Tulum in Quintana Roo, Mexico.
Here, Stinnesbeck and colleagues describe a new, 30-percent-complete skeleton, 'Chan Hol 3', found in the Chan Hol underwater cave within the Tulum cave system. The authors used a non-damaging dating method and took craniometric measurements, then compared the skull to 452 skulls from across North, Central, and South America, as well as to other skulls found in the Tulum caves.
The analysis showed Chan Hol 3 was likely a woman, approximately 30 years old at her time of death, who lived at least 9,900 years ago. Her skull falls into a mesocephalic pattern (neither especially broad nor narrow, with broad cheekbones and a flat forehead), like the three other skulls from the Tulum caves used for comparison; all the Tulum cave skulls also had tooth caries, potentially indicating a higher-sugar diet. This contrasts with most of the other known American crania in a similar age range, which tend to be long and narrow and show worn teeth (suggesting hard foods in their diet) without cavities.
Though limited by the relative lack of archaeological evidence for early settlers across the Americas, the authors argue that these cranial patterns suggest the presence of at least two morphologically different human groups living separately in Mexico during this shift from the Pleistocene to the Holocene (our current epoch).
The authors add: “The Tulúm skeletons indicate that either more than one group of people reached the American continent first, or that there was enough time for a small group of early settlers who lived isolated on the Yucatán peninsula to develop a different skull morphology. The early settlement history of America thus seems to be more complex and, moreover, to have occurred at an earlier time than previously assumed.”
Open Educational Resources (OER)
There are several challenges related to the selection and implementation of open educational resources (OER). First, districts find it difficult to build educator awareness of OER and to help schools and teachers understand how to best select, use, and implement OER. Second, it is challenging for educators to easily find appropriate OER aligned with their curriculum and targeted to their instructional needs, and to assess the quality of OER. Additionally, schools must overcome technology limitations that inhibit OER use and surfacing. Finally, districts are grappling with how to communicate the value of OER to teachers and families and to scale the use of OER across their district as they adopt it.
Educators responded that this challenge is widespread
Respondents reported that their schools or districts have made progress on this challenge
Ideas from the Field
- With seemingly endless open resources, many school and district leaders are slowly building approved libraries of worthwhile resources, often in an educator-led fashion.
- Another approach is to build OER assets oneself, whether that's lessons, videos, or larger curricula. This tactic can mean using grant funding and local educators to develop the kinds of resources that are needed but currently difficult to find, and then pursuing a thoughtful process for sharing and scaling.
Related Innovation Portfolios
Building a culture where teachers are the producers of their own content: hiring a videographer and allocating funds to create educator-originated curriculum resources (Mineola Union Free School District)
Open Education Resources (OER) adoption is a game-changer: transitioning several core curricula for math, science, social studies and ELA (Sunnyside Unified School District)
In the Words of District Staff
“I think the idea that we're selling is integrity to the resources, not fidelity to the resources. If [teachers are] going to replace something, then they have to show that what they're replacing it with has an equal level of rigor to support the standards being taught in that set of resources.”
OER Commons - OER Commons is a public digital library of open educational resources. Explore, create, and collaborate with educators around the world to improve curriculum.
OER in K-12 Education - What does the research tell us? - From CCSSO, this blog unpacks the research about what we understand about OER and where we need to do more work to fill in knowledge gaps that will keep the open movement on the right track.
Open Education - From the US Department of Education’s Office of Education Technology, this page describes what is involved in creating an open education ecosystem.
Beyond access: Using open educational resources to improve student learning - From the Hewlett Foundation, this blog describes why school and district goals of effective teaching and learning should drive OER adoption.
Mohammad Yasir is a physics graduate from the University of Delhi and currently enrolled in the master's programme at IIT.
An inclined plane is a flat surface that is raised on one end and thus "inclined," or tilted at an angle with respect to the ground. Inclined planes come in particularly handy for transporting heavy objects to a raised surface, since it is naturally easier to slide a heavy object up a plane than to lift it over a flight of stairs.
However, we are concerned with a physicist's view of the inclined plane, which means that by the end of this article, you'll be able to explain all the forces an object on an inclined plane feels. What's more, you'll also be able to calculate the magnitudes of these forces, given a set of initial values.
How do we Tackle Inclined Plane Motion?
To the uninitiated, inclined plane motion can feel like a nightmare. Worry not, though. As someone who's been playing around with the concepts of physics for the past seven years, I can confirm that with the proper knowledge, it is more a peaceful dream than a torrid nightmare. That said, I do advise you not to let overconfidence or a rushed calculation ruin your solution.
To tackle inclined plane motion, you must first have a problem statement at hand. To make things simpler, I will start with a situation where frictional forces can be ignored. As we will soon see, the opposite scenario will then be a quite simple matter of adding a couple of terms. For now, here is your problem statement:
Calculate the acceleration of an object of mass m, sliding down an inclined plane of angle θ. Consider the inclined plane to be smooth and thus, frictional forces to be zero.
In the image I attached above, let us try to figure out the forces acting on the mass m. These will be:
- The gravitational force mg, acting downwards on the mass m.
- A component of gravitational force parallel to the inclined plane.
- A component of gravitational force perpendicular to the inclined plane.
- The normal reaction exerted by the inclined plane on the mass.
Calculating the Components
As you can see in the initial diagram, the object is moving in a direction parallel to the inclined plane, which means our actual job is to find the acceleration (or equivalently, the force) acting in this direction. Naturally, since there are no other applied forces, this force will be calculated in terms of the components of the gravitational force.
To that end, here is a little bit of construction I did on our drawing.
In this figure, I have drawn a normal on OB, and represented it by AD. This makes the ∠BAD = 90º. Further, since AC is the direction of the gravitational force, it is already normal to OE. Now, take a look at the following calculations.
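In outline: since AD ⊥ OB and AC ⊥ OE, the angle between AD and AC must equal the angle between OB and OE, because rotating a pair of lines through the same right angle leaves the angle between them unchanged. The angle between OB (the incline) and OE (the horizontal) is θ, and therefore

∠CAD = ∠BOE = θ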
The calculations above might seem daunting at first. But let me remind you to focus on the outcome. We have succeeded, without even diving into any particulars of the problem, in proving that ∠CAD = θ. This serves two purposes:
- Regardless of the problem statement, we have proved that the angle between the vertical and the normal to the inclined plane is equal to the angle of the incline itself. We can reuse this result in all inclined plane problems.
- Since we now have the value of ∠CAD, we can now find the components of gravitational force experienced by the mass m.
Indeed, if you recall vector resolution, you can easily see that the perpendicular and parallel components of the gravitational force on our mass m can be written as follows:
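F_perpendicular = mg cos θ
F_parallel = mg sin θ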
So, What Next?
The short answer? Nothing! The honest answer? Everything!
As far as our initial problem statement goes, we have already done what we set out to do. The problem asked us what the acceleration of an object sliding down a smooth incline would be and presto! We got it.
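Explicitly, equating the parallel component of gravity to ma gives ma = mg sin θ, so the acceleration down a smooth incline is a = g sin θ.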
Hold Your Horses! What About Friction?
What we calculated above was a highly idealistic scenario where frictional losses could be ignored. But what if the inclined plane weren't smooth? As it turns out, that is also quite simple. In fact, let us rephrase our problem statement and ask this:
Calculate the acceleration of an object of mass m, sliding down an inclined plane with coefficient of friction μ and inclination θ.
As usual, the first step is to draw a free body diagram and represent the forces acting on it. Since the object tends to slide down the incline, the direction of frictional force is up the incline.
Before we can continue, we must find the magnitude of this frictional force. Concepts from mechanics have already told us that frictional force f = μN, where N is the normal reaction. Thus, our job is to find N. And that is all too easy using Newton's second law. Equating forces perpendicular to the incline to zero gives us the following result:
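N − mg cos θ = 0, and therefore N = mg cos θ, so that f = μN = μmg cos θ.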
With that done, all that is left to do is apply the second law once more. Only this time around, we'll apply it parallel to the inclined plane. Considering acceleration towards the bottom of the inclined plane to be positive, we have:
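ma = mg sin θ − μmg cos θ, which gives a = g(sin θ − μ cos θ).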
And We're Done
There we have it. The above equation for acceleration is a general equation of motion for an object sliding down a rough inclined plane of inclination θ and with coefficient of friction μ. In case the inclined plane is smooth, we can simply substitute μ = 0 and arrive at the result we obtained earlier.
Despite the lengthy nature of this article, the concept of inclined plane motion is quite easy to handle. In fact, the sole reason I let this article go this long is that I wanted to tackle the problem in detail. However, as you can see, the final result is worth the effort. We have ourselves a beautiful equation that you can use to solve inclined plane motion regardless of frictional losses.
To conclude, here is a summary of how to solve problems like these:
- Draw the free body diagram.
- Resolve forces along directions that can prove useful. For example, when there is circular motion, resolve forces along and perpendicular to the radius.
- Apply Newton's second law in each direction to equate the acceleration of the mass with the resultant forces. This will give us the equation of motion.
- Use the equation of motion obtained above to calculate further if required.
© 2022 Mohammad Yasir
Tetanus is a serious bacterial infection that affects the nervous system. It is caused by a bacterium that lives in the environment. One is at risk of getting tetanus after sustaining a deep or very dirty wound. Tetanus illness is now rarely seen because of the tetanus vaccine; however, the vaccine needs to be given every ten years, since antibody levels eventually diminish. In order to establish whether someone is immune to the bacterium that causes tetanus, a tetanus titer test can be performed. This test measures the amount of tetanus IgG in the blood. From this, immunity and the need for a booster vaccination can be determined.
This test may be performed when a wound presents conditions such as:
• Redness or discoloration
• Pain, tenderness
• Deep wound
• Dirty wound
Also Known As: Tetanus Toxin Antibodies
The Tetanus Titer test has no special requirements.
Estimated Time Taken
Turnaround time for this test is typically 2-3 business days.
We learnt about bugs. In the Summer term we will go for a Bug Hunt in the school grounds. We learned the names of these creatures and talked about how they moved. We used verbs and adverbs.
The children were learning about hedgehogs from their class teachers, and we learned about hedgehogs too. Did you know hedgehogs can climb and swim?
Iga and Alan taught the teacher the Polish for hedgehog: jeż. We also learned about other animals: their names, where they live, what their babies are called and how they move.
As we were doing these activities we were learning vocabulary and grammar. As we went from Junior Infants, through Senior Infants and on into First Class, we did more work on grammar, particularly verbs and adverbs, tenses, ‘time markers’ and prepositions. You can read about all the activities we have done in EAL and Language HERE.
Iowa Legal Update
2021 Lesson 13: Diverse Communities – Part 2
This is a part 2 of a two-part lesson focused on diverse communities, cultural competence, bias prevention, and a history of the American civil rights movement.
Section three provides a summary of certain noteworthy people and events that are part of the history of the American civil rights movement, including: slavery; abolition; reconstruction; the Jim Crow era; lynchings; school segregation; the Great Migration; World War I and the Red Summer; mass violence against Black Americans; civil rights protests of the 1960s; Rosa Parks; Dr. Martin Luther King Jr.; Malcolm X; the Little Rock Nine; the Freedom Riders; and other people and events.
Section four examines paths for self-evaluation and engaging with diverse communities. It explains how individuals can examine how culture has influenced their own perspectives and that of others. It explains how law enforcement agencies can practice cultural competence and how an agency might use a racial equity toolkit.
Water is an unavoidable aspect of our lives. It is an essential element in our day-to-day life; without it, our very existence would be jeopardized. Polluted water can be very harmful, not only to the environment but also to human health.
Process of Effluent Water Treatment Plant
Water treatment is extremely essential in order to ensure cleaner and safer water. In this article, you shall know about the process of the Effluent Treatment Plant.
The process of an effluent water treatment plant is somewhat complex. Let us look at what the process involves:
Preliminary Treatment:
The goal of this step is to physically separate big pollutants and solid materials, for example fabric, paper, polymers, and wood planks. This level/process includes the following:
Screening is the first unit operation in a wastewater treatment plant. A screen has regular apertures and its objective is to remove big floating solids, separating them from the water so that the required amount of water can be collected easily.
Sedimentation: In this process, water moves slowly, allowing heavy particles to settle to the bottom. The particles that settle at the bottom of a container are referred to as sludge. This yields cleaner water, as the contaminants and dust particles remain at the bottom of the container.
Clarifiers are tanks with mechanical mechanisms for continuously removing sediments that have been accumulated due to sedimentation before biological treatment.
Primary Treatment:
Its main goal is to remove floating and settleable materials, such as suspended solids and organic and inorganic compounds, from the water. Both physical and chemical procedures are used in this treatment. It includes the following items:
Coagulation is a process that involves mixing untreated water with liquid aluminum sulfate. After mixing, small dirt particles cling together as a result. These clusters of particles then combine to form larger, heavier particles that are easily removed via settling and filtration.
Flocculation is a process that does not result in the neutralization of charge. It entails aggregating destabilized particles into large aggregates so that they may be easily removed from the water.
The pH of the effluent should be between 5.5 and 9.0, so pH neutralization is used to adjust the pH of the wastewater. For acidic waste (low pH), bases are used to raise the pH; for alkali waste (high pH), acids are used to lower it.
Sedimentation tanks restrict the flow of water to the point where organic materials sink to the bottom of the tank; they also contain equipment for removing floating solids and grease from the surface.
Secondary or Biological Treatment:
The goal of this treatment is to remove suspended particles and leftover organic pollutants from the effluent of the primary treatment. Biological and chemical mechanisms are involved in this stage.
An aerated lagoon is a treatment pond with artificial aeration to promote the biological oxidation of the wastewater.
Activated Sludge Process
The activated sludge process uses air and a biological floc composed of bacteria to treat industrial wastewater, helping to ensure germ-free water.
Also known as sprinkling filters, trickling filters are extensively used for the biological treatment of home sewage and industrial wastewater.
The purpose of tertiary treatment is to provide a final treatment stage to raise the effluent quality to the desired level before it is reused, recycled, or discharged to the environment.
The main objective of the equalization tank is to equalize the raw effluent from various processing units. The wastewater is deposited in a mixed effluent tank and sent to an aeration tank that also serves as an equalization tank. The effluent is homogenized in the floating aerator before being pumped to the neutralization tank for further treatment.
Water is disinfected before entering the distribution system. Chlorine is used to disinfect and decontaminate the water, ensuring it is germ-free and pure and protecting the health of the people.
Filtration is the technique of removing particles from water by passing it through a filter built up of layers of sand and gravel. It is a very important technique for ensuring cleaner water. These filters must be cleaned regularly, which is done by applying water to flush them out.
Importance of an Effluent Treatment Plant
An effluent treatment plant ensures that toxic and contaminated industrial water is treated and made reusable before being released back into the environment. Without this treatment, people would be unable to obtain clean, usable water for household or other purposes. The treatment lets people use water that is germ-free, clean and safe.
It is necessary for ensuring a healthier environment and good health for the people, helping to remove harmful chemicals and prevent disease. It also protects the ecological balance of the environment by making industrial and factory wastewater pollutant-free before disposal.
Why choose ABM Water Company for an Effluent Treatment Plant?
ABM Water Company is one of the best water treatment companies in Bangladesh. We offer consultancy for water treatment, and we are well experienced in complete engineering for designing any type of water treatment plant, with processes such as clarification, filtration, disinfection, demineralization and membrane processes.
We have an experienced, sincere and hardworking team of professionals committed to meeting all client requirements. Our technically advanced systems are custom-engineered to suit any application, providing the most economical and efficient solution on the market.
We have pioneered some of the most cutting-edge water treatment technologies, including membrane-based technology, ultraviolet disinfection systems, advanced micro-filtration, media-based filtration and chlorination.
Our aim is to conserve and optimally utilize water resources. We take these issues as a serious challenge and continuously strive to fulfill our clients' expectations to the fullest. You will undoubtedly receive the best service by purchasing from our company.
It is important to teach your children about eye health and safety from a young age. This includes awareness about how your overall health habits affect your eyes and vision as well as how to keep your eyes safe from injury and infection. Starting off with good eye habits at a young age will help to create a lifestyle that will promote eye and vision health for a lifetime.
10 Eye Health Tips for All:
- Eat right. Eating a balanced diet full of fresh fruits and vegetables (especially leafy greens such as kale, spinach and broccoli) as well as the omega-3s found in fish, such as salmon, tuna and halibut, helps your eyes get the proper nutrients they need to function at their best.
- Exercise. An active lifestyle has been shown to reduce the risk of developing a number of eye diseases as well as diabetes - a disease which can result in blindness.
- Don’t Smoke. Smoking has been linked to increased risk of a number of vision threatening eye diseases.
- Use Eye Protection. Protect your eyes when engaging in activities such as sports (especially those that are high impact or involve flying objects), using chemicals or power tools or gardening. Speak to your eye doctor about the best protection for your hobbies to prevent serious eye injuries.
- Wear Shades. Protect your eyes from the sun by wearing 100% UV blocking sunglasses and a hat with a brim when you go outside. Never look directly at the sun.
- Be Aware: If you notice any changes in your vision, always get it checked out. Tell a parent or teacher if your eyes hurt or if your vision is blurry, jumping or double, or if you see spots or anything out of the ordinary. Parents, keep an eye on your child: children don't always complain about problems seeing because they don't know what normal vision looks like. Signs of excessive blinking, rubbing, unusual head tilt, or an excessively close viewing distance are worth a visit to the eye doctor.
- Don’t Rub! If you feel something in your eye, don’t rub it - it could make it worse or scratch your eyeball. Ask an adult to help you wash the object out of your eye.
- Give Your Eyes a Break. With the digital age, a new concern is kids' posture when looking at screens such as tablets or mobile phones. Prevent your child from holding these digital devices too close to their eyes. The Harmon distance - the distance from your chin to your elbow - is a comfortable viewing distance and posture. There is concern that poor postural habits may warp a child's growing body. Also, when looking at a TV, mobile or computer screen for long periods of time, follow the 20-20-20 rule: every 20 minutes, take a 20-second break by looking at something 20 feet away.
- Create Eye Safe Habits. Always carry pointed objects such as scissors, knives or pencils with the sharp end pointing down. Never shoot objects (including toys) or spray things at others, especially in the direction of the head. Be careful when using sprays that they are pointed away from the eyes.
- Keep Them Clean. Always wash your hands before you touch your eyes and follow your eye doctor's instructions carefully for proper contact lens hygiene. If you wear makeup, make sure to throw away any old makeup and don't share with others.
By teaching your children basic eye care and safety habits you are instilling in them the importance of taking care of their precious eye sight. As a parent, always encourage and remind your children to follow these tips and set a good example by doing them yourself.
Of course, don't forget the most important tip of all - get every member of your family's eyes checked regularly by a qualified eye doctor! Remember, school eye screenings and screenings at a pediatrician's office are NOT eye exams. They check only visual acuity and can miss the health problems, focusing issues and binocularity issues that cause health and vision problems.
No plate tectonics, no high mountain ridges on Mars
It is remarkable that no plate tectonics (that is, horizontal movement of the crust under the influence of magma) have been observed on Mars. As a result, Mars has neither high mountain ridges nor deep troughs.
Where the Martian crust was thin enough, high volcanoes emerged where magma escaped. Lava also penetrated through fissures in the planet’s crust. There is very little volcanic activity on Mars, so we do not expect to see an eruption any time soon. We cannot, however, exclude hydrothermal activity on the planet.
Greenhouse Effect on Mars is far smaller than on Earth
The absence of plate tectonics also has consequences for the planet's atmosphere. Chalky rocks absorb carbon dioxide from the atmosphere on Mars, as they do on Earth. But while active volcanoes on Earth restore this carbon dioxide to the atmosphere, on Mars this does not occur, and there are not enough greenhouse-active compounds in the Martian atmosphere to produce a noteworthy greenhouse effect. So the greenhouse effect on Mars is far smaller than on Earth.
In deciding to use artificial intelligence, the key question for administrators is a comparative one.
Algorithms, at their most basic level, consist of a series of steps used to reach a result. Cooking recipes are algorithms, mathematical equations are algorithms, and computer programs are algorithms.
Today, advanced machine-learning algorithms can process large volumes of data and generate highly accurate predictions. They increasingly drive internet search engines, retail marketing campaigns, weather forecasts, and precision medicine. Government agencies have even begun to adopt machine-learning algorithms for adjudicatory and enforcement actions to enhance public administration.
These uses have also garnered considerable criticism. Critics of machine-learning algorithms have suggested that they are highly complex, inscrutable, prone to bias, and unaccountable. But any realistic assessment of these digital algorithms must acknowledge that government already relies on algorithms of arguably greater complexity and potential for abuse: those that undergird human decision-making. The algorithms underlying human decision-making are already highly complex, inscrutable, prone to bias, and too often unaccountable.
What are these human algorithms? The human brain itself operates through complex neural networks that have inspired developments in machine-learning algorithms. And when humans make collective decisions, especially in government, they operate via algorithms too—many reflected in legislative, judicial, and administrative procedures.
But these human algorithms can often fail. On an individual level, human decision-making falls prey to a multitude of limitations: memory failures, fatigue, availability bias, confirmation bias, and implicit racial and gender biases, among others. For example, working human memory can process only a handful of different variables at any given time.
On an organizational level, humans are prone to groupthink and social loafing, along with other dysfunctions. Recent governmental failures—from inadequate responses to COVID-19 to a rushed exit of U.S. forces from Afghanistan—join a long list of group decision-making failures throughout history, such as the Bay of Pigs invasion and the Challenger shuttle explosion. One researcher has even estimated that approximately half of all organizational decisions end in failure.
For these reasons, human decision-makers may well be even more prone to suboptimal and inconsistent decisions than their machine-learning counterparts—at least for a nontrivial set of important tasks.
Government agencies ought to aspire to speed, accuracy, and consistency in implementing all their policies and programs. And machine-learning algorithms and automated systems promise to deliver the greater capacity needed for making decisions that are more accurate, consistent, and prompt. As a result, digital algorithms could clear backlogs and reduce delays that arise when governmental decisions depend on human decision-makers. The Internal Revenue Service, for example, estimated in the first quarter of 2022 that it still faced a backlog of as many as 35 million tax returns from 2021—a problem that it is seeking to solve by hiring up to 10,000 more employees to sift through paperwork.
Compared against the evident limitations in human-driven governmental performance, not to mention low levels of public confidence in current governmental systems, machine-learning algorithms appear to be particularly attractive substitutes. They can process large amounts of data to yield surprisingly accurate decisions. Given the high volume of data available to governments, in principle administrators could use these algorithms in a wide range of settings—thereby helping to overcome existing constraints on personnel and budget resources.
In the future, government agencies must choose between maintaining a status quo driven by human algorithms and moving toward a potentially more promising future that relies on digital algorithms.
To ensure that governments can make smarter decisions about when to rely on digital algorithms to automate administrative tasks, public officials must first consider whether a particular use of digital algorithms would likely satisfy basic preconditions needed for these algorithms to succeed. The goals for algorithmic tools need to be clear, such that the social objectives of the contemplated task, and the algorithm’s performance in completing it, can be specified precisely. Administrators must also have relevant and up-to-date data available to support rigorous algorithmic analysis.
But beyond just seeing if these preconditions are met, it will also be important for government decision-makers to validate machine-learning algorithms and other digital systems to ensure that they indeed make improvements over the status quo. They also need to give serious consideration to risks that might be associated with digital algorithms. Administrators must also ensure adequate planning for accountability, proper procurement of private contractor services, and appropriate opportunities for public participation in the design, development, and ongoing oversight of digital algorithmic tools.
Ultimately, when evaluating machine learning in governmental settings, public administrators must act carefully and thoughtfully. But they need not feel that they must guarantee that digital systems will be perfect. Any anticipated shortcomings of artificial intelligence in the public sector must be placed in proper perspective—with digital algorithms compared side-by-side with existing processes that are based on less-than-perfect human algorithms.
This essay draws on the authors’ article, Algorithm vs. Algorithm, in the Duke Law Journal. The opinions set forth in this essay are solely those of the authors and do not represent the views of any other person or institution.
The British slave trade was then dominated, in turn, by merchants from London, Bristol and Liverpool. Until 1725, London led the way, but between 1725 and 1740 Bristol established itself as the major English slaving port. By the 1730s, an average of 39 slave ships left Bristol each year, and between 1739 and 1748 there were 245 slave voyages from Bristol (about 37.6% of the whole British trade). In the last years of the British slave trade, however, Bristol’s share fell to 62 voyages, a mere 3.3% of the trade – compared to Liverpool’s 62% (1,605 voyages).
Spotting commercial potential
Of course, Bristol was a well-established port long before its merchants turned their attention to West Africa. Its location was a great advantage when merchant adventurers began their explorations in the north Atlantic, and along with other western port cities in the 17th and 18th centuries, it benefited greatly from the expansion of transatlantic commerce – of all kinds.
The city’s well-positioned merchants were quick to spot the commercial potential that opened up when the English established their colonies in North America and the West Indies. They were prominent in supplying enslaved Africans to St Kitts (along with Barbados, the pioneering English sugar colony in the Caribbean) and to Virginia (for the tobacco plantations). Indeed, by the mid-17th century, the market in the Virginia tobacco trade had effectively been cornered by merchants from Bristol and London. But by the 1720s, Bristol merchants were shipping twice as many Africans to Virginia as their London rivals.
They were able to do this because they had forged good trading relations in specific regions of West Africa. Bristol men had important ties, first, in Calabar and then in Bonny (both now part of south-east Nigeria) and in Angola. There, they formed liaisons with local trader dynasties that survived for generations and which were secured by bonds of trust. Such trust was clearly vital in commercial dealings spread over huge distances and over such protracted periods.
Bristol thrived as a city because of its slave trading success. Its urban development – the growth of its handsome physical fabric, similar in many respects to that of the French cities of Nantes and Bordeaux – was directly linked to the slave trade and to Bristol’s general trade with the slave colonies. Similarly, there were important links between Bristol’s slave trade and the growth of the city’s industries, notably copper-smelting, sugar-refining and glass-making.
Ultimately, however, it was Bristol’s relatively less-developed industrial hinterland and its smaller regional population that were to play a part in the city’s eventual relegation behind Liverpool as the nation’s major slaving port. Slower local industrial and population growth meant that Bristol slipped behind Liverpool from the 1740s onwards.
Thereafter, though still a thriving port city, Bristol was no longer the nation’s premier slaving port. But its involvement in, and benefits from, the slave trade can still be seen in all corners of the city’s surviving 18th-century physical fabric.
“Literacy is not a luxury, it is a right and a responsibility. If our world is to meet the challenges of the twenty-first century we must harness the energy and creativity of all our citizens.”
— President Clinton on International Literacy Day, September 8, 1994
OperationREAD improves schools' access to reading materials and encourages students to read outside of the school curriculum as a way of broadening student horizons. The program goes beyond traditional literacy programs that center on collecting, shipping and donating books by working with community partners to implement literacy programming that supports and enhances the learning experience.
- Improve schools' access to books
- Encourage students to read outside of the school curriculum in order to broaden student horizons
- Engage technology in improving literacy and the quality of literacy programming
“No skill is more crucial to the future of a child, or to a democratic and prosperous society, than literacy.”
— Los Angeles Times
The Question: We saw many moths and butterflies in La Selva Verde in Costa Rica. This big one has a strange structure! Can you help us with finding a scientific name?
Submitted by: Fred, Belgium
(click on photos and graphics to expand)
The Short Answer: This is Titaea tamerlan, one of the group of moths known as silk moths (Saturniidae). It is found in Central and South America, from Mexico to Peru. The projections at the bottom of the wings identify this as a male. The family Saturniidae contains the world’s largest moths, and Titaea tamerlan is a good representative, with females about 12.5 cm (5 in.) across and males a bit smaller. After mating, females lay 5-10 eggs on the undersides of leaves of trees in the mallow family (Malvaceae), including the kapok tree (Ceiba pentandra), silk floss tree (Ceiba speciosa), and pachote (Pachira quinata). The eggs hatch about six days later and the caterpillars feed on the tree leaves. Over the course of about a month, the caterpillars go through several dramatic changes, beginning thin and spiky and ending plump and smooth. (To see the various stages, you can click here to go to a website that features Costa Rican butterflies and moths.) Eventually, the caterpillars burrow into the ground, where they pupate and later emerge as adult moths. The adult moths only live 6-9 days and don’t eat at all.
Moth or Butterfly: If you’re not sure whether something is a moth or a butterfly, the easy answer is that they are all moths. If you look at the evolutionary tree of the Lepidoptera, the insect order that includes both moths and butterflies, you will see that all the butterflies come off one small side branch of the tree. If you go to the Lepidoptera page of the website Tree of Life, click on Neolepidoptera, and then Ditrysia, you’ll finally see that the butterflies are all in the Papilionoidea, a single superfamily among the 126 families of the Lepidoptera. There are somewhere around 15,000-20,000 species of butterflies, out of a total of 174,250 catalogued species of Lepidoptera (moths and butterflies). So butterflies are a large group, but they are still a minority in the Lepidoptera. The truth is that the Lepidoptera is mostly about moths, with a colorful and interesting subset of butterflies.
If you are looking at a moth/butterfly type creature and you want to know which it is, two rules will generally give you the answer: butterflies are active during the day while moths are active at night, and butterflies come in every color of the rainbow while moths are generally brown, gray or white. Use those two rules, and you’ll be right most of the time. But there are exceptions. There are moths that are active during the day, and there are beautifully colored moths. There are also drab-colored butterflies that are only active at night. The accurate way for us non-lepidopterists to make a determination is to look at the antennae. The antennae of butterflies are simple and thin, ending in a thicker bulb or club shape. The antennae of moths can be almost anything else, although most have a somewhat hairy or fuzzy appearance.
Moths, including the ever-popular butterfly subgroup, are an evolutionary success story. Nearly one out of every five animal species that has been classified and described is a moth or butterfly. This percentage is almost certainly skewed upwards by the fact that these creatures are relatively easy to catch and catalogue, but still, there are a lot of moths out there in the world.
Thanks to Ryan St. Laurent, at BugGuide.net for the ID on this moth (although, the BugGuide people warned me that they really only do insects of North America). Thank you to Dr. Richard S. Peigler at the University of the Incarnate Word in San Antonio, TX for his help with information on Titaea tamerlan. And thanks to Dan Janzen, biologist at the University of Pennsylvania, for his help and the great pictures of Titaea tamerlan larvae on his site Caterpillars, pupae, butterflies & moths of the ACG.
Mora, C., Tittensor, D. P., Adl, S., et al. (2011). How many species are there on Earth and in the ocean? PLoS Biology, 9(8), e1001127.
The main differences between a black bear and a grizzly bear are in the size of the shoulders, the shape of the face and the length of the claws. Grizzly bears have a distinct shoulder hump that black bears don't have. Grizzly bears also have a concave-shaped facial structure, smaller ears and larger claws, while black bears have a flatter face, larger ears and smaller claws.
Both black bears and grizzly bears range in color and can be found in varying shades between black and blond. Grizzlies are generally much larger than black bears, but size is not a reliable indicator. Some grizzly bears can weigh as little as 250 pounds, and some black bears can weigh up to 800 pounds.
The Palmisciano Line Method can also be used to distinguish black bear tracks from grizzly bear tracks. This method works by drawing a line from the lowest point of the big toe across the highest point of the edge of the palm pad, extending the line to the end of the track. If more than 50 percent of the small toe is above the line, the tracks belong to a grizzly bear; if more than 50 percent is below it, the tracks are from a black bear.
Although slave masters of the 18th century boasted of how well treated and content their slaves were, life for the enslaved African living in the North was harsh, tedious and unrewarding. The hope of attaining freedom inspired hundreds of slaves to risk the perils of running away and live a fugitive life. Living on the run was dangerous in itself, but if caught, a fugitive slave could expect punishments ranging from flogging to amputation of limbs or death. In May 1775, an Act was passed to prevent slaves from running away to Canada. If convicted of trying to escape to Canada, the penalty was that "he, she shall suffer the Pains of Death." The owner of said slave would be compensated for the financial loss, "not to exceed the sum of five pounds".
Slave masters in some rural areas, Marlborough specifically, used unusually harsh methods to deter their slaves from running away. According to Charles H. Cochrane's The History of the Town of Marlborough, Ulster County, New York: From the First Settlement in 1712 by Captain Wm. Bond to 1887:
"Those that could afford it kept slaves, and each owner put a mark upon his black servants, and registered the same with the town clerk, in order that runaways might be more easily traced. For instance the mark of Mathew Wygant was 'a square notch of ha'penny on the upper side of the left ear'. This was previously Abraham Deyo's mark, but in purchasing Deyo's slave or slaves, Wygant evidently adopted it to avoid remarking the poor blacks." (p.85)
Runaway slave notices are some of the earliest clues we have as to the interior lives of slaves living in New York during the 18th and early 19th century. These notices often revealed the ethnicity, work culture, languages spoken and appearances of enslaved Africans. Often a slave's religious beliefs as well as personal habits were noted.
The greatest amount of information found in runaway notices concerns the outerwear worn by slaves. The clothing described in these notices reflected the deprived existences they led. Slave masters recalled styles of clothing, including the color and material the clothing was made of, hairstyles and types of headwear, in great detail. This was crucial information, since most fugitive slaves ran away with only one set of clothes.
Some of the comments made by masters in these notices reflect the mistreatment of their slaves. In this notice Alexander Colden, grandson of Cadawaller Colden, Governor of NY, mentions that his slave Peter "fled precipitately from his work in fear of a deserved correction." One can only surmise what the term "correction" might entail.
Many enslaved peoples ran away together, as was the case of Jerrey and Bohenah of Westchester, New York in 1759, bringing along with them a young mother and her three children. No matter their age, sex, or handicap, it is clear the drive to flee the brutality they endured as enslaved peoples was well-founded, and they exhibited great strength on the tough journey ahead.
Through run-away slave notices, we are able to form individual and group portraits of 18th and early 19th century African American slaves. We find restless, tired young men and women fed up with the psychological and physical maltreatment they were forced to endure as enslaved people.
A basic understanding of evolution lets us know that we are all descended, with modifications, from a common ancestor. If we trace our lineage back far enough we will find our kinship with fish over 400 million years ago (mya). Moving forward in time from our formerly fishy selves, we find amphibian relatives (~350mya) and reptilian relatives (~300mya). The animals representing the base of the mammalian lineage start to show up ~200mya. While it is certainly interesting to consider our distant relationship to these other megagroups of organisms, what I find particularly captivating are the things in between.
It was in an intro geology course that I first saw a picture of a therapsid. These animals look like a cross between reptiles and mammals. As a biology major I understood that we, as mammals, are descended from a group of reptiles. However, it had somehow never quite struck me that there must at some time have been an abundance of animals between these two groups, organisms that wouldn’t quite meet the qualifications of a modern mammal or a modern reptile, but would have some of the characteristics of each. Looking at an artist’s rendition of a therapsid, it was fascination at first sight, and I have been studying organisms along this large-scale transition ever since.
What is it that really differentiates mammals from reptiles? Some of the classic mammal-defining characteristics are mammary glands (from which we derive the name of the group), a four-chambered heart (note: these are also found in birds, though people rarely confuse birds and mammals), and hair. These appear fairly straightforward and are easy to identify in living organisms. However, if we go to the geologic record, we quickly encounter a problem—none of these characteristics fossilize well (if at all). The predominant fossilized structures for vertebrates are bones and teeth. Surely there must be many skeletal differences between reptiles and mammals, right? As it turns out, there are not nearly as many as one might think.
In fact, the best indicator to differentiate between these groups is to look at the three smallest bones in your body, found in the middle ear. This may sound a bit crazy, so brace yourself—the tiny bones with which we hear are actually used by reptiles to chew their food. Yep, that’s right. We hear with our (ancient) jaw. A reptile’s jaw consists of multiple bones and there is but a single bone in the middle ear, the stapes. In contrast, the mammalian mandible is composed of a single bone, but a series of three bones comprise the middle ear: stapes, incus, and malleus.
Two of the bones in a reptile’s jaw (the articular and quadrate) are homologous with the malleus and incus in the mammalian middle ear. Homologous structures have the same origin and were inherited by a common ancestor, but have been adapted differently in different lineages. One might reasonably inquire, how can we possibly know that these are the same bones? Amazingly, this was first discovered in 1837 by a German anatomist, Karl Reichert. Reichert came to this conclusion by dissecting pig embryos and finding that at early stages of development, pigs had what seemed like extra bones in their jaws and skulls. By tracking their development, he found that by the time of birth, these elements moved to the middle ear.
Not surprisingly, Reichert’s announcement was met with significant skepticism. One of his biggest obstacles to acceptance was the lack of fossil verification. However, since that time, there have been a plethora of fossil discoveries corroborating his theory. The evolution of the mammalian middle ear is now one of the most well documented transitions in the fossil record, and represents an excellent example of exaptation—a repurposing of previously existing elements for an entirely new function.
While we now possess a wonderful record of transitional fossils, a significant difficulty still remains. Each fossil only provides a single snapshot in time, one still-frame view of where the bones stood in their evolutionary pathway. They may document and indicate the order of these changes, but how does such an amazing transition actually occur? What developmental processes must take place to fundamentally change the position and function of these elements from chewing to hearing? The only way to truly understand how this occurs would be to study the changes in living organisms. Placental mammals, like the pigs used by Reichert, are limited in what they can tell us because the changes occur so early in embryological development and only exhibit a partial reconstruction of the transition. But are there any mammals whose development traces the full jaw-to-ear transition? Turns out there are—marsupials.
I’ll tell you all about how modern marsupials are shedding light on ancient evolutionary events in Part 2.
About the author: Daniel Urban is a PhD student in the Department of Animal Biology at the University of Illinois at Urbana-Champaign. He studies development and embryology of modern mammals to better understand the changes observed in the fossil record. Stephanie Keep met Dan at ComSciCon and can tell you he’s an all around great guy.
by TCRN Staff
Scientists from the Universidad Nacional (UNA) discovered six new species of ferns on Isla del Coco, a finding that adds to the list of terrestrial species in this national park, located in the Pacific Ocean.
According to Alexander Rojas, the project was accomplished with support from the Ministry of Science and Technology, Ministry of Environment, Energy and Communications and National University of Costa Rica (UNA).
Ferns are plants that have no flowers or seeds and reproduce by spores (visible as dark grains on the leaves). They proliferate in moist places, growing along the ground or hanging from some trees. They serve as food for insects and forage for some animals, and they are vital for preventing soil erosion.
“In Cocos Island ferns constitute 25% of the vegetation and at least 30% of them are endemic species, hence it is so important for academic study,” said Alexander Rojas Alvarado, a biologist at the Universidad Nacional (UNA).
The scientists also detected three new records of ferns for this national park (two native species and one invasive), and noticed the absence of two of the island's own fern species, which, they suspect, may have disappeared due to the proliferation of wild pigs digging in the ground.
I have found it extremely useful to assist children to think of themselves as independent people. Quite obviously, pre-adolescent children are not independent in many respects, but creating an environment where they think of themselves as independent opens doors to many positive outcomes in the classroom. An independent young person is responsible for personal thoughts and actions, assignment planners, and homework. A vital and shared component of all these, and other, activities is the ability and responsibility to make decisions. Establishing oneself as a decision maker sets the stage for life as an independent person.
A very useful activity to undertake on the first day of school is to provide students with a blank sheet of paper approximately 18 inches long by 12 inches wide. On the board in front of the class write the word BORN with a period under it. Proceed to explain to the children that you want them to make a DECISION TREE outlining all the important decisions they have made in their lives to date. A decision tree will have branches for paths both taken and not taken. Paths not taken will end there, and paths taken lead to a continuation down the decision tree of life to the present day. It is imperative not to give too many ideas to the students at this point as to what specifically constitutes an important decision. For the purposes of this exercise, students should think carefully through what decisions they believe they have made over the course of their lives to date. Leaving it very open-ended without much guidance tends to produce varied and interesting results. Such results might include what school they ‘chose’ to go to from amongst alternatives they have heard of, what instruments or after-school activities they chose to do, or possibly where they have lived if they have moved apartments or cities. Whether they themselves actually made these decisions is immaterial. What is important is for them to think about making decisions, and to have some defined course they followed over their young lives. They should see that a large part of our human experience is essentially selecting from amongst a series of options. Once you establish yourself as a decision maker, then you are an independent person with regard to thought and responsibility for actions.
At times, one could give a few hints to those individuals who are struggling with the project in order to get them into a productive mode of thinking through this exercise. Once all students have finished, have them share their thoughts and ideas with each other. Studying the Robert Frost poem "The Road Not Taken" at this juncture would be a very useful exercise.
After completing and discussing the poem, ask the students to turn over the sheet of paper upon which they have outlined their major life decisions to date. Again go to the classroom board and write the word FUTURE on it with a period under it. Explain again the concept of a DECISION TREE to the students. This time, however, ask them to put together a set of possible scenarios of major decisions they think they will make for the remainder of their lives. Leave this discussion as vague as possible in order to foster creativity. Interesting outcomes here have been college, marriage, career(s), vehicles, sports, number of children, retirement, travel and in some instances even death. Here again, we are establishing the concept of independence through the ability to make decisions.
- 1 General history of British nobility
- 2 Nobility: Peers and non-peers
- 3 Gentry styles and titles
- 4 Irish and Gaelic nobility
- 5 Gallery
- 6 See also
- 7 References
General history of British nobility
The nobility of the four constituent home nations of the United Kingdom has played a major role in shaping the history of the country, although in the present day even hereditary peers have no special rights, privileges or responsibilities, except for residual rights to stand for election to the House of Lords, dining rights in the House of Lords, position in the formal order of precedence, and the right to certain titles (see below).
In everyday speech, the British nobility consists of peers and their families; however, in a stricter legal sense it includes both the titled and the untitled nobility. Members of the peerage carry the titles of Duke, Marquess, Earl, Viscount and Baron. Peers ranked from Baron up to Marquess are frequently referred to generically as Lords. However, the Scottish Baron, an official title of nobility in the United Kingdom, is addressed as The Baron of X. The untitled nobility consists of all those who bear formally matriculated armorial bearings. Other than their designation, such as Gentleman or Esquire, they enjoy no privilege other than a position in the formal orders of precedence in the United Kingdom. The largest portion of the British aristocracy has historically been the landed gentry, made up of baronets and the non-titled armigerous landowners (whose families hailed from the mediaeval feudal class and are referred to as gentlemen because historically they did not need to work, deriving their income from land ownership).
Scottish lairds' names include a description of their lands in the form of a territorial designation. In Scotland, a territorial designation implies the rank of "Esquire", thus this is not normally added after the name; Lairds are part of Scotland's landed gentry and - where armigerous - minor nobility.
The Peerage is a term used both collectively to refer to the entire body of peerage titles, and individually to refer to a specific title. All modern British honours, including peerage dignities, are created directly by the British monarch, taking effect when letters patent are affixed with the Great Seal of the Realm. The Sovereign is considered the fount of honour, and as "the fountain and source of all dignities cannot hold a dignity from himself", cannot hold a peerage.
Before the twentieth century, peerages were generally hereditary and (with a few exceptions), descended in the male line. The eldest son of a Duke, Marquess or Earl almost always uses one of his father's subsidiary titles as a courtesy title. For example, the elder son of the Earl of Snowdon is called Viscount Linley.
The modern peerage system is a vestige of the custom of English kings in the 12th and 13th centuries in summoning wealthy individuals (along with church officials and elected representatives for commoners) to form a Parliament. The economic system at the time was manorialism (or feudalism), and the burden or privilege of being summoned to Parliament was related to the amount of land one controlled (a "barony"). In the late 14th century, this right (or "title") began to be granted by decree, and titles also became inherited with the rest of an estate under the system of primogeniture. Non-hereditary positions began to be created again in 1867 for Law Lords, and 1958 generally.
In 1958 the Life Peerages Act introduced (non-hereditary) life peers able to sit in the House of Lords, and from then on the creation of hereditary peerages rapidly became obsolete, almost ceasing after 1964. This, however, is only a convention and was not observed by former prime minister Margaret Thatcher who asked the Queen to create three hereditary peerages (two of them, however, to men who had no heirs). Until changes in the twentieth century, only a proportion of those holding Scottish and Irish peerages were entitled by that title to sit in the House of Lords; these were nominated by their peers.
Until 1999 possession of a title in the peerage (except Irish) entitled its holder to a seat in the House of Lords, once of age. Since 1999 only 92 hereditary peers are entitled to sit in the House of Lords, of which 90 are elected by the hereditary peers by ballot and replaced on death. The holder of the position of Earl Marshal, which is a royal-office position responsible for ceremony and certain great state occasions, automatically sits in the House. The current holder is the Duke of Norfolk. The holder of the position of Lord Great Chamberlain also sits automatically in the House. The position is held in gross and one of a number of persons will hold it. The Lord Great Chamberlain is Her Majesty's representative in Parliament and accompanies Her Majesty on certain state occasions.
A member of the House of Lords cannot be a member of the House of Commons. In 1960, Anthony (Tony) Wedgwood Benn, MP, inherited his father's title as Viscount Stansgate. He fought and won the ensuing by-election, but was disqualified from taking his seat until an act was passed enabling hereditary peers to renounce their titles. Titles, while often considered central to the upper class, are not always strictly so. Neither Captain Mark Phillips nor Vice Admiral Timothy Laurence, the respective first and second husbands of HRH The Princess Anne, holds a peerage. Most members of the British upper class are untitled.
Nobility: Peers and non-peers
- Dukes in Britain
- List of dukes in the peerages of the British Isles
- List of dukedoms in the peerages of the British Isles
- List of Marquesses in the peerages of the British Isles
- List of Marquessates in the peerages of the British Isles
- List of Earls in the peerages of the British Isles
- List of Earldoms in the peerages of the British Isles
- List of Viscounts in the peerages of the British Isles
- List of Viscountcies in the peerages of the British Isles
Barons / Lords of Parliament of Scotland
Gentry styles and titles
Baronets (styled as Sir)
Knights (styled as Sir)
- Knight, from Old English cniht ("boy" or "servant"), a cognate of the German word Knecht ("labourer" or "servant").
- British honours system
Untitled members of the Gentry
- Esquire (ultimately from Latin scutarius, in the sense of shield bearer, via Old French esquier)
Irish and Gaelic nobility
Outside the United Kingdom, the remaining Gaelic nobility of Ireland continue informally to use their archaic provincial titles. As Ireland was nominally under the overlordship of the English Crown between the 12th and 16th centuries, the Gaelic system coexisted with the British system. A modern survivor of this coexistence is the Baron Inchiquin, still referred to in Ireland as the Prince of Thomond. The Prince of Thomond is one of three remaining claimants to the so-called High Kingship of Ireland (non-existent since the 12th century), the others being The O'Neill and the O'Conor Don.
Chief of the Name was a clan designation which was effectively terminated in 1601 with the collapse of the Gaelic order and which, through the policy of surrender and regrant, eliminated the role of a chief in a clan or sept structure. Contemporary individuals today designated or claiming a title of an Irish chief treat their title as hereditary, whereas chiefs in the Gaelic order were nominated and elected by a vote of their kinsmen. Modern "chiefs" of tribal septs descend from provincial and regional kings with pedigrees beginning in Late Antiquity, whereas Scottish chiefly lines arose well after the formation of the Kingdom of Scotland (with the exception of the Clann Somhairle, that is, Clan Donald and Clan MacDougall, both of royal origin). The related Irish Mór ("Great") is sometimes used by the dominant branches of the larger Irish dynasties to declare their status as the leading princes of the blood, e.g. Ó Néill Mór, lit. (The) Great O'Neill.
Following the Norman invasion of Ireland, several Hiberno-Norman families adopted Gaelic customs, the most prominent being the De Burgh dynasty and the FitzGerald dynasty; their use of Gaelic customs did not extend to their titles of nobility, as they continued to use titles granted under the authority of the English monarchy.
- Peerage, an exposition of great detail
- Peerage of England
- Peerage of Scotland
- Welsh peers and baronets
- Peerage of Ireland
- Landed gentry
- Forms of address in the United Kingdom
- British honours system
- "Debrett's Forms of Address (Lairds)". Retrieved 2010-07-18.
- Adam, F. & Innes of Learney, T. (1952). The Clans, Septs, and Regiments of the Scottish Highlands (4th ed.). Edinburgh & London: W. & A.K. Johnston Limited. p. 410.
- Opinion of the House of Lords in the Buckhurst Peerage Case
- Ruling of the Court of the Lord Lyon (26/2/1948, Vol. IV, page 26): "With regard to the words 'untitled nobility' employed in certain recent birthbrieves in relation to the (Minor) Baronage of Scotland, Finds and Declares that the (Minor) Barons of Scotland are, and have been both in this nobiliary Court and in the Court of Session recognised as a 'titled nobility' and that the estait of the Baronage (i.e. Barones Minores) are of the ancient Feudal Nobility of Scotland".
- "Knight". Online Etymology Dictionary. Retrieved 2009-04-07.
- "Knecht". LEO German-English dictionary. Retrieved 2009-04-07.
- The device is coupled to a host computer that displays a cursor in a graphical environment, such as a GUI, on a display screen.
- This allows the user to move the cursor to the edge of the screen and as a result, the camera will move in the same direction.
- Make the input cursor hop to the next field after a user finishes the current field.
- The two-cycle log scale was necessary because the rules had no cursor.
- Like other Routledge-type slide rules produced by Stanley, the Hogg rule had no cursor.
- The first step was to position a cursor at the entry of the pipette.
Nowadays we call the movable indicator on our computer screen the cursor. In medieval English a cursor was a running messenger: it is a borrowing of the Latin word for ‘a runner’, and comes from currere ‘to run’. From the late 16th century cursor became the term for a sliding part of a slide rule or other instrument, marked with a line for pinpointing the position on a scale that you want, the forerunner of the computing sense. Currere is the source of very many English words including course (Middle English) something you run along; concourse (Late Middle English) originally a crowd who had ‘run together’; current (Middle English) originally meaning ‘running, flowing’; discursive (late 16th century) running away from the point; excursion (late 16th century) running out to see things; intercourse (Late Middle English) originally an exchange running between people; and precursor (Late Middle English) one who goes before; as well as supplying the cur part of concur (Late Middle English); incur (Late Middle English); occur (Late Middle English) (from ob- ‘against’); and recur (Middle English).
Words that rhyme with cursor: bursar, converser, curser, disburser, mercer, purser, rehearser, reverser, vice versa
For editors and proofreaders
Line breaks: cur¦sor
Three methods of calculating GDP:
Gross Domestic Product (GDP) measures the total value of all goods and services produced within an economy. It is used as a macroeconomic measure of the total income of a country.
There are three different methods (expenditure, income and production) which can be used to measure the GDP of a country. In theory, all three methods should sum to the same amount.
1. Expenditure method
The expenditure approach adds up all the various types of spending which occur within an economy. There are four different types.
Consumption (C)
Consumption is all the spending that households do on goods and services. For example, the apples a household purchases, the money spent on healthcare, the money spent purchasing new cars and the money spent on pizza are all examples of consumption spending.
Investment (I)
Investment is the spending that firms do on machinery and equipment to operate their businesses. Examples of investment spending would be a mining company purchasing a truck to transport coal, IT companies purchasing new computers and the purchase of a new plane by an airline company.
Government Spending (G)
Government spending is the spending that the government conducts within an economy. Examples of government spending include spending on defense; spending on health care; building of roads and education spending.
Net Exports (NX)
Net exports is defined as the purchases of domestically produced goods by foreigners (exports) minus the purchases of internationally produced goods by local residents (imports). In essence, it is the value of what is sent overseas minus the value of what comes here.
If an airline company operating in the USA purchases a new plane from France, this would be considered an import for the USA and an export for France. This would cause net exports to decrease for the USA whilst causing net exports to increase for France.
An interesting case is where a foreign student from China comes and studies at a school in the USA. This is considered an export from the USA to China, since the USA is producing a service (education) which is essentially being "sent" to a Chinese student who is from the Chinese economy. Thus, China is importing education from the USA.
Therefore, if we add up these four components we get:
GDP = C + I + G + NX
This is also called the demand approach to calculating GDP since all these components are demands for goods and services. It is looking at the demand side of the economy.
For example, using the input-output tables for Australia, you can calculate the GDP for Australia in the year 2018 by summing these four components, where GDP is measured in millions of dollars.
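As a rough illustration, the sketch below computes GDP with the expenditure approach in Python. The figures are invented placeholders, not real national-accounts data:

```python
# Expenditure approach: GDP = C + I + G + NX.
# All figures are invented placeholders, in millions of dollars.
consumption = 1_000_000   # C: household spending on goods and services
investment = 400_000      # I: firms' spending on machinery and equipment
government = 350_000      # G: government spending
exports = 470_000         # sales of domestically produced goods abroad
imports = 420_000         # purchases of foreign-produced goods by residents

net_exports = exports - imports                        # NX
gdp = consumption + investment + government + net_exports
print(f"GDP (expenditure approach): ${gdp:,}m")
```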
2. Income method
The income approach adds together all factor payments to calculate GDP. Factor payments are all the payments made to the inputs used to produce output. Typically, the main factor payments are profits, returns to labour and returns to capital. The formula for the income approach is as follows:
GDP = π + wl + rk
where:
π = the profits that firms make
wl = the wage rate times total labour provided (the returns to labour)
rk = the rental rate of capital times the amount of capital provided
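As a minimal sketch (again with invented placeholder figures), the income approach can be computed the same way:

```python
# Income approach: GDP = π + wl + rk.
# All figures are invented placeholders, in millions of dollars.
profits = 500_000         # π: profits that firms make
labour_income = 900_000   # wl: wage rate times total labour provided
capital_income = 350_000  # rk: rental rate times capital provided

gdp = profits + labour_income + capital_income
print(f"GDP (income approach): ${gdp:,}m")
```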
3. Production method
The production (or value-added) method calculates the total value of all goods produced in the economy minus the value of intermediate goods.
Consider an economy which produces steel and cars. Suppose the economy produces 100 units of steel, which it sells for $1 per unit, and 10 cars, each using 5 units of steel, which it sells for $100 per car.
As the production of steel requires no intermediate inputs, the value added from the production of steel is $100.
The production of cars produces $1000 worth of cars using $50 of steel. Therefore, the value added is $950.
The total value added/GDP of the economy is thus $1050. Alternatively, we could have added the total amount spent on cars ($1000) and on steel ($100), giving $1100, and then subtracted the $50 of intermediate inputs to again get $1050.
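The sketch below reproduces this steel-and-cars arithmetic in Python; the sector tuples simply restate the numbers given above:

```python
# Production (value-added) approach:
# value added = value of output - value of intermediate inputs.
sectors = {
    # sector: (units produced, price per unit, cost of intermediates)
    "steel": (100, 1, 0),   # steel uses no intermediate inputs
    "cars": (10, 100, 50),  # 10 cars use 50 units of steel at $1 each
}

gdp = 0
for name, (quantity, price, intermediates) in sectors.items():
    value_added = quantity * price - intermediates
    print(f"{name}: value added = ${value_added}")
    gdp += value_added

print(f"GDP (production approach): ${gdp}")  # $1050, matching the text
```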
How are all measures equivalent?
Consider the following example to illustrate how these methods all arrive at the same value. Suppose that the economy has one firm producing one type of good. The firm's profit function would look like:
π = P*Q - wl - rk
where P*Q is the price times the quantity of output. Essentially, profit equals the revenue earned from selling output minus what the firm must pay labour and capital. We can re-arrange this equation as such:
P*Q = π + wl + rk
As we can see, the left-hand side just equals the value of all goods produced in the economy. This is the value we would arrive at if we used the production approach. The right-hand side equals all the income payments. In essence, all the revenue earned from producing goods must be distributed either as profit or to the factors that produced it. And since all of this income is in turn spent on consumption, investment, government purchases or net exports, it is easy to see that:
π + wl + rk = C + I + G + NX
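A quick numerical check of this identity for the single-firm economy (all numbers invented for illustration):

```python
# Verify that the production and income totals coincide for one firm.
price, quantity = 10, 100            # P and Q
wl = 600                             # payments to labour
rk = 250                             # payments to capital
profit = price * quantity - wl - rk  # π = P*Q - wl - rk

production_gdp = price * quantity    # value of output (production approach)
income_gdp = profit + wl + rk        # factor payments (income approach)

assert production_gdp == income_gdp == 1000
print(profit)  # 150: revenue not paid to labour or capital is profit
```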
Further reading & references
This post has outlined the three different methods in which GDP can be calculated in a very simple manner. For a better understanding on how GDP is calculated or for a reference, please consult the UN website here.
The following is a useful textbook which outlines how to calculate GDP using each method and has problems:
Tempini Macdonald, N. (1999). Macroeconomics and business. London: International Thomson Business Press.