Posted: Nov 06, 2014
Hubble surveys debris-strewn exoplanetary construction yards
(Nanowerk News) Astronomers using NASA's Hubble Space Telescope have completed the largest and most sensitive visible-light imaging survey of dusty debris disks around other stars. These dusty disks, likely created by collisions between leftover objects from planet formation, were imaged around stars as young as 10 million years old and as mature as more than 1 billion years old.
"It's like looking back in time to see the kinds of destructive events that once routinely happened in our solar system after the planets formed," said survey leader Glenn Schneider of the University of Arizona's Steward Observatory. The survey's results appeared in the Oct. 1, 2014, issue of The Astronomical Journal.
This is a set of images from a NASA Hubble Space Telescope survey of the architecture of debris systems around young stars. Ten previously discovered circumstellar debris systems, plus MP Mus (a mature protoplanetary disk of age comparable to the youngest of the debris disks), were studied. Hubble's sharp view uncovers an unexpected diversity and complexity in the structures. The disk-like structures are vast, many times larger than the planetary distribution in our solar system. Some disks are tilted edge-on to our view, others nearly face-on. Asymmetries and warping in the disks might be caused by the host star's passage through interstellar space. Alternatively, the disks may be affected by the action of unseen planets. In particular, the asymmetry in HD 181327 looks like a spray of material that is very distant from its host star. It might be the aftermath of a collision between two small bodies, suggesting that the unseen planetary system may be chaotic. The stars surveyed may be as young as 10 million years old and as mature as more than 1 billion years old. The visible-light survey was done with the Space Telescope Imaging Spectrograph (STIS). The STIS coronagraph blocks out the light from the host star so that the very faint reflected light from the dust structures can be seen. The images have been artificially colored to enhance detail.
Once thought to be simple pancake-like structures, these debris systems show an unexpected diversity, complexity, and varying distribution of dust, strongly suggesting that they are gravitationally affected by unseen planets orbiting the star. Alternatively, these effects could result from the stars' passing through interstellar space.
The researchers discovered that no two "disks" of material surrounding stars look the same. "We find that the systems are not simply flat with uniform surfaces," Schneider said. "These are actually pretty complicated three-dimensional debris systems, often with embedded smaller structures. Some of the substructures could be signposts of unseen planets." The astronomers used Hubble's Space Telescope Imaging Spectrograph to study 10 previously discovered circumstellar debris systems, plus, for comparison, MP Mus, a mature protoplanetary disk whose age is comparable to that of the youngest debris disks.
Irregularities observed in one ring-like system in particular, around a star called HD 181327, resemble the ejection of a huge spray of debris into the outer part of the system from the recent collision of two bodies.
"This spray of material is fairly distant from its host star — roughly twice the distance that Pluto is from the Sun," said co-investigator Christopher Stark of NASA’s Goddard Space Flight Center, Greenbelt, Maryland. "Catastrophically destroying an object that massive at such a large distance is difficult to explain, and it should be very rare. If we are in fact seeing the recent aftermath of a massive collision, the unseen planetary system may be quite chaotic."
Another interpretation for the irregularities is that the disk has been mysteriously warped by the star's passage through interstellar space, directly interacting with unseen interstellar material. "Either way, the answer is exciting," Schneider said. "Our team is currently analyzing follow-up observations that will help reveal the true cause of the irregularity."
Over the past few years astronomers have found an incredible diversity in the architecture of exoplanetary systems — planets are arranged in orbits that are markedly different than found in our solar system. "We are now seeing a similar diversity in the architecture of accompanying debris systems,” Schneider said. "How are the planets affecting the disks, and how are the disks affecting the planets? There is some sort of interdependence between a planet and the accompanying debris that might affect the evolution of these exoplanetary debris systems."
From this small sample, the most important message to take away is one of diversity, Schneider said. He added that astronomers really need to understand the internal and external influences on these systems, such as stellar winds and interactions with clouds of interstellar material, and how they are influenced by the mass and age of the parent star, and the abundance of heavier elements needed to build planets.
Though astronomers have found nearly 4,000 exoplanet candidates since 1995, mostly by indirect detection methods, only about two dozen light-scattering, circumstellar debris systems have been imaged over that same time period. That's because the disks are typically 100,000 times fainter than, and often very close to, their bright parent stars. The majority have been seen because of Hubble's ability to perform high-contrast imaging, in which the overwhelming light from the star is blocked to reveal the faint disk that surrounds the star.
The new imaging survey also yields insight into how our solar system formed and evolved 4.6 billion years ago. In particular, the suspected planet collision seen in the disk around HD 181327 may be similar to how the Earth-Moon system formed, as well as the Pluto-Charon system, over 4 billion years ago. In those cases, collisions between planet-sized bodies cast debris that then coalesced into a companion moon.
When considering the 11+, or any other exam, it is no surprise that having a wide vocabulary is hugely beneficial. In fact, it is not just an advantage for English exams: think of written problems in maths, science questions, or responding to interview questions.
All of these assessment tasks will be better tackled with the armoury of a broad and sophisticated vocabulary. But what is the best way to build this tool in young learners? Vicky Duggan, Head of English and Head of Marketing at The Granville School, shares her top tips.
It won’t shock you that the first thing you can do is read. Read anything and everything you can get your hands on. This doesn’t mean asking your child to sit and read alone for hours each evening. Read together. Read the newspaper. Read the information panels at museums. Read travel guides for your next holiday. Read poems. Act out playscripts as a family. Visit the library or your local independent bookshop and browse for new books that your child might not usually choose.
By always reading the same genre, a child’s vocabulary will naturally be limited. Branch out into science fiction, historical novels, non-fiction, biography.
The next thing you can do is to model advanced vocabulary with your child. Don’t speak down to them. Talk to them as an adult and when they question what something means, explain it to them. Don’t shy away from discussing big issues that require your child to articulate themselves. Sit around the dinner table together and talk about what is happening around the world.
If your child finds it difficult to find the right word to describe something, help them. Offer alternatives and explain the nuance of their meaning. And when they use a simple word that’s not quite got the correct nuance, talk to them about how else they could phrase it. Feelings are a good example: sadness is different to disappointment, regret, pity.
Finally, and perhaps most importantly, make words fun. Play word games and embrace the discussions about whether words exist for Scrabble, how you spell things for Boggle, what synonyms you can use for Taboo.
Encourage investigating words and language. Ask them to test you on your vocabulary using a dictionary. They will learn something by finding the words for you, and you might learn something too. Show interest in the new words you find together and show your child that nobody knows every word – we are all still learning.
If your child is a visual learner, they may find drawing words fun and effective. How can you draw ‘discombobulated’? What does ‘captivating’ look like? Words and pictures together can be very powerful.
It is important to note here that learning lists of words and their definitions is not an effective way to build vocabulary. Words need context, they need sentences and stories around them to make them useful to us. We need to hear and use the words repeatedly for them to embed. Rote learning is not the answer here.
We can’t expect children to use (or understand) more advanced vocabulary if we don’t teach it to them in the first place. They can’t conjure it from nowhere. They need us to set an example, to offer them new language to question, use and absorb. Sitting them alone in a room with a book isn’t enough. Yes, reading will help but not if children aren’t motivated to ask what new words mean, to try and work it out from the context or to look them up in the dictionary. We need a culture of interest around language and then our children will become word lovers for the rest of their lives.
Vicky Duggan is Head of English and Head of Marketing at The Granville School. She is also the author of Galore Park's KS2 English textbooks, their 11+ English Revision Guide, 11+ English Practice Papers (including papers for the CEM and GL exams), and their 13+ Writing and Vocabulary workbooks. Learn more about Galore Park's English resources (including their skills-based workbooks) here.
Did you know that the United States has a higher rate of infant mortality than Japan (CIA, n.d.)? Or, as Dr. Beilenson states in this week’s media presentation, that “your zip code that you live in makes more difference in your health and well-being than the genetic code that you’re born with?” What causes these differences in health outcomes?
To effectively develop policies and programs to improve population health, it is useful to use a framework to guide the process. Different organizations and governmental agencies (for example, Healthy People 2020) have created a variety of such frameworks, which establish measures for assessing population health. These measures frequently are derived from the examination of epidemiologic data, which include key measures of population health such as mortality, morbidity, life expectancy, etc. Within each measure are a variety of progress indicators that use epidemiologic data to assess improvement or change.
For this Discussion, you will apply a framework developed by Kindig, Asada, and Booske (2008) to a population health issue of interest to you. This framework includes five key health determinants that should be considered when developing policies and programs to improve population health: access to health care, individual behavior, social environment, physical environment, and genetics.
Review the article “A Population Health Framework for Setting National and State Health Goals,” focusing on population health determinants.
Review the information in the blog post “What Is Population Health?”
With this information in mind, select a population health issue that is of interest to you.
Using this week’s Learning Resources, the Walden Library, and other relevant resources, conduct a search to locate current data on your population health issue.
Consider how epidemiologic data has been used to design population health measures and policy initiatives in addressing this issue.
Post a summary of how the five population health determinants (access to health care, individual behavior, social environment, physical environment, and genetics) affect your selected health issue, and which determinants you think are most impactful for that particular issue and why. Explain how epidemiologic data supports the significance of your issue, and explain how this data has been used in designing population health measures and policy initiatives.
The period from April to July 2018 was the hottest and driest on record in Germany, with a severe impact on the vegetation. This is especially apparent in grass vegetation, as these plants are not able to store much water and their roots only penetrate the topmost soil layers. In the image from August 2018 (right), you can see multiple grassy areas that have turned brown in comparison with an image from the same time of the previous year (left). This is apparent along the river, in the Rosental Park east of the main station, and in the grassy areas north of Grünau (on the left of the image).
While the appearance of the lawns changes drastically, trees were less immediately impacted, as their roots penetrate deeper layers of the soil. Nevertheless, in the long run these vegetation types will also be damaged by longer and more frequent periods of drought. Since lawns and open spaces in general provide a wide range of benefits to a city's residents, and the direct impact of a heatwave is most severe here, more attention and research needs to be directed towards this urban structure.
The wider picture
Heatwaves have fundamental consequences for the urban environment and its residents. Compared with the year 2017, the 2018 heatwave led to losses of ecosystem services such as the cooling provided by green surfaces, profoundly reducing people's well-being. Increased management and monitoring capacities in cities are therefore needed, as longer and more intense heatwaves will be a major public health hazard in the 21st century. What is needed beyond this are better-adapted plants which endure longer periods of stress, e.g. trampling or drought.
About the Author
Thilo Wellmann holds an M.Sc. degree in Global Change Geography and is pursuing a doctorate in Landscape Ecology at the Humboldt-Universität zu Berlin.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
In astronomy, the vernal equinox (spring equinox, March equinox, or northward equinox) is the equinox at the beginning of spring in the northern hemisphere: the moment when the sun appears to cross the celestial equator, heading northward. The equinox occurs around March 20-22, varying slightly each year according to the 400-year cycle of leap years in the Gregorian calendar. At present, the vernal equinox occurs as the sun moves through the constellation Pisces. Two thousand years ago the equinox was in Aries, and by 2600 it will be in Aquarius.
In the southern hemisphere, the equinox occurs at the same moment, but at the beginning of autumn. There are two conventions for dealing with this: either the name of the equinox can be changed to the autumnal equinox, or (apparently more commonly) the name is unchanged and it is accepted that it is out of sync with the season. The alternative terms March equinox or northward equinox avoid any such ambiguity.
At the equinox, the sun rises directly in the east and sets directly in the west. In the northern hemisphere, before the vernal equinox, the sun rises and sets more and more to the south, and afterwards, it rises and sets more and more to the north.
This is when the Neopagan Sabbat of Ostara (or Eostar) is celebrated. Also, Vernal Equinox Day (春分の日) is an official national holiday in Japan, spent visiting family graves and holding family reunions.
By Michael Draper —Physics Teacher
The starting point for all Montessori education activity is understanding the developmental drives and needs of the child, at that time (because they change as the child grows).
Maria Montessori identified adolescence (approximately 12 to 18 years), with its own distinct set of developmental drives and needs, as the 3rd plane of human development. Having developed basic cultural and physical competence (1st plane), and knowledge and frameworks for understanding their physical and human worlds (2nd plane), the adolescent begins the process of moving from dependence on the family to becoming an independent contributing adult member of society. (Maria Montessori, The Absorbent Mind, p.18).
To operate outside the safety (or in the adolescent perspective, confines) of their family, the adolescent must learn:
- to express themselves, both personally and as a member of society
- the elements of supporting themselves materially
- to participate in and share the benefits of collective work
- to discharge adult responsibilities and manage adult consequences
- to support themselves and others emotionally
- the moral and ethical approaches needed to function successfully in society
The 3rd plane echoes the 1st in that much of the essential learning and development occurs experientially. Where the baby starts as a physical new-born, the adolescent starts as a social new-born, and must experiment with and master patterns of behaviour, attitudes, and communication they will use as a member of wider society. As with any experimentation, there will be errors and failures along the way, but even these help the adolescent develop adult responses to the complexities of life. (MM, The Adolescent – a “Social Newborn” p73-78)
The physical and neurological changes that occur during adolescence present additional challenges. They experience clumsiness adapting to their changing body. Rapid reproductive and biochemical changes alter and intensify their feelings. Neurological changes impact their brain function. These unpredictable lapses in physical, emotional, and intellectual capability occur as they strive to develop their personal confidence. They need acceptance and patient support as they go through these changes.
The adolescent also needs to study and practise the manual and intellectual skills they will need to earn a living, function in modern society and adapt to our changing world. “But they must not be forced to study every minute, for this is a form of torture that causes mental illness. The human personality must be given a chance to realize every one of its capabilities” (MM, Education and Peace, p.110).
Adolescence is the sensitive period for the development outlined above. "When he enters the workaday world, man must be aware first and foremost of his social responsibility… It is therefore necessary to prepare men to be aware of it and to fulfil it." (MM, Education and Peace, p.110). This is the heart of Montessori adolescent education.
A spirometer is a medical device often used to assess respiratory function and diagnose respiratory diseases, including asthma, chronic obstructive pulmonary disease, and asbestosis.1 To use the device, you inhale and exhale as deeply as possible into a breathing tube attached to the spirometer itself, which measures your forced vital capacity (the maximum amount of air you can forcibly exhale after breathing in as deeply as possible) and forced expiratory volume (how much of that air you can breathe out in the first second).1
Figure 1: This shows the basic set-up of a modern spirometer test. The patient wears a nose clip and breathes into a mouthpiece, and a monitor displays a graph of their inhaled and exhaled volume. (Source: Wikipedia)
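These two readings are often combined into the FEV1/FVC ratio, a standard first-pass screen for obstructive lung disease. The sketch below is purely illustrative, not a clinical tool; the example values and the 0.70 cutoff are common simplifications, and real interpretation relies on age- and height-specific reference equations:

```python
def fev1_fvc_ratio(fev1_liters: float, fvc_liters: float) -> float:
    """Return FEV1/FVC: the fraction of the vital capacity exhaled in the first second."""
    return fev1_liters / fvc_liters

# Illustrative example: FVC = 4.0 L, FEV1 = 2.5 L
ratio = fev1_fvc_ratio(2.5, 4.0)
print(f"FEV1/FVC = {ratio:.2f}")  # 0.62

# A ratio below roughly 0.70 is commonly read as an obstructive pattern
# (e.g., asthma or COPD); this fixed threshold is a simplification.
if ratio < 0.70:
    print("Suggests airflow obstruction (illustrative rule only)")
```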
For each patient who is tested using a spirometer, the operator must enter information about the patient, including their age, sex, height, and race.2 Unbeknownst to many operators, selecting a patient's race enables a "race correction" setting programmed directly into the spirometer software – typically a 10-15% lower baseline lung capacity for patients identified as Black, and 4-6% lower for patients identified as Asian.3 Despite conflicting studies contesting the validity of racial correction factors,4 race correction continues to be taught in modern science. The idea that non-whites have intrinsically lower lung capacity began as a justification for slavery, and the ramifications of this notion have continued to manifest in modern-day medical devices.
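To make the effect of such a setting concrete, here is a hedged sketch of how a race-based scaling of the predicted baseline could work in software. The multipliers below are midpoints of the ranges cited above, and every name and number is an assumption; actual spirometer firmware uses proprietary reference equations:

```python
# Illustrative only: midpoints of the correction ranges cited in the text.
RACE_CORRECTION = {
    "black": 0.875,  # ~12.5% lower predicted baseline
    "asian": 0.95,   # ~5% lower predicted baseline
    "white": 1.0,
}

def predicted_capacity(baseline_liters: float, race: str) -> float:
    """Scale a predicted lung capacity by the 'race correction' factor."""
    return baseline_liters * RACE_CORRECTION.get(race.lower(), 1.0)

# The same measured volume is judged against different baselines:
measured = 3.6
for race in ("white", "black"):
    predicted = predicted_capacity(4.0, race)
    print(f"{race}: {measured / predicted:.0%} of predicted")
# white: 90% of predicted
# black: 103% of predicted
```

Note the consequence: an identical measurement that falls below the unadjusted baseline can appear "normal" against the lowered one, which is precisely the diagnostic risk discussed below.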
In Thomas Jefferson’s “Notes on the State of Virginia,” the former president and slaveholder described deficiencies in “the pulmonary apparatus” of Black slaves.5 Plantation physician Samuel Cartwright further elaborated on Jefferson’s sentiments with his own spirometer studies, reporting a 20% “deficiency in the negro” in regards to lung capacity.6 Cartwright promoted slavery on the grounds that forced labor was necessary for Black people’s health due to their innately lower lung capacity. He stated that “it is the red vital blood sent to the brain that liberates their mind when under the white man’s control.” 6
After the Civil War, Benjamin Apthorp Gould further expanded on Cartwright’s work by comparing the lung capacity of Black and White soldiers. Although Gould did not account for height, age, or the living conditions of recently emancipated slaves, Gould’s conclusions mirrored those who came before him: “full blacks” had lower lung capacity than “whites.” 2,5,7 Gould’s study is still cited in scientific articles today.2
Over the course of the 20th century, researchers continued to fuel the idea of innate racial differences in lung function, repeatedly failing to account for the influence of socioeconomic conditions. In a review of articles published between 1922-2008 comparing lung function between races, 94% of articles did not examine race in the context of socioeconomic status.8 Although it is often ignored in research articles, lower lung capacity has been associated with poverty in past studies, as well as other social determinants including environmental toxin exposure and healthcare inaccessibility.2,5
In 1984, J.E. Myers published an article that questioned the body of data supporting innate racial differences in lung function. Myers conducted his own spirometer studies of Black workers in South Africa, and his calculations showed that the published South African standards considerably underestimated the lung volume of Black people.4 Myers also challenged the assumptions made in previous studies, pointing out that they neglected to account for socioeconomic factors including environmental pollutants, housing quality and nutrition quality.4
Several years after Myers’ article, in 1999, asbestos manufacturer Owens Corning used the argument that Black people have an intrinsically lower lung capacity to evade lawsuits from Black workers with lung damage. The company tried to argue that Black workers should be held to a different standard when assessing asbestos-induced lung damage because Black people consistently score lower on pulmonary function tests.9 The motion was overruled, but the case highlighted how historic assumptions on race have infiltrated modern lung research. As in the case of Owens Corning, modified lung function standards based on race have the potential to reduce diagnosis rates for respiratory illnesses and lung damage.
Current spirometers implement "race correction" automatically, defining race as a purely genetic difference, rather than exploring the environmental and socioeconomic factors that have been shown to influence lung function. Lundy Braun, a Brown University professor of Africana studies and medical science, addresses these issues in her article "Race, ethnicity and lung function: A brief history," where she provides insights on how to address lung function research in the future.
“Research and clinical practice needs to devote more careful attention to the social nature of racial and ethnic categories and draw on more complex explanatory frameworks that incorporate disproportionate exposures to toxic environments, differential access to high-quality care and the daily insults of racism in every sphere of life that manifest biologically.” 2
– Lundy Braun, PhD
1. Spirometry: Mayo Clinic
2. Race, ethnicity and lung function: A brief history, by Lundy Braun
3. Breathing race into the machine: The surprising career of the spirometer from plantation to genetics, by Lundy Braun
4. Different ethnic standards for lung functions, or one standard for all?, by J.E. Myers
5. Science reflects history as society influences science: brief history of "race," "race correction," and the spirometer, by Heidi L. Lujan and Stephen E. DiCarlo
6. The Science and Politics of Racial Research, by William H. Tucker
7. Investigations in the Military and Anthropological Statistics of American Soldiers, by Benjamin Apthorp Gould
8. Defining race/ethnicity and explaining difference in research studies on lung function, by Lundy Braun
9. Racial basis for asbestos lawsuits?; Owens Corning seeks more stringent standards for blacks, by Erin Texeira
THE EFFECT OF A PROBLEM-BASED LEARNING APPROACH ON STUDENT ACADEMIC ACHIEVEMENT IN PRIVATE SECONDARY SCHOOLS IN RIVERS STATE
1.1 Background of the research
When supported by the instructor, communication in the classroom can be a crucial component of every student's learning, demanding the cultivation in students of the ability to discuss, evaluate, and analyze their own and other students' work. This not only helps them learn faster, but it also encourages critical thinking and cooperative learning (Kanlı & Emir, 2013).
In the twenty-first century, several developed and developing countries are questioning their traditional educational philosophies and programs, in which the lecturer is the transmitter and the student the receiver, in favor of educating thinking, problem-solving, evaluating, decision-making, responsible, creative, up-to-date individuals who fit this age of information and technology. In the traditional sense, education programs, lecture materials, education strategies, and statistically determined education output are all studied in terms of cultural reproduction and the maintenance of social order (Tanner and Tanner, 2007).
Notably, traditional education has been chastised for ignoring student life, limiting student participation, and defining a student’s task based on school books and course outline (Dewey, 1997), necessitating the need for progressive education, which necessitates individuals who solve problems, argue, question, change, and lead rather than accumulating information.
This criterion emphasizes the significance of the problem-based learning (PBL) method, which allows students to work in groups on a topic-specific scenario. The basic purpose of an educational program, according to Gagne (1959), is to teach students how to handle problems in both their academic disciplines and their personal life.
This is important because problem-solving abilities let an individual actively adapt to their circumstances; they are also required for people to become inquisitive, problem-solving individuals (Marzano, 1989). As a result, those with these qualifications should be able to think more critically. Problem resolution guides a person's thinking (Kalaycı, 2001).
Problem solving, according to Gagne (1985), stimulates the most complex cognitive processes and enables the simultaneous application of multiple critical abilities, such as learning by doing, developing cause-and-effect links, and analyzing the relationships between concepts and occurrences. As Dewey foresaw, progressive strategies such as problem-based learning (PBL) have become critical at this juncture because they stimulate students to conduct research, discover, and apply their creativity (Delisle, 1997).
The modern need for problem-solving, discussing, questioning, changing, and leading individuals who apply information rather than merely collecting it has highlighted PBL's significance. PBL significantly improves individuals' abilities in all of these areas (Tatar and Oktay, 2011; Peterson and Treagust, 1998). Several studies (Kılınç, 2007; Harland, 2003; Mayer, 2002) suggest that PBL has an impact on students' development of these skills.
1.2 Definition of the problem
Since classroom instruction should be designed in ways that empower learners with problem-solving skills, it is clear that the teaching and learning process has become more varied and engaging, with the possibility of greater personal involvement from students. While Nigerian schools have a long history, the educational environment has shifted from teacher-centered to learner-centered, necessitating the adoption of a problem-based learning style.
Problem-Based Education directs students to conduct research, learn, discuss, select the best option among many solutions, and apply what they have learned in real-life scenarios; in short, it is an approach that teaches students research, teamwork, and observation from multiple perspectives in real-life scenarios (Deveci, 2002; Kaptan and Korkmaz, 2002).
According to PBL, learning takes place as a result of cognitive and social interaction in a problem-oriented media. According to this premise, PBL is defined as a constructivist educational paradigm that involves the teaching of general principles that can be applied to analogous situations as well as information that may be used to future difficulties (Norman and Schmidt, 2000; Greeno, Collins and Resnick, 1996).
Independent studies, on the other hand, usually concentrate on a specific application or procedure in order to determine the impact of PBL on achievement when compared to traditional education. However, few studies have looked at the influence of PBL on academic attainment in non-private secondary schools.
Because current research in this field was needed, the researcher felt inspired to investigate this topic. On this premise, the study focused on the Problem Based Learning technique and its impact on student academic progress in Rivers State private secondary schools.
1.3 Purpose of the research
The overarching goal of this research is to investigate the Problem Based Learning technique and its impact on student academic progress in private secondary schools in Rivers State. Specifically, the research aims:
1. To find out if the Problem Based Learning approach improves students' critical thinking skills.
2. To determine whether or not the Problem Based Learning strategy may increase students' cognitive research skills and social interaction during classroom instruction.
3. To see if the Problem Based Learning approach can provide students with the problem-solving skills they will need in the future.
4. To identify some problems hindering teachers' successful implementation of the Problem Based Learning approach in the classroom.
1.4 Hypothesis of Research
H01: The Problem Based Learning approach has no substantial impact on students’ academic achievement.
H02: The Problem Based Learning technique has no substantial effect on improving students' cognitive research skills or social interaction.
1.5 Importance of the research
This study investigates the impact of Problem Based Learning on student achievement in private schools. It is important for pupils since it will help students appreciate and accept this discovery-learning technique.
The study’s findings will be valuable to tutors since they will design better ways to use the Problem Based Learning approach, which results in successful learning. The study’s findings will also be used by academia and researchers as a reference material to aid them in investigating similar themes. Finally, the study will add experimentally to the corpus of knowledge and identify gaps for future research.
1.6 The scope of the research
This research focuses on the Problem Based Learning strategy and its impact on student academic progress in private secondary schools. The study will look into whether the Problem Based Learning approach improves students' critical thinking skills and will determine whether the Problem Based Learning technique may increase students' cognitive research skills and social interaction during classroom instruction.
It will determine whether the Problem Based Learning approach can provide students with the problem-solving skills they will need in the future, as well as identify certain problems preventing instructors’ successful use of the Problem Based Learning approach during classroom instruction. The study is limited to selected private schools in the Rivers State metropolis of Port Harcourt.
1.7 The study’s limitations
The researcher encountered minor obstacles when conducting the study, as with any human endeavor. The significant constraint was the scarcity of literature on the subject due to the nature of the discourse, so the researcher incurred greater financial expense and spent more time sourcing relevant materials, literature, and data; this is why a limited sample size was used.
Furthermore, the researcher conducted this investigation alongside other academic activities, and because only a few respondents were chosen to answer the study instrument, the results cannot be generalized to other populations. Despite the constraints encountered during the research, all of these factors were minimized in order to provide the best results and make the research effective.
1.8 Term definitions:
Problem-Based Learning: Problem-based learning uses complicated, real-world challenges as the subject matter of the classroom, encouraging students to develop problem-solving abilities and acquire concepts rather than simply memorize facts.
Learning Outcomes: Learning outcomes are statements that explain the knowledge or abilities that students should have at the end of a specific assignment, class, course, or subject.
Academic Achievement: Academic achievement refers to performance results that demonstrate how far a person has progressed toward specified goals that were the focus of activities in instructional settings, such as school, college, and university.
Abdullah, N. I., Tarmizi, R. A., & Abu, R. (2010). The benefits of problem-based learning on mathematical performance and affective qualities in secondary school statistics learning. Procedia Social and Behavioral Sciences, 8, 370-376. https://www.sciencedirect.com/science/article/pii/S1877042810021579
Ajai, J. T., Imoko, B. I., & O'kwu, E. I. (2013). A comparison of the learning efficacy of problem-based learning (PBL) versus traditional algebra teaching methods. Journal of Education and Practice, 4(1), 131-136.
Dehkordi, A. H., & Heydarmejad, M. S. (2008). The effect of problem-based learning and lecturing on the conduct and attitudes of Iranian nursing students. Danish Medical Bulletin, 55(4), 224-226.
Dewey, J. (1997). Experience and education. New York: Macmillan.
Kanlı, E., & Emir, S. (2013). The impact of problem-based learning on the success and creativity levels of talented and typical students. Necatibey Faculty of Education Electronic Journal of Science and Mathematics Education, 7(2), 18-45.
Masek, A. B. (2012). The impact of problem-based learning on electrical engineering students' knowledge acquisition, critical thinking, and intrinsic motivation (Unpublished doctoral thesis). Universiti Tun Hussein Onn Malaysia.
Tanner, D., & Tanner, L. (2007). Curriculum development: Putting theory into action (4th ed.). New Jersey: Pearson Education.
Tarhan, L., & Acar, B. (2007). Problem-based learning in an eleventh-grade chemistry class: "Factors Affecting Cell Potential". Research in Science and Technological Education, 25(3), 351-369.
It is very common for babies and toddlers to have ear infections. One study found that five out of six children will experience an ear infection before their third birthday. Many parents are concerned that an ear infection will affect their child's hearing irreversibly or that the ear infection will go undetected and untreated. But there is good news: most ear infections go away on their own, and those that do not are typically easy to treat. Usually, your child will start to feel better within a few days of visiting the doctor.
If several days have passed and you still think that your baby is not okay, visit your doctor as soon as possible; your child may need a different antibiotic. When the infection clears, fluid may still remain in the middle ear, but it usually disappears within 3-6 weeks. One of the best ways to prevent ear infections is to reduce the risk factors associated with them. You need to avoid smoking around your baby.
It is known that babies who are around smokers have more ear infections. Also, wash your hands frequently; this can help prevent the spread of germs and keep your child from catching a cold or the flu. You should never put your baby down for a nap or for the night with a bottle. And you should not allow your child to spend time with another sick child, so the chances of getting an ear infection are reduced.
Childhood Ear Infections:
Ear infections happen when there is inflammation. It is usually caused by trapped bacteria. It happens in the middle ear, which is the part of the ear that connects to the back of the nose and throat. Otitis media is the most common type of ear infection. It happens when fluid builds up behind the eardrum and parts of the middle ear become infected and swollen.
If your child has an upper respiratory infection, cold, or a sore throat, then bacteria can spread to the middle ear through the Eustachian tubes (these are channels that connect the middle ear to the throat). In response to the infection, the fluid builds up behind the eardrum. Usually, children are more likely to suffer from ear infections compared to adults for two reasons:
- Eustachian tubes of children are smaller and more horizontal, which makes it more difficult for fluid to drain out of the ear
- The immune system of children is underdeveloped and it is less equipped to fight off infections
In some children, fluid can remain trapped in the middle ear for a long time or returns repeatedly, even when there is no infection.
Signs and Symptoms:
Pain in and around the ear is a telltale sign of an ear infection. Many young children develop an ear infection before they are old enough to talk, which means parents are left guessing why their children are not feeling well. But there are some signs and symptoms that can tell you your child has an ear infection, such as:
- A loss of balance
- A fever, especially in younger children
- Difficulty hearing or responding to auditory cues
- Fluid draining from the ear
- Difficulty sleeping
- Crying and irritability
- Tugging or pulling the ear
Some signs that need immediate attention include bloody or pus-like discharge from the ears, severe pain, and high fever. If you want to prevent the ear infection from coming back, you can limit some of the factors that might put your child at risk, such as being around people who smoke and going to bed with a bottle. But in some cases, even when parents take precautions, children may continue to have middle ear infections, sometimes as many as five or six a year. The doctor might want to see if the infection will get better on its own.
But, if the infection keeps coming back and antibiotics do not help, then your doctor may recommend a surgical procedure that places a small ventilation tube in the eardrum to improve the airflow and prevent fluid backup in the middle ear. The most commonly used tubes stay in place for 6-9 months and require follow-up visits until they fall out. If the placement of the tubes still does not prevent infections, a doctor may consider removing the adenoids to prevent infection from spreading to the Eustachian tubes.
Usually, bacteria cause an ear infection. It often begins after a child has a sore throat, cold, or other upper respiratory infection. If the upper respiratory infection is caused by bacteria, these same bacteria may spread to the middle ear. If the upper respiratory infection is caused by a virus, such as a cold, bacteria may be drawn to the microbe-friendly environment and move into the middle ear as a secondary infection. Due to the infection, fluid builds up behind the eardrum.
We know that the ear has three big parts: the outer ear, the middle ear, and the inner ear. The outer ear includes the pinna, the curved flap of the ear we see on the outside leading down to the earlobe, as well as the ear canal, which begins at the opening of the ear and extends to the eardrum. The eardrum is a membrane that separates the outer ear from the middle ear. The middle ear is located between the eardrum and the inner ear, and it is where middle ear infections happen.
The middle ear contains three tiny bones, the malleus, incus, and stapes, which transmit sound vibrations from the eardrum to the inner ear. The bones of the middle ear are surrounded by air. The inner ear contains the labyrinth, which helps us keep our balance. Part of the labyrinth is the cochlea, a snail-shaped organ that converts sound vibrations from the middle ear into electrical signals. The auditory nerve carries these signals from the cochlea to the brain.
Also, other nearby parts of the ear can be involved in ear infections. The Eustachian tube is a small passageway that connects the upper part of the throat to the middle ear. Its job is to supply fresh air to the middle ear, drain fluid, and keep the air pressure at a steady level between the nose and the ear. Adenoids are small pads of tissue located behind the back of the nose, above the throat, and near the Eustachian tubes. Adenoids are mostly made up of immune system cells. They fight infections by trapping bacteria that enter through the mouth.
Types of an Ear Infection:
It is known that there are three main types of ear infections. Each type of ear infection has a different combination of symptoms.
- AOM (acute otitis media): This is the most common type of ear infection in children. Part of the middle ear is infected and swollen. The fluid is trapped behind the eardrum. This leads to pain in the ear, which is known as an earache. Also, your children might get a fever.
- OME (otitis media with effusion): This condition sometimes happens after an ear infection that has run its course and fluid stays trapped behind the eardrum. Children who have this type of ear infection may have no symptoms. But, the doctor will be able to see the fluid behind the eardrum with a special instrument.
- COME (chronic otitis media with effusion): It happens when the fluid remains in the middle ear for a long time or returns over and over again, even though there is no infection. This type of ear infection makes it harder for children to fight new infections and also can affect their hearing.
Why Children Get More Ear Infections Than Adults:
There are several reasons why children are more likely to get ear infections than adults. Their Eustachian tubes are smaller and more level (more horizontal) than those of adults, making it more difficult for fluid to drain out of the ear, even under normal conditions. If the Eustachian tubes are swollen or blocked with mucus due to a cold or other respiratory illness, the fluid may not be able to drain.
The immune system of a child is not as effective as that of an adult, because it is still developing. This makes it harder for children to fight infections. As part of the immune system, the adenoids respond to bacteria passing through the nose and mouth. In some cases, bacteria get trapped in the adenoids, causing a chronic infection that can be passed on to the Eustachian tubes and the middle ear.
Sensors are a product of the future. In the networked world, they determine temperature, pressure, light or acceleration. They help people find their way around, quickly summon the emergency services to an accident, or automatically close the skylights at home if it starts to rain.
But they are actually also a product from the past. Measuring values has always been part of the history of technological development – providing a basis for prompting reactions, in other words changing conditions, or at least serving as a warning for such.
One example is the Bosch bell. This might not sound like a sensor, but it is. Launched by Bosch in the spring of 1923, it was an alarm device for automobiles that warned of falling pressure and the risk of a flat tire by ringing a loud bell.
If air escaping from the tire caused the rim to sink, the clapper attached to it would start dragging on the road surface with every turn of the wheel and, via a hinge, hit the top of a bell. This was an important idea at the time, as the shortage of natural rubber in Europe after the First World War (1914-18) made automobile tires expensive. Bosch recognized a gap in the market and offered five “Bosch bells” (one for the spare) for the price of one car tire.
Saving and reducing
The capabilities of sensors would come to play an increasingly important role at Bosch – but only many decades later. The next signs of their presence can be found in the pressure sensors that were used in the first electronically controlled fuel injection systems from 1967 on. The pressure reading, which was communicated to the control unit, determined the amount of fuel injected so as to ensure optimum combustion – in other words, achieving the lowest possible consumption and exhaust emissions.
After this came sensors for measuring air volume or the oxygen content in exhaust gas. The lambda sensor designed by Bosch and unveiled in 1976 was an intrinsic component for treating the exhaust gases from the three-way catalytic converters that had just become standard.
Shrinking and improving
During the 1980s, the use of sensors at Bosch became a significant sales driver. Then as components in the automotive industry became smaller and smaller, this was bound to spark ideas of miniaturizing sensors, too. After all, they would need to fit into the housings of even the smallest control units. A Bosch engineering team started working on miniaturized successors to the mechanical sensors in 1987, but it was an extremely complex technical challenge. Consequently, it took about six years until they were ready for series production in 1993, and they eventually went into mass production at Bosch from 1995. There were good reasons for this.
A micromechanical construction eventually proved a feasible concept. But these sensors, known as "MEMS" (microelectromechanical systems), which were smaller than a pea, needed equipping with moving parts. One example is a "rocker", which changes position in response to movement, such as in a collision, to indicate whether the airbag has to be deployed. A research team at Bosch succeeded in manufacturing these tiny structures at the start of the 1990s using a completely new process called plasma etching, which is standard for MEMS these days – and became known throughout the world of micromechanics as the "Bosch process". This made it possible to produce MEMS in large quantities.
Not only for automobiles
These little helpers quickly caught on and appeared in almost all electronically controlled or supported functions in automobiles – in gasoline and diesel injection and in anti-lock braking and driver assistance systems, for instance in the ESP electronic stability program. But their small size – ten years ago, the smallest sensors were already just 2.5 millimeters long and wide – gave creative minds the idea of using them in entirely different devices, too.
So in 2005 Bosch founded the subsidiary Bosch Sensortec in order to further develop these micromechanical sensors for laptops, games consoles, and eventually also for smartphones and tablets, which were launched onto the market in subsequent years and soon manufactured in vast quantities. They notice when a tablet’s screen is rotated from landscape to portrait, or can halt the hard drive in the blink of an eye if a laptop falls off a table.
Nowadays, Bosch manufactures approximately four million micromechanical sensors each day, making it the global market leader. And the annual production rate will undoubtedly continue to climb. Not only as a result of the growing demand for sensors in automobiles, but also because more and more everyday technology will function without human input. And this also calls for these little helpers. After all, before the heating can switch itself on or a skylight close automatically, a sensor has to register a sharp drop in temperature or raindrops.
Since 1998 I have been at Bosch, where I am deputy head of the Historical Communications department, working as spokesperson and researcher. I am in charge of product history requests, maintain contacts with technology and transportation museums, and handle history-related topics in Asia, Australia, and Africa.
Before joining Bosch, I studied history and philosophy at the Universities of Konstanz and Hamburg. After graduating, I was an editor of a scientific journal and a research associate at the Deutsches Technikmuseum Berlin.
OhmConnect cited data from the November 2021 EPA publication From Farm to Kitchen: The Environmental Impacts of U.S. Food Waste to look at the environmental impact of the wasted food, as well as the resources wasted in the lost food's production.
Much of this farmland is used to raise livestock and grow corn and soybeans. But not all of it is used to produce foodstuffs for direct human consumption—a lot of it is used to produce food for livestock. This makes livestock and other animal production farms and facilities ancillary beneficiaries of U.S. farming. Agriculture, food production, and related industries (such as food manufacturing and retailing) were responsible for $1.055 trillion of the United States' gross domestic product in 2020—5% of the overall GDP.
One-third of all food produced annually is unconsumed and simply becomes waste. This also means that the resources used to produce that food in the supply chain—water, pesticides, gas or diesel used for freight and delivery, and energy for refrigeration—are also wasted.
The U.S. Environmental Protection Agency concluded that the U.S. wastes between 161 and 335 billion pounds of food per year, equal to anywhere from 492 to 1,032 pounds per person annually. To translate this figure into something most people are aware of and many actively keep track of, this equates to as much as 1,520 calories per person per day wasted, or enough food to feed 150 million people.
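Those per-person figures can be sanity-checked with simple arithmetic; the sketch below assumes a U.S. population of roughly 327 million (an assumed late-2010s estimate, not a figure from the article):

```python
US_POPULATION = 327_000_000  # assumed estimate

for total_lbs in (161e9, 335e9):  # EPA range: pounds of food wasted per year
    per_person = total_lbs / US_POPULATION
    print(f"{total_lbs / 1e9:.0f} billion lb/year -> {per_person:,.0f} lb/person/year")
# 161 billion lb/year -> 492 lb/person/year
# 335 billion lb/year -> 1,024 lb/person/year (the article's 1,032 implies a
# slightly smaller population base)
```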
Food loss and waste per person increased over the last decade and tripled since 1960. Fruits and vegetables are among the foods that go to waste most often, and the consumption stage—typically at home or in restaurants—is responsible for approximately half of that waste.
Every type of food is wasted most during the consumption stage, which occurs in homes, restaurants, and other food service establishments. A 2020 study projecting the environmental benefits of cutting the U.S.'s food loss and waste in half found that addressing households, restaurants, and food processing would have the biggest effect on the environment, whereas addressing institutional food service or retail would have a minimal environmental impact.
According to Brian Roe, professor and faculty lead at the Ohio State Food Waste Collaborative, the average American family can put thousands of dollars of food in the trash each year.
An American Journal of Agricultural Economics study published in 2020 found the loss to be $240 billion in total in homes nationally, breaking down to $1,866 per household (though based on the most recent U.S. Census count of households, that figure is closer to $1,961 per household). That wasted food also represents the following wasted resources:
- Agricultural land wasted: 19,000 square feet
- Water: 19,000 gallons
- Pesticides: 2.5 pounds
- Fertilizer: 44.5 pounds
- Energy: 2,140 kilowatt-hours
- Greenhouse gas emissions: 1,190 pounds CO2
The issue with food loss and waste isn't just about what ends up in the trash can. It's about the loss and waste of everything that went into that potato, or banana, or onion—the water, the land, the pesticides, the fertilizer, and the energy add up to a greater, compounded loss.
To determine the environmental impact of food loss and waste, researchers consider how much food is lost or wasted, the type of food it is, and where in the supply chain it was wasted. The further along the supply chain food is wasted, the greater the impact on the environment because impacts are cumulative.
All told, the greenhouse gas emissions from one person's wasted food annually are equivalent to those from the average passenger car driving 1,336 miles. And the estimated water wasted is roughly what an average American household uses over the course of 63 days.
Ninety percent of food wasted in the supply chain is edible, with inedible things like bones and shells making up the other 10%. Studies put the number of wasted calories per day between 1,100 and 1,520, a sizable portion of the recommended daily caloric intake.
This waste ends up in a landfill or incinerator. According to the EPA, food waste is the most common material in the nation's municipal solid waste, accounting for 24% of what is landfilled and 22% of what is combusted.
North Americans waste more than three times what people waste in the Middle East, North Africa, Latin America, and the Caribbean, and more than 10 times what people in South Asia and sub-Saharan Africa waste.
When looking at food waste and loss by regional wealth, people in the U.S. waste 503 grams per person per day—196 grams more than those in other high-income countries. Food loss decreases as regional wealth decreases: People in low-income countries waste just 43 grams per person per day.
Country by country, the U.S. is surpassed by only two in the generation of food waste (China and India) and two in food waste per person (New Zealand and Ireland).
Fruits and vegetables make up 80% of all food loss and waste in sub-Saharan Africa and 64% of all food loss and waste in industrialized Asia. In North America and Oceania, they make up about half of all wasted food. According to the Food and Agriculture Organization, up to 60% of all fruits and vegetables find their way into landfills.
Some of the food waste is attributed to the financial, technical, and managerial constraints of producing food in countries with a less developed infrastructure, as well as underdeveloped food distribution networks and poor harvest and handling technology and techniques. These together result in billions of dollars in losses yearly. Much of the waste is also attributable to the demand for "perfect" fruits and vegetables.
Cutting the nation's food loss and waste in half could meaningfully conserve resources and reduce the environmental impacts of the food system, according to the EPA. By halving the food loss and waste across the country, the U.S. could lessen the environmental footprint by 3.2 trillion gallons of water as well as 262 billion kWh of energy—that's enough to power 21.5 million U.S. homes for a year.
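That homes-powered equivalence checks out under a reasonable assumption about household electricity use, roughly 12,200 kWh per year (an assumed figure, not from the article):

```python
energy_saved_kwh = 262e9          # from the EPA estimate above
avg_home_kwh_per_year = 12_200    # assumed average U.S. household usage

homes_powered = energy_saved_kwh / avg_home_kwh_per_year
print(f"{homes_powered / 1e6:.1f} million homes")  # ~21.5 million
```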
This story originally appeared on OhmConnect and was produced and distributed in partnership with Stacker Studio.
oropendola, (genus Psarocolius), any of several bird species of the blackbird family (Icteridae) that are common to the canopy of New World tropical forests and known (along with the caciques) for their hanging nests, which may measure up to 2 metres (6.6 feet) long.
Both sexes are largely black or greenish, sometimes with touches of red or brown. Oropendolas have rounded yellow tails and heavy bills swollen at the base to form frontal shields. Males are about 30–50 cm (12–20 inches) long and are more than 50 percent larger than females.
Most species of oropendola breed in colonies, some of which can contain 100 nests hanging from the branches or fronds of a single tree. The colony members select a large tree that is isolated, presumably to reduce the chance that a monkey or other arboreal predator can climb into the colony and raid the nests for eggs and young. Only the female builds the long, windsocklike nest. Inside this woven structure she incubates two white eggs and then feeds the hatchlings.
Males guard the colonies by day but roost separately at night. During courtship, males perform elaborate songs, some of which sound like slashing whips and gurgling chortles. In each colony there is one dominant bird, the alpha male, who acquires the large majority of matings with females. The second-ranked bird, or beta male, acquires only a few matings, whereas lower-ranked males may not mate at all. Males court females in the nest tree and monitor the progression of nest building, so that they can anticipate when each female will be receptive for mating. Subordinate males must harass and pursue females away from the nest tree because the alpha male rules the nest tree where most matings occur.
Oropendolas roam the forest in groups of 2 to 20, searching for fruit and insects. They frequently search for clusters of dead leaves snagged in the canopy and tear the clusters open to feed on hiding spiders and insects. In this process, oropendolas use a search method unique to blackbirds. Unlike most birds, blackbirds have muscles that allow them to open their bill with power, rather than only close it with power. Thus, a foraging technique commonly used by oropendolas is to insert the tip of the bill into a dead leaf cluster and pry it open with the bill so they can peer inside and look for prey.
The most widely distributed species is the crested oropendola (Psarocolius decumanus), found from Panama to Argentina.
How often have you found yourself in a situation where you need to measure an object and there is no tape measure available? It's frustrating when you realize that the only way to get by without one is to guess, which can easily lead to errors. If this has happened more than once, then read on!
In this blog post we'll cover how to read a tape measure as well as some other useful tips about using tape measures that will help make your construction or DIY experience more enjoyable.
Table of Contents
- What Is a Tape Measure?
- Tape Measure Basics
- How to Read a Tape Measure?
- Why Should You Always Use The Metric System When Measuring?
- How to Make Sure That What you're Measuring Is Accurate?
- 10 Amazing Things You Don't Know about Measuring Tape
- Tips and Tricks for Using a Tape Measure
What Is a Tape Measure?
A tape measure is a simple tool used to gauge the distance from one point to another. There are many different types, each with its own advantages. A rigid metal or flat tape suits more precise measuring and usually has its range printed on the blade, with markings such as "Ft." for feet and "In." for inches, so an 8-foot metal tape would read "8 Ft." There are also retractable tapes graduated in centimetres and millimetres rather than feet and inches. The long, flexible reel ("cable") style used for surveying comes in a range of lengths (commonly 100 or 150 metres) and blade widths (usually 16 mm or 25 mm).
Tape Measure Basics
A tape measure is a piece of equipment used to determine an object's linear length, height, or circumference. It consists of a long, graduated strip that rolls up for storage; tapes were historically made from fabric and are now usually plastic, fibreglass, or metal.
Very long tape measures, intended for distances of 100 feet (30 m) or more, are called surveyors' tapes. Measuring tapes can be classified broadly as either "flat" or "curved." The flat variety has a single rigid edge and keeps the same width along its whole blade. Curved tapes have a concave cross-section that stiffens the blade, letting it gauge from a point inside or outside of either edge.
The tape measure's formal history is marked by patents: James Chesterman of Sheffield, England patented a steel measuring tape in 1829, and Alvin J. Fellows of New Haven, Connecticut patented a spring-return pocket tape measure in 1868.
How to Read a Tape Measure?
A person needs to know how to read the measurements on their tape measure. There are two common systems, metric and imperial, and both are easy to read once you know the markings!
To convert between the two, remember that one metre is about 3.28 feet (or 1.09 yards): a 4-metre length is therefore about 13.1 feet, or a little over 4 1/3 yards. Going the other way, one foot is exactly 0.3048 metres and one inch is exactly 2.54 centimetres. The following sections show how to read either type of tape; a small conversion sketch appears below.
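Here is a minimal Python sketch of those conversions (the helper names are our own, not from any measuring library):

```python
# A minimal sketch of the metric/imperial conversions above.
METERS_PER_FOOT = 0.3048      # exact, by definition
CM_PER_INCH = 2.54            # exact, by definition

def meters_to_feet(m: float) -> float:
    return m / METERS_PER_FOOT

def inches_to_cm(inches: float) -> float:
    return inches * CM_PER_INCH

print(f"4 m  = {meters_to_feet(4):.1f} ft")   # 13.1 ft
print(f"8 ft = {8 * METERS_PER_FOOT:.2f} m")  # 2.44 m
print(f"5 in = {inches_to_cm(5):.1f} cm")     # 12.7 cm
```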
1. How to Read a Metric Tape Measure?
A metric tape measure carries a linear scale in which the numbered marks are centimetres and the small marks between them are millimetres; some blades also carry inch graduations along the other edge as a ready conversion aid. To read the tape, first find where the numbers change along the scale, so you know whether the numbered marks are whole centimetres or something coarser, then find the last numbered mark before the point you are measuring and count the millimetre marks beyond it: a point two small marks past the 7 cm line reads 7.2 cm.
2. How to Read an Imperial Tape Measure?
Many people have no idea what the numbers on an imperial tape measure mean. But if you do rough carpentry or gardening, you need to know.
An imperial tape is graduated in feet and inches, with each inch subdivided into halves, quarters, eighths, and sixteenths. The whole-inch lines are longest, and the lines get progressively shorter as the fractions get finer; many blades also highlight every 16 inches (a common stud interval) in red. To take a reading, find the last whole-inch mark before your point, then count the fractional lines beyond it: three sixteenth-marks past the 5-inch line reads 5 3/16 inches.
With these guidelines, anybody can take measurements with precision! In other words, the different line lengths represent different scales, which is not always obvious at a glance; the sketch below shows how to turn a decimal reading into the fraction printed on the blade.
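Here is a small Python sketch, with a hypothetical helper name, that rounds a decimal-inch reading to the nearest sixteenth and prints it the way the blade is labelled:

```python
from fractions import Fraction

def tape_mark(inches: float, denom: int = 16) -> str:
    """Round a decimal-inch reading to the nearest 1/16" graduation
    and format it the way an imperial blade is labelled."""
    ticks = round(inches * denom)
    whole, rem = divmod(ticks, denom)
    if rem == 0:
        return f'{whole}"'
    frac = Fraction(rem, denom)   # reduces 8/16 to 1/2, 4/16 to 1/4, ...
    return f'{whole} {frac.numerator}/{frac.denominator}"'

print(tape_mark(5.1875))  # 5 3/16"
print(tape_mark(3.5))     # 3 1/2"
print(tape_mark(0.76))    # 0 3/4"
```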
Why Should You Always Use The Metric System When Measuring?
The metric system is international, uses clean decimal subdivisions, and represents quantities in a unified way. Even if you are in the US and use the imperial system, a fractional-inch reading often ends up being an approximation, whereas a millimetre count is a simple tally.
In addition, the imperial fractions used for smaller objects like screws, nails, and bolts (sixteenths, thirty-seconds) are easy to misread or mis-add, which can cause confusion in carpentry.
One advantage claimed for imperial units is that their everyday sizes grew out of human hands and feet, so they feel natural for rough work, whereas metric shines with "rigid" digital instruments or machinery where precise readings are required.
How to Make Sure That What you're Measuring Is Accurate?
What makes a measurement accurate is how close it is to the true value (the so-called actual value), and there are three ways to ensure that a particular measurement has that kind of accuracy:
- Plan your experiment so that your measurements have an adequate degree of statistical precision,
- Standardize your measuring instruments over time and conditions,
- Use good judgment when interpreting data.
For a measurement to have any chance of being accurate, there must be consistency among the points that define it. You can't just measure one thing and report it as fact; you have to work against an established set of standards, whether from the National Institute of Standards and Technology or some other standardization body, to ensure that the data you report has been processed appropriately for the methods used and equipment employed.
10 Amazing Things You Don't Know about Measuring Tape
A measuring tape is one of the most valuable tools in your DIY arsenal. It's also a lot more versatile than you might realize! The following are ten facts that we bet you didn't know about this indispensable tool:
It's no secret that tape measures are a handy tool when working with building materials, but did you know the curvature of the blade is what keeps it rigid and easy to read? The concave curve lets the blade "stand out" unsupported while measuring, which is helpful in cases where there aren't any flat surfaces.
A color change or graphical identifier at regular intervals on the blade is invaluable for laying out studs. Common stud spacings are 16", 19.2", and 24" on center; the wider the spacing, the fewer studs support a given sheet of plywood.
Your tape measure will come in handy when you are building a house. Many blades print red numbers every 16 inches to help you get the spacing just right: at 16-inch centers, an 8-foot sheet of plywood spans six equal bays and lands on seven studs, so sheets can be installed without cutting them down.
The black diamonds on the tape are stud or joist marks, placed at 19.2-inch intervals. The arithmetic is simple: five 19.2-inch bays total exactly 96 inches, the length of a standard 8-foot sheet, so a 19.2-inch layout supports each sheet with one fewer member than a 16-inch layout while still landing a support under every sheet edge. The sketch below prints all three common layouts.
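A few lines of Python make the layout arithmetic concrete (layout_marks is a hypothetical helper, not a standard function):

```python
def layout_marks(spacing: float, sheet: float = 96.0) -> list:
    """On-center positions (inches) under one 8-foot sheet."""
    bays = int(round(sheet / spacing))
    return [round(i * spacing, 1) for i in range(bays + 1)]

for s in (16, 19.2, 24):
    print(f'{s}" o.c. -> {layout_marks(s)}')
# 16"   -> [0, 16, 32, 48, 64, 80, 96]        7 studs, 6 bays
# 19.2" -> [0, 19.2, 38.4, 57.6, 76.8, 96.0]  6 joists, 5 bays
# 24"   -> [0, 24, 48, 72, 96]                5 studs, 4 bays
```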
The slot in the end hook (sometimes called a nail grab) will save you if your partner is unavailable to hold one end of the tape. Simply drive a nail or screw at your starting point, hook the slot over its head, and pull the tape out to measure on your own.
The reason there is an odd-looking hook on the end of your tape measure has to do with how it can be hooked onto anything in any direction. The shape provides you more opportunities than a regular square or circular shaped hook would, which means you'll have an easier time getting those tough measurements!
Did you know the bottom of a measuring tape's end hook is sometimes serrated? This makes marking easier if there are no marking tools around: just drag the serrated edge back and forth across the material to scratch a line at your measurement.
When measuring an inside area, note that the end hook is riveted loosely on purpose: it slides by exactly its own thickness, so the tape reads true whether the hook is pushed against a surface (inside measurement) or hooked over an edge (outside measurement). Always pull or push the tape so that it's taut when taking measurements; slack will throw the reading off.
Tape measures are one of the most common tools in a workshop. They're used to measure everything from walls and doorways to studs and rafters! But with all that measuring going on, it can get complicated for workers who need their hands free. Thankfully, modern tape measures have come equipped with a rare earth magnet at the end, which is great because now they don't need both hands anymore!
Who knew your tape measure had an extra measurement printed on it? The length of the tape's own case is stamped on the body. For an inside measurement, such as between two window jambs, butt the case against one side, read the blade at the other, and add the printed case length to get the full dimension. This will save you time and hassle, so you can be confident everything will fit just like it's supposed to.
Tips and Tricks for Using a Tape Measure
If you are a beginning DIYer, chances are the only thing that comes to mind when thinking about measuring is how big the space might be. That's where tapes come in handy! Here are some tips and tricks for using tape measures so your measurements will always be accurate:
Measure double fold:
To measure the width of the fabric, roll the fabric and tape it at each end. This method will give you an accurate measurement that includes the fullness of the folded cloth. Add extra for ample folds or gathers.
When measuring, keep your thumb just above the top edge of the tape at all times to guard against over-measuring. The tape should be pulled snug across any crease on your body (such as under your arm) and then released; otherwise it will pull in slightly as you take up the slack with your thumb. Most people's readings vary by about 1/2 inch on either side of their actual size, so keep this in mind when ordering clothes online as well!
Study the tape before using:
Measure a short, straight line twice to confirm that the first measurement was accurate and that you aren't pulling too hard. Then measure a longer stretch of fabric or several curves (for example, over your shoulder) without stopping in the middle. Finally, measure a long piece of cloth with lengthwise or slanting stripes, keeping both ends held equally taut as you stretch across from one side to the other.
Make it easier to read small measurements:
Pin a thread tail on either side at 1/2-inch intervals along the center crease of your tape measure. This gives you fractional increments when needed for measuring curved necklines and cuffs, and 1/4-inch increments for topstitching.
Measure multiple layers:
To measure the length of several layers, place a pin perpendicular to the tape on each layer and pull the pin out when you reach it from the other side of the cloth. This stops the tape from pulling in. When you measure a bulky fabric this way, add an allowance for thickness so the reading isn't short; a common rule of thumb is about 1/2 inch for every 2 inches of fabric thickness, so a 4-inch-thick stack of upholstery-weight velvet gets roughly an extra inch.
Use a pencil to mark while taking measurements; pencil marks can be erased or rubbed off easily before you pin the pattern to the fabric.
Measure inseams and lengths:
When measuring inside leg seams and armholes, pin a thread tail at 1/2-inch intervals on either side of the tape so you can have even ¼-inch increments when notching patterns or stitching.
Check your measurements:
After you've finished measuring, check the two numbers against each other to make sure they match up. If one is off from the other, that's an indication that you either pulled too much or not enough when taking a measurement (likely because of creases or folds in fabric) and should recheck before cutting patterns out. Keep in mind how tightly or loosely things are made when ordering clothes online as well!
When measuring long pieces
Take more than one reading if possible while pulling tautly with both hands; then average them together for a more accurate result. For example, it may be better to measure the inside leg of pants with ¼-inch seam allowances three times [pulling tautly between each one] than to measure once and hope you didn't encounter a crease or fold in the fabric.
To save yourself time later on, if your measurements are close together, use pins at 1/2-inch intervals along the tape for even 1/4-inch seam allowances.
Press open pleats, gathers, and folds:
When measuring garment pieces with pleats, gathers, or folds (skirts, trousers, etc.), press the folds flat first, then hold both ends evenly taut while gently easing the tape around curves. This helps you take the actual measurement rather than pulling in extra fabric!
Save time when you're not tailoring: Use the tailor's trick of pinning thread tails with an "X" pattern instead of making indentations.
When it comes to the construction industry, many different types of measurements need to be taken. To ensure accuracy, you'll want to read your tape measure correctly every time. This article has provided a complete guide to reading a tape measure correctly, so you're never left guessing again when taking these critical measurements in the future!
The times and amplitude of tides at a locale are influenced by the alignment of the Sun and Moon, by the pattern of tides in the deep ocean, by the amphidromic systems of the oceans, and by the shape of the coastline and near-shore bathymetry. Some shorelines experience a semi-diurnal tide—two nearly equal high and low tides each day. Other locations experience a diurnal tide—only one high and low tide each day. A "mixed tide"—two uneven tides a day, or one high and one low—is also possible.
Tides vary on timescales ranging from hours to years due to a number of factors. To make accurate records, tide gauges at fixed stations measure the water level over time. Gauges ignore variations caused by waves with periods shorter than minutes. These data are compared to the reference (or datum) level usually called mean sea level.
While tides are usually the largest source of short-term sea-level fluctuations, sea levels are also subject to forces such as wind and barometric pressure changes, resulting in storm surges, especially in shallow seas and near coasts.
Tidal phenomena are not limited to the oceans, but can occur in other systems whenever a gravitational field that varies in time and space is present. For example, the solid part of the Earth is affected by tides, though this is not as easily seen as the water tidal movements.
Tide changes proceed via the following stages:
- Sea level rises over several hours, covering the intertidal zone; flood tide.
- The water rises to its highest level, reaching high tide.
- Sea level falls over several hours, revealing the intertidal zone; ebb tide.
- The water stops falling, reaching low tide.
Tides produce oscillating currents known as tidal streams. The moment that the tidal current ceases is called slack water or slack tide. The tide then reverses direction and is said to be turning. Slack water usually occurs near high water and low water. But there are locations where the moments of slack tide differ significantly from those of high and low water.
Tides are commonly semi-diurnal (two high waters and two low waters each day), or diurnal (one tidal cycle per day). The two high waters on a given day are typically not the same height (the daily inequality); these are the higher high water and the lower high water in tide tables. Similarly, the two low waters each day are the higher low water and the lower low water. The daily inequality is not consistent and is generally small when the Moon is over the equator.
Tidal constituents are the net result of multiple influences impacting tidal changes over certain periods of time. Primary constituents include the Earth's rotation, the position of the Moon and Sun relative to the Earth, the Moon's altitude (elevation) above the Earth's equator, and bathymetry. Variations with periods of less than half a day are called harmonic constituents. Conversely, cycles of days, months, or years are referred to as long period constituents.
The tidal forces affect the entire earth, but the movement of the solid Earth is only centimeters. The atmosphere is much more fluid and compressible so its surface moves kilometers, in the sense of the contour level of a particular low pressure in the outer atmosphere.
Principal lunar semi-diurnal constituent
In most locations, the largest constituent is the "principal lunar semi-diurnal", also known as the M2 tidal constituent. Its period is about 12 hours and 25.2 minutes, exactly half a tidal lunar day, which is the average time separating one lunar zenith from the next, and thus is the time required for the Earth to rotate once relative to the Moon. Simple tide clocks track this constituent. The lunar day is longer than the Earth day because the Moon orbits in the same direction the Earth spins. This is analogous to the minute hand on a watch crossing the hour hand at 12:00 and then again at about 1:05½ (not at 1:00).
The Moon orbits the Earth in the same direction as the Earth rotates on its axis, so it takes slightly more than a day—about 24 hours and 50 minutes—for the Moon to return to the same location in the sky. During this time, it has passed overhead (culmination) once and underfoot once (at an hour angle of 00:00 and 12:00 respectively), so in many places the period of strongest tidal forcing is the above-mentioned, about 12 hours and 25 minutes. The moment of highest tide is not necessarily when the Moon is nearest to zenith or nadir, but the period of the forcing still determines the time between high tides.
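Those figures can be recovered from the length of the Moon's phase cycle; a back-of-the-envelope sketch in Python:

```python
# Rough check of the lunar day and M2 period quoted above.
SOLAR_DAY_H = 24.0
SYNODIC_MONTH_H = 29.53 * 24.0        # Moon's phase cycle, hours

earth_rate = 360.0 / SOLAR_DAY_H      # Earth's spin vs the Sun, deg/h
moon_rate = 360.0 / SYNODIC_MONTH_H   # Moon's eastward drift, deg/h
lunar_day = 360.0 / (earth_rate - moon_rate)

h, m = int(lunar_day), (lunar_day % 1) * 60
print(f"lunar day ≈ {h} h {m:.0f} min")      # ≈ 24 h 50 min
print(f"M2 period ≈ {lunar_day / 2:.4f} h")  # ≈ 12.4206 h ≈ 12 h 25.2 min
```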
Because the gravitational field created by the Moon weakens with distance from the Moon, it exerts a slightly stronger than average force on the side of the Earth facing the Moon, and a slightly weaker force on the opposite side. The Moon thus tends to "stretch" the Earth slightly along the line connecting the two bodies. The solid Earth deforms a bit, but ocean water, being fluid, is free to move much more in response to the tidal force, particularly horizontally. As the Earth rotates, the magnitude and direction of the tidal force at any particular point on the Earth's surface change constantly; although the ocean never reaches equilibrium—there is never time for the fluid to "catch up" to the state it would eventually reach if the tidal force were constant—the changing tidal force nonetheless causes rhythmic changes in sea surface height.
Semi-diurnal range differences
Range variation: springs and neaps
The semi-diurnal range (the difference in height between high and low waters over about half a day) varies in a two-week cycle. Approximately twice a month, around new moon and full moon when the Sun, Moon, and Earth form a line (a condition known as syzygy), the tidal force due to the sun reinforces that due to the Moon. The tide's range is then at its maximum; this is called the spring tide. It is not named after the season, but, like that word, derives from the meaning "jump, burst forth, rise", as in a natural spring.
When the Moon is at first quarter or third quarter, the sun and Moon are separated by 90° when viewed from the Earth, and the solar tidal force partially cancels the Moon's. At these points in the lunar cycle, the tide's range is at its minimum; this is called the neap tide, or neaps. Neap is an Anglo-Saxon word meaning "without the power", as in forđganges nip (forth-going without-the-power).
Spring tides result in high waters that are higher than average, low waters that are lower than average, 'slack water' time that is shorter than average, and stronger tidal currents than average. Neaps result in less-extreme tidal conditions. There is about a seven-day interval between springs and neaps.
- Spring tide: Sun and Moon on the same side (zero degree)
- Neap tide: Sun and Moon at 90 degrees
- Spring tide: Sun and Moon at opposite sides
- Neap tide: Sun and Moon at 270 degrees
- Spring tide: Sun and Moon at the same side (end of cycle)
The changing distance separating the Moon and Earth also affects tide heights. When the Moon is closest, at perigee, the range increases, and when it is at apogee, the range shrinks. Every 7 1⁄2 lunations (the full cycles from full moon to new to full), perigee coincides with either a new or full moon causing perigean spring tides with the largest tidal range. Even at its most powerful this force is still weak, causing tidal differences of inches at most.
The shape of the shoreline and the ocean floor changes the way that tides propagate, so there is no simple, general rule that predicts the time of high water from the Moon's position in the sky. Coastal characteristics such as underwater bathymetry and coastline shape mean that individual location characteristics affect tide forecasting; actual high water time and height may differ from model predictions due to the coastal morphology's effects on tidal flow. However, for a given location the relationship between lunar altitude and the time of high or low tide (the lunitidal interval) is relatively constant and predictable, as is the time of high or low tide relative to other points on the same coast. For example, the high tide at Norfolk, Virginia, U.S., predictably occurs approximately two and a half hours before the Moon passes directly overhead.
Land masses and ocean basins act as barriers against water moving freely around the globe, and their varied shapes and sizes affect the size of tidal frequencies. As a result, tidal patterns vary. For example, in the U.S., the East coast has predominantly semi-diurnal tides, as do Europe's Atlantic coasts, while the West coast predominantly has mixed tides.
These include solar gravitational effects, the obliquity (tilt) of the Earth's equator and rotational axis, the inclination of the plane of the lunar orbit and the elliptical shape of the Earth's orbit of the sun.
Phase and amplitude
Because the M2 tidal constituent dominates in most locations, the stage or phase of a tide, denoted by the time in hours after high water, is a useful concept. Tidal stage is also measured in degrees, with 360° per tidal cycle. Lines of constant tidal phase are called cotidal lines, which are analogous to contour lines of constant altitude on topographical maps. High water is reached simultaneously along the cotidal lines extending from the coast out into the ocean, and cotidal lines (and hence tidal phases) advance along the coast. Semi-diurnal and long phase constituents are measured from high water, diurnal from maximum flood tide. This and the discussion that follows is precisely true only for a single tidal constituent.
For an ocean in the shape of a circular basin enclosed by a coastline, the cotidal lines point radially inward and must eventually meet at a common point, the amphidromic point. The amphidromic point is at once cotidal with high and low waters, which is satisfied by zero tidal motion. (The rare exception occurs when the tide encircles an island, as it does around New Zealand, Iceland and Madagascar.) Tidal motion generally lessens moving away from continental coasts, so that crossing the cotidal lines are contours of constant amplitude (half the distance between high and low water) which decrease to zero at the amphidromic point. For a semi-diurnal tide the amphidromic point can be thought of roughly like the center of a clock face, with the hour hand pointing in the direction of the high water cotidal line, which is directly opposite the low water cotidal line. High water rotates about the amphidromic point once every 12 hours in the direction of rising cotidal lines, and away from ebbing cotidal lines. This rotation is generally clockwise in the southern hemisphere and counterclockwise in the northern hemisphere, and is caused by the Coriolis effect. The difference of cotidal phase from the phase of a reference tide is the epoch. The reference tide is the hypothetical constituent "equilibrium tide" on a landless Earth measured at 0° longitude, the Greenwich meridian.
In the North Atlantic, because the cotidal lines circulate counterclockwise around the amphidromic point, the high tide passes New York Harbor approximately an hour ahead of Norfolk Harbor. South of Cape Hatteras the tidal forces are more complex, and cannot be predicted reliably based on the North Atlantic cotidal lines.
History of tidal physics
Investigation into tidal physics was important in the early development of heliocentrism and celestial mechanics, with the existence of two daily tides being explained by the Moon's gravity. Later the daily tides were explained more precisely by the interaction of the Moon's and the sun's gravity.
Seleucus of Seleucia theorized around 150 B.C. that tides were caused by the Moon.
Medieval understanding of the tides was primarily based on works of Muslim astronomers, which became available through Latin translation starting from the 12th century. Abu Ma'shar (d. circa 886), in his Introductorium in astronomiam, taught that ebb and flood tides were caused by the moon (although earlier in Europe, Bede (d. 736) also reckoned that the moon is involved). Abu Ma'shar discussed the effects of wind and moon's phases relative to the sun on the tides. In the 12th century, al-Bitruji (d. circa 1204) contributed the notion that the tides were caused by the general circulation of the heavens.
Simon Stevin in his 1608 De spiegheling der Ebbenvloet, The theory of ebb and flood, dismissed a large number of misconceptions that still existed about ebb and flood. Stevin pleaded for the idea that the attraction of the Moon was responsible for the tides and spoke in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made.
In 1609 Johannes Kepler also correctly suggested that the gravitation of the Moon caused the tides, which he based upon ancient observations and correlations. It was originally mentioned in Ptolemy's Tetrabiblos as having derived from ancient observation.
Galileo Galilei in his 1632 Dialogue Concerning the Two Chief World Systems, whose working title was Dialogue on the Tides, gave an explanation of the tides. The resulting theory, however, was incorrect as he attributed the tides to the sloshing of water caused by the Earth's movement around the sun. He hoped to provide mechanical proof of the Earth's movement. The value of his tidal theory is disputed. Galileo rejected Kepler's explanation of the tides.
Isaac Newton (1642–1727) was the first person to explain tides as the product of the gravitational attraction of astronomical masses. His explanation of the tides (and many other phenomena) was published in the Principia (1687) and used his theory of universal gravitation to explain the lunar and solar attractions as the origin of the tide-generating forces. Newton and others before Pierre-Simon Laplace worked the problem from the perspective of a static system (equilibrium theory), that provided an approximation that described the tides that would occur in a non-inertial ocean evenly covering the whole Earth. The tide-generating force (or its corresponding potential) is still relevant to tidal theory, but as an intermediate quantity (forcing function) rather than as a final result; theory must also consider the Earth's accumulated dynamic tidal response to the applied forces, which response is influenced by ocean depth, the Earth's rotation, and other factors.
Maclaurin used Newton's theory to show that a smooth sphere covered by a sufficiently deep ocean under the tidal force of a single deforming body is a prolate spheroid (essentially a three-dimensional oval) with major axis directed toward the deforming body. Maclaurin was the first to write about the Earth's rotational effects on motion. Euler realized that the tidal force's horizontal component (more than the vertical) drives the tide. In 1744 Jean le Rond d'Alembert studied tidal equations for the atmosphere which did not include rotation.
In 1770 James Cook's barque HMS Endeavour grounded on the Great Barrier Reef. Attempts were made to refloat her on the following tide which failed, but the tide after that lifted her clear with ease. Whilst she was being repaired in the mouth of the Endeavour River Cook observed the tides over a period of seven weeks. At neap tides both tides in a day were similar, but at springs the tides rose 7 feet (2.1 m) in the morning but 9 feet (2.7 m) in the evening.
Pierre-Simon Laplace formulated a system of partial differential equations relating the ocean's horizontal flow to its surface height, the first major dynamic theory for water tides. The Laplace tidal equations are still in use today. William Thomson, 1st Baron Kelvin, rewrote Laplace's equations in terms of vorticity which allowed for solutions describing tidally driven coastally trapped waves, known as Kelvin waves.
Others including Kelvin and Henri Poincaré further developed Laplace's theory. Based on these developments and the lunar theory of E W Brown describing the motions of the Moon, Arthur Thomas Doodson developed and published in 1921 the first modern development of the tide-generating potential in harmonic form: Doodson distinguished 388 tidal frequencies. Some of his methods remain in use.
The tidal force produced by a massive object (Moon, hereafter) on a small particle located on or in an extensive body (Earth, hereafter) is the vector difference between the gravitational force exerted by the Moon on the particle, and the gravitational force that would be exerted on the particle if it were located at the Earth's center of mass. The solar gravitational force on the Earth is on average 179 times stronger than the lunar, but because the Sun is on average 389 times farther from the Earth, its field gradient is weaker. The solar tidal force is 46% as large as the lunar. More precisely, the lunar tidal acceleration (along the Moon–Earth axis, at the Earth's surface) is about 1.1 × 10−7 g, while the solar tidal acceleration (along the Sun–Earth axis, at the Earth's surface) is about 0.52 × 10−7 g, where g is the gravitational acceleration at the Earth's surface. Venus has the largest effect of the other planets, at 0.000113 times the solar effect.
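The 46% figure follows directly from the inverse-cube scaling; a quick order-of-magnitude check in Python using standard mass and distance values:

```python
# Check of the "solar tidal force is 46% of the lunar" claim.
# Tidal acceleration scales as M / d**3 (inverse cube, not inverse square).
M_SUN, M_MOON = 1.989e30, 7.342e22   # masses, kg
D_SUN, D_MOON = 1.496e11, 3.844e8    # mean distances from Earth, m

ratio = (M_SUN / D_SUN**3) / (M_MOON / D_MOON**3)
print(f"solar / lunar tidal force ≈ {ratio:.2f}")   # ≈ 0.46
```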
The ocean's surface is closely approximated by an equipotential surface, (ignoring ocean currents) commonly referred to as the geoid. Since the gravitational force is equal to the potential's gradient, there are no tangential forces on such a surface, and the ocean surface is thus in gravitational equilibrium. Now consider the effect of massive external bodies such as the Moon and Sun. These bodies have strong gravitational fields that diminish with distance and act to alter the shape of an equipotential surface on the Earth. This deformation has a fixed spatial orientation relative to the influencing body. The Earth's rotation relative to this shape causes the daily tidal cycle. Gravitational forces follow an inverse-square law (force is inversely proportional to the square of the distance), but tidal forces are inversely proportional to the cube of the distance. The ocean surface moves because of the changing tidal equipotential, rising when the tidal potential is high, which occurs on the parts of the Earth nearest to and furthest from the Moon. When the tidal equipotential changes, the ocean surface is no longer aligned with it, so the apparent direction of the vertical shifts. The surface then experiences a down slope, in the direction that the equipotential has risen.
Laplace's tidal equations
Pierre-Simon Laplace obtained his tidal equations by simplifying the equations of fluid dynamics under four assumptions:
- The vertical (or radial) velocity is negligible, and there is no vertical shear—this is a sheet flow.
- The forcing is only horizontal (tangential).
- The Coriolis effect appears as an inertial force (fictitious) acting laterally to the direction of flow and proportional to velocity.
- The surface height's rate of change is proportional to the negative divergence of velocity multiplied by the depth. As the horizontal velocity stretches or compresses the ocean as a sheet, the volume thins or thickens, respectively.
The boundary conditions dictate no flow across the coastline and free slip at the bottom.
The Coriolis effect (inertial force) steers flows moving towards the equator to the west and flows moving away from the equator toward the east, allowing coastally trapped waves. Finally, a dissipation term can be added which is an analog to viscosity.
Amplitude and cycle time
The theoretical amplitude of oceanic tides caused by the moon is about 54 centimetres (21 in) at the highest point, which corresponds to the amplitude that would be reached if the ocean possessed a uniform depth, there were no landmasses, and the Earth were rotating in step with the moon's orbit. The sun similarly causes tides, of which the theoretical amplitude is about 25 centimetres (9.8 in) (46% of that of the moon) with a cycle time of 12 hours. At spring tide the two effects add to each other to a theoretical level of 79 centimetres (31 in), while at neap tide the theoretical level is reduced to 29 centimetres (11 in). Since the orbits of the Earth about the sun, and the moon about the Earth, are elliptical, tidal amplitudes change somewhat as a result of the varying Earth–sun and Earth–moon distances. This causes a variation in the tidal force and theoretical amplitude of about ±18% for the moon and ±5% for the sun. If both the sun and moon were at their closest positions and aligned at new moon, the theoretical amplitude would reach 93 centimetres (37 in).
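A trivial arithmetic check of the spring and neap figures above: the two equilibrium amplitudes add when the forces align and subtract when they oppose.

```python
# Combining the equilibrium amplitudes quoted above.
lunar_cm, solar_cm = 54, 25
print("spring tide:", lunar_cm + solar_cm, "cm")   # 79 cm (forces aligned)
print("neap tide:  ", lunar_cm - solar_cm, "cm")   # 29 cm (forces opposed)
```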
Real amplitudes differ considerably, not only because of depth variations and continental obstacles, but also because wave propagation across the ocean has a natural period of the same order of magnitude as the rotation period: if there were no land masses, it would take about 30 hours for a long wavelength surface wave to propagate along the equator halfway around the Earth (by comparison, the Earth's lithosphere has a natural period of about 57 minutes). Earth tides, which raise and lower the bottom of the ocean, and the tide's own gravitational self attraction are both significant and further complicate the ocean's response to tidal forces.
Earth's tidal oscillations introduce dissipation at an average rate of about 3.75 terawatts. About 98% of this dissipation is by marine tidal movement. Dissipation arises as basin-scale tidal flows drive smaller-scale flows which experience turbulent dissipation. This tidal drag creates torque on the moon that gradually transfers angular momentum to its orbit, and a gradual increase in Earth–moon separation. The equal and opposite torque on the Earth correspondingly decreases its rotational velocity. Thus, over geologic time, the moon recedes from the Earth, at about 3.8 centimetres (1.5 in)/year, lengthening the terrestrial day. Day length has increased by about 2 hours in the last 600 million years. Assuming (as a crude approximation) that the deceleration rate has been constant, this would imply that 70 million years ago, day length was on the order of 1% shorter with about 4 more days per year.
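Under the same crude constant-rate assumption, the check takes two lines:

```python
# Constant-rate check of the day-length claim above.
slowdown_h = 2.0 * 70 / 600   # hours lost 70 Myr ago, at 2 h per 600 Myr
print(f"day shorter by {slowdown_h / 24:.1%}")                   # ≈ 1.0%
print(f"days per year ≈ {365.25 * 24 / (24 - slowdown_h):.1f}")  # ≈ 368.8
```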
Observation and prediction
From ancient times, tidal observation and discussion has increased in sophistication, first marking the daily recurrence, then tides' relationship to the sun and moon. Pytheas travelled to the British Isles about 325 BC and seems to be the first to have related spring tides to the phase of the moon.
In the 2nd century BC, the Babylonian astronomer, Seleucus of Seleucia, correctly described the phenomenon of tides in order to support his heliocentric theory. He correctly theorized that tides were caused by the moon, although he believed that the interaction was mediated by the pneuma. He noted that tides varied in time and strength in different parts of the world. According to Strabo (1.1.9), Seleucus was the first to link tides to the lunar attraction, and that the height of the tides depends on the moon's position relative to the sun.
The Naturalis Historia of Pliny the Elder collates many tidal observations, e.g., the spring tides are a few days after (or before) new and full moon and are highest around the equinoxes, though Pliny noted many relationships now regarded as fanciful. In his Geography, Strabo described tides in the Persian Gulf having their greatest range when the moon was furthest from the plane of the equator. All this despite the relatively small amplitude of Mediterranean basin tides. (The strong currents through the Euripus Strait and the Strait of Messina puzzled Aristotle.) Philostratus discussed tides in Book Five of The Life of Apollonius of Tyana. Philostratus mentions the moon, but attributes tides to "spirits". In Europe around 730 AD, the Venerable Bede described how the rising tide on one coast of the British Isles coincided with the fall on the other and described the time progression of high water along the Northumbrian coast.
The first tide table in China was recorded in 1056 AD primarily for visitors wishing to see the famous tidal bore in the Qiantang River. The first known British tide table is thought to be that of John Wallingford, who died Abbot of St. Albans in 1213, based on high water occurring 48 minutes later each day, and three hours earlier at the Thames mouth than upriver at London.
William Thomson (Lord Kelvin) led the first systematic harmonic analysis of tidal records starting in 1867. The main result was the building of a tide-predicting machine using a system of pulleys to add together six harmonic time functions. It was "programmed" by resetting gears and chains to adjust phasing and amplitudes. Similar machines were used until the 1960s.
The first known sea-level record of an entire spring–neap cycle was made in 1831 on the Navy Dock in the Thames Estuary. Many large ports had automatic tide gauge stations by 1850.
William Whewell first mapped co-tidal lines ending with a nearly global chart in 1836. In order to make these maps consistent, he hypothesized the existence of amphidromes where co-tidal lines meet in the mid-ocean. These points of no tide were confirmed by measurement in 1840 by Captain Hewett, RN, from careful soundings in the North Sea.
The tidal forces due to the Moon and Sun generate very long waves which travel all around the ocean following the paths shown in co-tidal charts. The time when the crest of the wave reaches a port then gives the time of high water at the port. The time taken for the wave to travel around the ocean also means that there is a delay between the phases of the moon and their effect on the tide. Springs and neaps in the North Sea, for example, are two days behind the new/full moon and first/third quarter moon. This is called the tide's age.
The ocean bathymetry greatly influences the tide's exact time and height at a particular coastal point. There are some extreme cases; the Bay of Fundy, on the east coast of Canada, is often stated to have the world's highest tides because of its shape, bathymetry, and its distance from the continental shelf edge. Measurements made in November 1998 at Burntcoat Head in the Bay of Fundy recorded a maximum range of 16.3 metres (53 ft) and a highest predicted extreme of 17 metres (56 ft). Similar measurements made in March 2002 at Leaf Basin, Ungava Bay in northern Quebec gave similar values (allowing for measurement errors), a maximum range of 16.2 metres (53 ft) and a highest predicted extreme of 16.8 metres (55 ft). Ungava Bay and the Bay of Fundy lie similar distances from the continental shelf edge, but Ungava Bay is free of pack ice for only about four months every year while the Bay of Fundy rarely freezes.
Southampton in the United Kingdom has a double high water caused by the interaction between the M2 and M4 tidal constituents. Portland has double low waters for the same reason. The M4 tide is found all along the south coast of the United Kingdom, but its effect is most noticeable between the Isle of Wight and Portland because the M2 tide is lowest in this region.
Because the oscillation modes of the Mediterranean Sea and the Baltic Sea do not coincide with any significant astronomical forcing period, the largest tides are close to their narrow connections with the Atlantic Ocean. Extremely small tides also occur for the same reason in the Gulf of Mexico and Sea of Japan. Elsewhere, as along the southern coast of Australia, low tides can be due to the presence of a nearby amphidrome.
Isaac Newton's theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and offered hope for detailed understanding. Although it may seem that tides could be predicted via a sufficiently detailed knowledge of the instantaneous astronomical forcings, the actual tide at a given location is determined by astronomical forces accumulated over many days. Precise results require detailed knowledge of the shape of all the ocean basins—their bathymetry and coastline shape.
Current procedure for analysing tides follows the method of harmonic analysis introduced in the 1860s by William Thomson. It is based on the principle that the astronomical theories of the motions of sun and moon determine a large number of component frequencies, and at each frequency there is a component of force tending to produce tidal motion, but that at each place of interest on the Earth, the tides respond at each frequency with an amplitude and phase peculiar to that locality. At each place of interest, the tide heights are therefore measured for a period of time sufficiently long (usually more than a year in the case of a new port not previously studied) to enable the response at each significant tide-generating frequency to be distinguished by analysis, and to extract the tidal constants for a sufficient number of the strongest known components of the astronomical tidal forces to enable practical tide prediction. The tide heights are expected to follow the tidal force, with a constant amplitude and phase delay for each component. Because astronomical frequencies and phases can be calculated with certainty, the tide height at other times can then be predicted once the response to the harmonic components of the astronomical tide-generating forces has been found.
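To make the procedure concrete, here is a minimal Python/NumPy sketch, not any official tidal package: a synthetic year of hourly heights containing only the M2 and S2 constituents is fitted by linear least squares to recover each constituent's amplitude and phase. The constituent speeds (degrees per hour) are standard; the record itself is made up.

```python
import numpy as np

hours = np.arange(0, 24 * 365, 1.0)             # one year, hourly samples
omega = {"M2": 28.9841042, "S2": 30.0}          # speeds, deg/hour

rng = np.random.default_rng(0)
truth = 1.2 * np.cos(np.radians(omega["M2"] * hours) - 0.6) \
      + 0.4 * np.cos(np.radians(omega["S2"] * hours) - 1.1)
record = truth + 0.1 * rng.standard_normal(hours.size)  # add "weather" noise

# Design matrix: one cosine and one sine column per constituent,
# so A cos(wt) + B sin(wt) is linear in the unknowns A, B.
cols = []
for w in omega.values():
    th = np.radians(w * hours)
    cols += [np.cos(th), np.sin(th)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, record, rcond=None)

for i, name in enumerate(omega):
    A, B = coef[2 * i], coef[2 * i + 1]
    amp, phase = np.hypot(A, B), np.arctan2(B, A)
    print(f"{name}: amplitude {amp:.3f}, phase {phase:.3f} rad")
# Recovers ≈ (1.2, 0.6) for M2 and ≈ (0.4, 1.1) for S2 despite the noise.
```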
The main patterns in the tides are
- the twice-daily variation
- the difference between the first and second tide of a day
- the spring–neap cycle
- the annual variation
The Highest Astronomical Tide is the perigean spring tide when both the sun and the moon are closest to the Earth.
When confronted by a periodically varying function, the standard approach is to employ Fourier series, a form of analysis that uses sinusoidal functions as a basis set, having frequencies that are zero, one, two, three, etc. times the frequency of a particular fundamental cycle. These multiples are called harmonics of the fundamental frequency, and the process is termed harmonic analysis. If the basis set of sinusoidal functions suits the behaviour being modelled, relatively few harmonic terms need to be added. Orbital paths are very nearly circular, so sinusoidal variations are suitable for tides.
For the analysis of tide heights, the Fourier series approach has in practice to be made more elaborate than the use of a single frequency and its harmonics. The tidal patterns are decomposed into many sinusoids having many fundamental frequencies, corresponding (as in the lunar theory) to many different combinations of the motions of the Earth, the moon, and the angles that define the shape and location of their orbits.
For tides, then, harmonic analysis is not limited to harmonics of a single frequency. In other words, the harmonics are multiples of many fundamental frequencies, not just of the fundamental frequency of the simpler Fourier series approach. Their representation as a Fourier series having only one fundamental frequency and its (integer) multiples would require many terms, and would be severely limited in the time-range for which it would be valid.
The study of tide height by harmonic analysis was begun by Laplace, William Thomson (Lord Kelvin), and George Darwin. A.T. Doodson extended their work, introducing the Doodson Number notation to organise the hundreds of resulting terms. This approach has been the international standard ever since, and the complications arise as follows: the tide-raising force is notionally given by sums of several terms. Each term is of the form

A cos(ωt + p),

where A is the amplitude, ω is the angular frequency usually given in degrees per hour corresponding to t measured in hours, and p is the phase offset with regard to the astronomical state at time t = 0. There is one term for the moon and a second term for the sun. The phase p of the first harmonic for the moon term is called the lunitidal interval or high water interval. The next step is to accommodate the harmonic terms due to the elliptical shape of the orbits. Accordingly, the value of A is not a constant but also varying with time, slightly, about some average figure. Replace it then by A(t) where A is another sinusoid, similar to the cycles and epicycles of Ptolemaic theory. Accordingly,

A(t) = A (1 + Aa cos(ωat + pa)),

which is to say an average value A with a sinusoidal variation about it of magnitude Aa, with frequency ωa and phase pa. Thus the simple term is now the product of two cosine factors:

A [1 + Aa cos(ωat + pa)] cos(ωt + p).

Given that for any x and y

cos x cos y = ½ [cos(x + y) + cos(x − y)],

it is clear that a compound term involving the product of two cosine terms each with their own frequency is the same as three simple cosine terms that are to be added at the original frequency and also at frequencies which are the sum and difference of the two frequencies of the product term. (Three, not two terms, since the whole expression is A [1 + Aa cos(ωat + pa)] cos(ωt + p).) Consider further that the tidal force on a location depends also on whether the moon (or the sun) is above or below the plane of the equator, and that these attributes have their own periods also incommensurable with a day and a month, and it is clear that many combinations result. With a careful choice of the basic astronomical frequencies, the Doodson Number annotates the particular additions and differences to form the frequency of each simple cosine term.
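A two-line numeric check of that product-to-sum identity, in Python:

```python
# cos(x)·cos(y) = ½[cos(x+y) + cos(x−y)]: modulating one cosine by another
# yields terms at the sum and difference frequencies.
import math

x, y = 1.3, 0.4
lhs = math.cos(x) * math.cos(y)
rhs = 0.5 * (math.cos(x + y) + math.cos(x - y))
print(lhs, rhs)                 # both ≈ 0.2464
assert math.isclose(lhs, rhs)
```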
Remember that astronomical tides do not include weather effects. Also, changes to local conditions (sandbank movement, dredging harbour mouths, etc.) away from those prevailing at the measurement time affect the tide's actual timing and magnitude. Organisations quoting a "highest astronomical tide" for some location may exaggerate the figure as a safety factor against analytical uncertainties, distance from the nearest measurement point, changes since the last observation time, ground subsidence, etc., to avert liability should an engineering work be overtopped. Special care is needed when assessing the size of a "weather surge" by subtracting the astronomical tide from the observed tide.
Careful Fourier data analysis over a nineteen-year period (the National Tidal Datum Epoch in the U.S.) uses frequencies called the tidal harmonic constituents. Nineteen years is preferred because the Earth, moon and sun's relative positions repeat almost exactly in the Metonic cycle of 19 years, which is long enough to include the 18.613-year lunar nodal tidal constituent. This analysis can be done using only the knowledge of the forcing period, but without detailed understanding of the mathematical derivation, which means that useful tidal tables have been constructed for centuries. The resulting amplitudes and phases can then be used to predict the expected tides. These are usually dominated by the constituents near 12 hours (the semi-diurnal constituents), but there are major constituents near 24 hours (diurnal) as well. Longer-term constituents are fortnightly (14-day), monthly, and semiannual. Most coastlines are dominated by semi-diurnal tides, but some areas such as the South China Sea and the Gulf of Mexico are primarily diurnal. In the semi-diurnal areas, the primary constituents M2 (lunar) and S2 (solar) have slightly different periods, so that the relative phases, and thus the amplitude of the combined tide, change fortnightly (a 14-day period).
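The fortnightly beat follows from the small difference between the M2 and S2 speeds:

```python
# The fortnightly M2/S2 beat mentioned above.
M2_H, S2_H = 12.4206, 12.0          # constituent periods, hours
beat_h = 1.0 / abs(1.0 / S2_H - 1.0 / M2_H)
print(f"beat period ≈ {beat_h / 24:.1f} days")  # ≈ 14.8 days: spring–neap
```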
On a global cotidal map of the M2 constituent, each cotidal line differs by one hour from its neighbors, and the thicker lines show tides in phase with equilibrium at Greenwich. The lines rotate around the amphidromic points counterclockwise in the northern hemisphere so that from Baja California Peninsula to Alaska and from France to Ireland the M2 tide propagates northward. In the southern hemisphere this direction is clockwise. On the other hand, the M2 tide propagates counterclockwise around New Zealand, but this is because the islands act as a dam and permit the tides to have different heights on the islands' opposite sides. (The tides do propagate northward on the east side and southward on the west coast, as predicted by theory.)
The exception is at Cook Strait where the tidal currents periodically link high to low water. This is because cotidal lines 180° around the amphidromes are in opposite phase, for example high water across from low water at each end of Cook Strait. Each tidal constituent has a different pattern of amplitudes, phases, and amphidromic points, so the M2 patterns cannot be used for other tide components.
Because the moon is moving in its orbit around the earth and in the same sense as the Earth's rotation, a point on the earth must rotate slightly further to catch up so that the time between semidiurnal tides is not twelve but 12.4206 hours—a bit over twenty-five minutes extra. The two peaks are not equal. The two high tides a day alternate in maximum heights: lower high (just under three feet), higher high (just over three feet), and again lower high. Likewise for the low tides.
When the Earth, moon, and sun are in line (sun–Earth–moon, or sun–moon–Earth) the two main influences combine to produce spring tides; when the two forces are opposing each other as when the angle moon–Earth–sun is close to ninety degrees, neap tides result. As the moon moves around its orbit it changes from north of the equator to south of the equator. The alternation in high tide heights becomes smaller, until they are the same (at the lunar equinox, the moon is above the equator), then redevelop but with the other polarity, waxing to a maximum difference and then waning again.
The tides' influence on current flow is much more difficult to analyse, and data is much more difficult to collect. A tidal height is a simple number which applies to a wide region simultaneously. A flow has both a magnitude and a direction, both of which can vary substantially with depth and over short distances due to local bathymetry. Also, although a water channel's center is the most useful measuring site, mariners object when current-measuring equipment obstructs waterways. A flow proceeding up a curved channel is the same flow, even though its direction varies continuously along the channel. Surprisingly, flood and ebb flows are often not in opposite directions. Flow direction is determined by the upstream channel's shape, not the downstream channel's shape. Likewise, eddies may form in only one flow direction.
Nevertheless, current analysis is similar to tidal analysis: in the simple case, at a given location the flood flow is in mostly one direction, and the ebb flow in another direction. Flood velocities are given positive sign, and ebb velocities negative sign. Analysis proceeds as though these are tide heights.
In more complex situations, the main ebb and flood flows do not dominate. Instead, the flow direction and magnitude trace an ellipse over a tidal cycle (on a polar plot) instead of along the ebb and flood lines. In this case, analysis might proceed along pairs of directions, with the primary and secondary directions at right angles. An alternative is to treat the tidal flows as complex numbers, as each value has both a magnitude and a direction.
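Here is a minimal sketch of that complex-number representation, with made-up ellipse axes:

```python
# East and north velocity components packed as real and imaginary parts;
# over one tidal cycle the complex velocity traces an ellipse.
import numpy as np

t = np.linspace(0.0, 12.42, 500)       # hours: one M2 cycle
w = 2 * np.pi / 12.42                  # angular frequency, rad/h
u = 1.0 * np.cos(w * t)                # east component, knots (made up)
v = 0.3 * np.sin(w * t)                # north component, knots (made up)
flow = u + 1j * v                      # complex velocity

speed = np.abs(flow)                   # magnitude, knots
bearing = np.degrees(np.angle(flow))   # direction, measured from east
print(f"major axis ≈ {speed.max():.2f} kn, minor axis ≈ {speed.min():.2f} kn")
```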
Tide flow information is most commonly seen on nautical charts, presented as a table of flow speeds and bearings at hourly intervals, with separate tables for spring and neap tides. The timing is relative to high water at some harbour where the tidal behaviour is similar in pattern, though it may be far away.
As with tide height predictions, tide flow predictions based only on astronomical factors do not incorporate weather conditions, which can completely change the outcome.
The tidal flow through Cook Strait between the two main islands of New Zealand is particularly interesting, as the tides on each side of the strait are almost exactly out of phase, so that one side's high water is simultaneous with the other's low water. Strong currents result, with almost zero tidal height change in the strait's center. Yet, although the tidal surge normally flows in one direction for six hours and in the reverse direction for six hours, a particular surge might last eight or ten hours with the reverse surge enfeebled. In especially boisterous weather conditions, the reverse surge might be entirely overcome so that the flow continues in the same direction through three or more surge periods.
A further complication for Cook Strait's flow pattern is that the tide at the north side (e.g. at Nelson) follows the common bi-weekly spring–neap tide cycle (as found along the west side of the country), but the south side's tidal pattern has only one cycle per month, as on the east side: Wellington, and Napier.
The graph of Cook Strait's tides shows separately the high water and low water height and time, through November 2007; these are not measured values but instead are calculated from tidal parameters derived from years-old measurements. Cook Strait's nautical chart offers tidal current information. For instance the January 1979 edition for 41°13·9’S 174°29·6’E (north west of Cape Terawhiti) refers timings to Westport while the January 2004 issue refers to Wellington. Near Cape Terawhiti in the middle of Cook Strait the tidal height variation is almost nil while the tidal current reaches its maximum, especially near the notorious Karori Rip. Aside from weather effects, the actual currents through Cook Strait are influenced by the tidal height differences between the two ends of the strait and as can be seen, only one of the two spring tides at the north end (Nelson) has a counterpart spring tide at the south end (Wellington), so the resulting behaviour follows neither reference harbour.
Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. In the first case, the energy amount is entirely determined by the timing and tidal current magnitude. However, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, and ship navigation is impeded. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance at Saint-Malo, France), and they face many difficulties. Aside from environmental issues, simply withstanding corrosion and biological fouling poses engineering challenges.
Tidal power proponents point out that, unlike wind power systems, generation levels can be reliably predicted, save for weather effects. While some generation is possible for most of the tidal cycle, in practice turbines lose efficiency at lower operating rates. Since the power available from a flow is proportional to the cube of the flow speed, the times during which high power generation is possible are brief.
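A minimal sketch of the cube law, assuming a sinusoidal current with an invented 2.5 m/s peak, shows how strongly the recoverable energy is concentrated in the fastest part of the cycle.

```python
import math

RHO = 1025.0        # seawater density, kg/m^3
PEAK_SPEED = 2.5    # assumed peak current, m/s

def power_density(v):
    """Kinetic power per square metre of swept area, in W/m^2."""
    return 0.5 * RHO * v ** 3

# Sample one semi-diurnal cycle of a sinusoidal current.
steps = 1000
speeds = [abs(PEAK_SPEED * math.sin(2 * math.pi * k / steps))
          for k in range(steps)]
densities = [power_density(v) for v in speeds]

top_quarter = sorted(densities, reverse=True)[:steps // 4]
share = sum(top_quarter) / sum(densities)
print(f"peak power density: {max(densities) / 1000:.1f} kW/m^2")
print(f"energy delivered in the fastest quarter of the cycle: {share:.0%}")
```

Under these assumptions, roughly half of the cycle's kinetic energy arrives in its fastest quarter, which is why the high-generation windows are brief.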
Tidal flows are important for navigation, and significant errors in position occur if they are not accommodated. Tidal heights are also important; for example many rivers and harbours have a shallow "bar" at the entrance which prevents boats with significant draft from entering at low tide.
Until the advent of automated navigation, competence in calculating tidal effects was important to naval officers. The certificate of examination for lieutenants in the Royal Navy once declared that the prospective officer was able to "shift his tides".
Tidal flow timings and velocities appear in tide charts or a tidal stream atlas. Tide charts come in sets. Each chart covers a single hour between one high water and another (they ignore the leftover 24 minutes) and shows the average tidal flow for that hour. An arrow on the tidal chart indicates the direction and the average flow speed (usually in knots) for spring and neap tides. If a tide chart is not available, most nautical charts have "tidal diamonds" which relate specific points on the chart to a table giving tidal flow direction and speed.
The standard procedure to counteract tidal effects on navigation is to (1) calculate a "dead reckoning" position (or DR) from travel distance and direction, (2) mark the chart (with a vertical cross like a plus sign) and (3) draw a line from the DR in the tide's direction. The distance the tide moves the boat along this line is computed from the tidal speed and the elapsed time, and this gives an "estimated position" or EP (traditionally marked with a dot in a triangle).
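A minimal sketch of this three-step plot on a flat chart; the course, speed, and tidal set/drift below are invented for illustration (real values come from the ship's log and a tidal stream atlas or tidal diamond table).

```python
import math

def move(pos, bearing_deg, distance_nm):
    """Advance a flat-chart position (x east, y north, nautical miles)."""
    theta = math.radians(90 - bearing_deg)   # compass bearing -> math angle
    return (pos[0] + distance_nm * math.cos(theta),
            pos[1] + distance_nm * math.sin(theta))

start = (0.0, 0.0)
dr = move(start, 75, 6.0)   # (1) DR: steering 075 deg at 6 kn for one hour
ep = move(dr, 150, 1.5)     # (2)-(3) one hour of tide: set 150 deg, drift 1.5 kn
print(f"DR: ({dr[0]:.2f}, {dr[1]:.2f}) nm   EP: ({ep[0]:.2f}, {ep[1]:.2f}) nm")
```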
Nautical charts display the water's "charted depth" at specific locations with "soundings" and the use of bathymetric contour lines to depict the submerged surface's shape. These depths are relative to a "chart datum", which is typically the water level at the lowest possible astronomical tide (although other datums are commonly used, especially historically, and tides may be lower or higher for meteorological reasons) and are therefore the minimum possible water depth during the tidal cycle. "Drying heights" may also be shown on the chart, which are the heights of the exposed seabed at the lowest astronomical tide.
Tide tables list each day's high and low water heights and times. To calculate the actual water depth, add the charted depth to the published tide height. Depth for other times can be derived from tidal curves published for major ports. The rule of twelfths can suffice if an accurate curve is not available. This approximation presumes that the increase in depth in the six hours between low and high water is: first hour — 1/12, second — 2/12, third — 3/12, fourth — 3/12, fifth — 2/12, sixth — 1/12.
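A minimal sketch combining the rule of twelfths with the depth arithmetic above; the low water, high water, and charted depth are invented example values.

```python
TWELFTHS = [1, 2, 3, 3, 2, 1]   # fraction of the range gained in each hour

def height_above_datum(low, high, hours_after_low):
    """Approximate tide height 0-6 hours after low water."""
    whole, frac = int(hours_after_low), hours_after_low % 1
    twelfths = sum(TWELFTHS[:whole])
    if whole < 6:
        twelfths += TWELFTHS[whole] * frac
    return low + (high - low) * twelfths / 12

# Example: low water 0.8 m, high water 4.4 m, charted depth 1.9 m.
tide = height_above_datum(0.8, 4.4, 2.5)        # 2.5 h after low water
print(f"tide height: {tide:.2f} m, actual depth: {1.9 + tide:.2f} m")
```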
Intertidal ecology is the study of ecosystems between the low- and high-water lines along a shore. At low water, the intertidal zone is exposed (or emersed), whereas at high water, it is underwater (or immersed). Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as among the different species. The most important interactions may vary according to the type of intertidal community. The broadest classifications are based on substrates — rocky shore or soft bottom.
Intertidal organisms experience a highly variable and often hostile environment, and have adapted to cope with and even exploit these conditions. One easily visible feature is vertical zonation, in which the community divides into distinct horizontal bands of specific species at each elevation above low water. A species' ability to cope with desiccation determines its upper limit, while competition with other species sets its lower limit.
Humans use intertidal regions for food and recreation. Overexploitation can damage intertidal zones directly. Other anthropogenic actions such as introducing invasive species and climate change have large negative effects. Marine Protected Areas are one option communities can apply to protect these areas and aid scientific research.
The approximately fortnightly tidal cycle has large effects on intertidal and marine organisms. Hence their biological rhythms tend to occur in rough multiples of this period. Many other animals, such as the vertebrates, display similar rhythms. Examples include gestation and egg hatching. In humans, the menstrual cycle lasts roughly a lunar month, an even multiple of the tidal period. Such parallels at least hint at the common descent of all animals from a marine ancestor.
Shallow areas in otherwise open water can experience rotary tidal currents, flowing in directions that continually change; the flow direction (though not the flow itself) completes a full rotation in about 12 1⁄2 hours (for example, the Nantucket Shoals).
In addition to oceanic tides, large lakes can experience small tides and even planets can experience atmospheric tides and Earth tides. These are continuum mechanical phenomena. The first two take place in fluids. The third affects the Earth's thin solid crust surrounding its semi-liquid interior (with various modifications).
Large lakes such as Superior and Erie can experience tides of 1 to 4 cm, but these can be masked by meteorologically induced phenomena such as seiche. The tide in Lake Michigan has been described as ranging from 0.5 to 1.5 inches (13 to 38 mm), or as much as 1 3⁄4 inches (44 mm). This is so small that other, larger effects completely mask any tide, and as such these lakes are considered non-tidal.
Atmospheric tides are negligible at ground level and aviation altitudes, masked by weather's much more important effects. Atmospheric tides are both gravitational and thermal in origin and are the dominant dynamics from about 80 to 120 kilometres (50 to 75 mi), above which the molecular density becomes too low to support fluid behavior.
Earth tides or terrestrial tides affect the entire Earth's mass, which acts similarly to a liquid gyroscope with a very thin crust. The Earth's crust shifts (in/out, east/west, north/south) in response to lunar and solar gravitation, ocean tides, and atmospheric loading. While negligible for most human activities, terrestrial tides' semi-diurnal amplitude can reach about 55 centimetres (22 in) at the equator—15 centimetres (5.9 in) due to the sun—which is important in GPS calibration and VLBI measurements. Precise astronomical angular measurements require knowledge of the Earth's rotation rate and polar motion, both of which are influenced by Earth tides. The semi-diurnal M2 Earth tides are nearly in phase with the moon with a lag of about two hours.
Galactic tides are the tidal forces exerted by galaxies on stars within them and on satellite galaxies orbiting them. The galactic tide's effects on the Solar System's Oort cloud are believed to be responsible for 90 percent of long-period comets.
Tsunamis, the large waves that occur after earthquakes, are sometimes called tidal waves, but the name reflects their resemblance to the tide rather than any actual link to it. Other phenomena unrelated to tides but described with the word tide are rip tide, storm tide, hurricane tide, and black or red tides. Many of these usages are historic and refer to the earlier meaning of tide as "a portion of time, a season".
See also
- Clairaut's theorem
- Coastal erosion
- Head of tide
- Hough function
- King tide
- Lunar Laser Ranging Experiment
- Lunar phase
- Marine terrace
- Mean high water spring
- Mean low water spring
- Orbit of the Moon
- Primitive equations
- Tidal island
- Tidal limit
- Tidal locking
- Tidal prism
- Tidal reach
- Tidal resonance
- Tidal river
- Tidal triggering
- Tide pool
External links
- NOAA Tides and Currents information and data
- History of tide prediction
- Department of Oceanography, Texas A&M University
- UK Admiralty Easytide
- UK, South Atlantic, British Overseas Territories and Gibraltar tide times from the UK National Tidal and Sea Level Facility
- Tide Predictions for Australia, South Pacific & Antarctica
- Tide and Current Predictor, for stations around the world
Make a Game
Create a game you love, or invent a new one!
This Maker lesson:
Allow the students to interpret the Maker brief for themselves, or read it aloud to set the scene.
Make a Game
What games do you like to play? Do you play them inside or outside? How do you play them? What are the rules? How many people can play? Do you need a game board?
Make a game you love, or invent a new game.
2. Find a Problem
As a part of your preparation you might find some inspirational images, or illustrations to support your students in defining a problem from the Maker brief.
As students look at the Maker brief and any other supporting materials that you have collected, facilitate a discussion to steer them toward a problem. Once they have decided upon how they want to solve the problem, ensure that they record it on their worksheet.
Students should initially work independently, spending three minutes to generate as many ideas as they can to solve the problem. They can use the bricks from the set during the brainstorming process, or sketch out their ideas.
Students can now take turns sharing their ideas within their groups. Once all of the ideas have been shared, each group should select the best idea(s) to make. Be prepared to help facilitate this process to ensure that the students choose something that is possible to make.
Encourage diversity; not all student groups have to make the same thing.
Encourage the students to share their learning process. Provide them with the opportunity to share their thinking, ideas, and reflections using the documentation tool(s) they have available.
Note: To encourage maximum creativity, you may choose not to share this image with students.
4. Choose the Best Idea
Students must record up to three design criteria (three things their design must achieve) so that they can refer to it when they review and revise their solution.
5. Go MAKE
Students make one of the ideas using the LEGO® Simple Machines Set and other materials as needed.
Reinforce that students do not have to come up with the whole solution from the start. For example, they can investigate different parts of a solution or a specific function of an element separately before working on the overall solution as such.
During the making process, remind students to test and analyze their idea as they go, making improvements where necessary. This iterative process should happen several times as a natural part of the improvement of the solution. If you want students to submit their documentation at the end of the lesson, ensure that they record their design journey during the making stage using sketches and photos of their models.
6. Evaluate What You Have Made
Students test and evaluate their designs against the design criteria they recorded before they started making their solution.
Allow the students to select the tool(s) they find appropriate for capturing and sharing their reflections. Encourage them to use text, videos, images, sketchnotes, or another creative medium.
7. Present Your Model
Allow time for each student or student group to present what they have made to the class. A good way to do this is to set out a table large enough to display all of the models. If time is short, two groups can present to each other.
Students use the Maker self-assessment rubric to evaluate their design work. Each rubric includes four levels of achievement. The intention is to help students reflect on what they have done well and what they could have done better. Each rubric can be linked to engineering related learning goals from the NGSS.
9. Tidy Up
Ensure that you leave enough time at the end of the lesson to break the models down and sort them back into the LEGO boxes. You will need approximately 10 minutes to do this.
After completing this lesson, students will have:
Defined a clear design need
Developed their ability to iterate and improve design solutions
Developed their problem-solving and communication skills
Used and understood the design process
Use craft materials that you already have in your classroom to add another dimension to this lesson. Some materials could be:
Recycled materials and objects from nature
Fabric or felt
Foam, pom-poms, or beads
Science and Engineering Practices
3-5-ETS1-1, 3-5-ETS1-2, 3-5-ETS1-3
Disciplinary Core Ideas
ETS1.B, (3-5-ETS1-2), (3-5-ETS1-3)
Common Core State Standards
RI.5.1, RI.5.7, W.5.8
Bacteriophage, We See You!
A century has elapsed since Frederick William Twort, a British microbiologist, noticed transparent, glass-like spots formed of dead bacterial cells in colonies of micrococci. For a long time after their discovery, research into bacteriophages remained phenomenological because adequate experimental tools were lacking. Scientists had no way to study in detail the specific features of the antibacterial impact caused by bacteriophages, since phages are invisible not only to the naked eye but also to light microscopy. The advent of electron microscopy brought the study of viruses, including the viruses of bacteria, to a fundamentally new level.
Electron microscopy studies suggest that bacteriophages are actually nanoorganisms rather than microorganisms, since their size does not exceed 100 nm. The same studies have shown that bacteriophages are highly diverse in structure, which raises the question of their nomenclature. The very first classification, dating back to 1943, was based on the specific structural features described with the help of electron microscopy. One of its founders, Ernst Ruska, distinguished three types of bacteriophages by their morphological characteristics (Ackermann, 2009).
According to the decision of the International Committee on Taxonomy of Viruses (ICTV), bacteriophages are viruses that specifically infect bacterial and archaeal cells. The set of characters used to identify bacteriophages at the species level includes the shape and size of the phage capsid, the type of nucleic acid (DNA or RNA) forming its genome, and the presence or absence of an envelope.
The currently used systematics of bacteriophages, developed in 1967, relied on a classification comprising six morphotypes. However, the discovery of new bacteriophages has since added new families, genera, and species. Advances in molecular biological tools established additional classification criteria that take into account the type of nucleic acid and/or the protein composition of the phage.
The use of state-of-the-art molecular techniques in the bacteriophage research has revealed manifold specific features of these interesting organisms. In turn, bacteriophages turned out to be a most useful methodological tool for molecular biologists (Brussow, 2013).
If there is a head, there is a tail
Actually, bacteriophages have a comparatively simple structure: each virus is a complex of a nucleic acid and proteins packaged in a specific manner. They may be curiously shaped, but approximately 96% of all known bacteriophages have a "tailed" phenotype (Matsuzaki et al., 2005). They have a "head" shaped as an icosahedron (a protein reservoir housing the packaged nucleic acid) and a "tail," a protein structure carrying the elements that are able to bind stably to receptors (specific proteins or polysaccharides) on the surface of bacteria. Individual species of "tailed" bacteriophages differ in the size of their "head" as well as in the size and fine structure of their "tail."
A bacteriophage species is identified according to its ultrastructural characteristics, which are described using the negative contrast technique. Any phage-containing suspension can be used as a sample, be it water from a natural source, animal intestinal lavage specimens, or bacterial cell suspension after incubation with a bacteriophage in a laboratory. A drop of prepared suspension is covered with a special copper grid with a polymer film on it for the bacteriophages to sorb. The grid is then treated with a counterstain (usually uranyl acetate or phosphotungstic acid) that encompasses bacteriophage particles and creates a dark background, rendering bacteriophages, which have low electron density, visible in the electron microscope.
Over 6300 bacteriophages have been described by electron microscopy so far (Ackerman and Tiekotter, 2012; Ackermann and Prangishvili, 2012). It emerges that not all bacteriophages have clearly distinguishable “head” and “tail”; as for their hereditary matter, the phages with double-strand DNA are the most abundant. The systematics of bacteriophages is very dynamic since novel phages are constantly being discovered (Ackermann, 2007).
Hunting for bacteria
Advances in electron microscopy techniques made it possible to visualize not only bacteriophages, but also their reproduction. The penetration of “tailed” bacteriophages into the cell has been studied in most detail, including the molecular mechanisms underlying the “injection” of phage DNA into the cytoplasm of a bacterial cell (Guerrero-Ferreira and Wright, 2013).
The typical behavior of bacteriophages "attacking" a bacterium is demonstrated by a lytic phage. The phage attaches itself to the bacterial surface, using its receptors as an "anchor." Then its "tail" pierces the bacterial wall with the help of specialized proteins to form a "channel" for the phage nucleic acid to pass into the cell. Within half an hour, the protein and nucleic acid components of the bacteriophage are synthesized in the infected bacterial cell to assemble new phage particles. Then the cell is destroyed to release mature virions.
The Prokaryote Kingdom (the organisms lacking a nucleus) comprises Bacteria and Archaea. The members of these subkingdoms differ in the structure of the cell wall, in the specific features of their vital activities, and in the degree of resistance to environmental factors (the majority of archaea live under extreme conditions). Although the archaeal viruses found so far are rather few, their morphological diversity considerably exceeds that of bacteriophages. Some archaeophages are tailed, as is typical of bacteriophages, but the overwhelming majority have unique morphotypes, including virions shaped as ellipsoids, spindles, drops, and bottles, either tailless or carrying two tails, as well as spherical and rod-like virions. The known morphological diversity of archaeophages is likely to be only the tip of an iceberg.
The unique characteristics of archaeophages, along with the three cell lineages found on the planet—bacteria, archaea, and eukaryotes (the organisms possessing a cell nucleus)—suggest that there exist three specific virus "domains" that emerged in the course of a long-term coevolution of viruses and their hosts, although some viruses still retain traces of their common origin (Pina et al., 2011).
The combination of counterstaining and ultrathin sections* makes it possible to trace all stages of bacteriophage reproduction, including the sorption of phage particles on the surface of bacterial cells, their penetration into the cell, and their copying. Unfortunately, this approach is much less developed than bacteriophage visualization and identification by counterstaining. And yet the ultrastructural characteristics of the bacteriophage life cycle are helpful in assessing the efficiency of newly developed phage therapies.
Bacteriophages are undoubtedly a unique phenomenon on the planet: on the one hand, they have a rather simple structure; on the other hand, they are tremendously diverse both in terms of their morphology and potential “victims.”
These nanoorganisms are not just safe for us – they are “friendly” because they can kill pathogenic bacterial cells without affecting any cells of higher organisms, including the cells of humans, agricultural animals, and plants. This property allows us to use bacteriophages for treating bacterial infections following the principle that the enemy of our enemy is our friend.
Phage therapy has great potential not only because phages can kill bacteria, but also because of the high specificity of the phage–host interaction. Finally, since phages are a natural phenomenon, we can attack pathogenic bacteria while avoiding harmful chemical agents.
*To obtain ultrathin sections, cells are embedded into a special resin; the resulting hard blocks are cut into sections 60–80 nm thick, using an ultramicrotome equipped with a glass or diamond knife
- Ackermann H. W., Prangishvili D. Prokaryote viruses studied by electron microscopy // Archives of Virology. 2012. V. 157. P. 1843–1849.
- Ackermann H. W., Tiekotter K. L. Murphy's law – if anything can go wrong, it will // Bacteriophage. 2012. V. 2. N. 2. P. 122–129.
- Bacteriophages: Methods and Protocols / Ed. A. M. Kropinski, R. J. Clokie. Humana Press, 2009. V. 1.
- Duckworth D. H. Who discovered bacteriophage? // Bacteriological Reviews. 1976. V. 40. N. 4. P. 793–802.
- Introduction: a short history of virology // Viruses and Man: A History of Interactions / Ed. M. W. Taylor. Springer, 2014. P. 1–21.
- Krylov V. N. Phage therapy in terms of bacteriophage genetics: hopes, prospects, safety, limitations // Russian Journal of Genetics. 2001. V. 37. N. 7. P. 869–887.
- Matsuzaki S., Rashel M., Uchiyama J. et al. Bacteriophage therapy: a revitalized therapy against bacterial infectious diseases // J. Infect. Chemother. 2005. V. 11. P. 211–219.
The photos are courtesy of the authors, and the drawings are by Zhenya Vlassov.
Despite the colloquial use of the term “bipolar” and references to the condition in popular culture, bipolar disorder is often misunderstood by the public. While the disorder does feature alternating moods, it is not simple moodiness or unpredictability. Bipolar disorder is defined by specific mood episodes on opposite poles of the mood spectrum: “manic,” or elevated, and depressed.
These mood episodes come with many additional symptoms beyond changes in mood, including altered cognition, sleep and behavior. Bipolar disorder is a severe mental health condition that sometimes requires inpatient treatment to address life-threatening risks and to stabilize people so that in the future they can manage the condition on an outpatient basis.
What Is Bipolar Disorder?
Psychiatrists have been diagnosing and treating bipolar disorder since the mid-1800s. The condition features alternating mood episodes. It was originally called “manic depression,” a term that persists to this day in popular culture and song, though clinicians no longer use it as a formal diagnosis.
Bipolar disorder is defined by the presence of at least one distinct mood episode. While most people with bipolar disorder also have depressive or mixed episodes, the Diagnostic and Statistical Manual of Mental Disorders (DSM) requires only a history of manic episodes for the bipolar I diagnosis.
Manic and depressive episodes cause holistic changes to thinking and behavior. People who are depressed have less mental and physical energy, while people who are manic or hypomanic have more.
The sudden intensity of a manic episode can be unpleasant and dangerous. Manic episodes are sometimes severe enough to lead to hospitalization, especially when they include psychotic features like delusions or hallucinations.
Some of the people that originally defined bipolar disorder in the late 19th and early 20th century viewed it as a spectrum disorder, with there being stages of bipolar disorder rather than specific or discrete types of bipolar disorder.
While this concept is not reflected in the way bipolar disorder is diagnosed, it can be helpful for some people who are trying to better understand their experience with the disorder. People who have more depressive episodes or symptoms and therefore fall on the depressive end of the spectrum may need different interventions than do people who frequently experience and need to manage manic or hypomanic symptoms.
Types of Bipolar Disorder
People can be confused by the different types of bipolar disorder and what is meant by terms like manic bipolar disorder, mild bipolar disorder and bipolar I versus bipolar II.
For many people, these distinctions aren’t important. However, it can help to know that bipolar I disorder is more severe than bipolar II disorder and that the difference between them relates to the severity of associated manic episodes. Severe depressive episodes carry risks, but manic episodes are much more dangerous than hypomanic episodes.
Bipolar I disorder is the most severe form of bipolar disorder. It is defined by the presence of manic episodes. There is no requirement regarding depressive episodes. However, most people with bipolar I disorder experience both manic and depressive episodes.
Severe manic episodes are associated with many risks. These include impulsive behavior like reckless driving, excessive spending and unsafe sex. Sometimes people who are manic also experience psychosis. They may start having auditory hallucinations, such as “hearing voices,” or develop grandiose delusions.
Someone who has bipolar I with psychotic features may believe that they are imbued with supernatural powers and act on those beliefs in ways that put them or others at risk of harm. People experiencing extreme manic symptoms usually require a brief period of inpatient hospitalization until they mentally recover to the point they no longer experience psychosis.
For people to be diagnosed with bipolar II disorder, they must have experienced at least one hypomanic episode and at least one major depressive episode. While it is possible for them to experience psychosis during either kind of mood episode, people with bipolar II disorder are much less likely than people with bipolar I disorder to have a psychotic break.
Hypomanic episodes affect people’s functioning but not to the same degree as manic episodes. People do not require hospitalization for hypomanic episodes. Many people with bipolar II disorder experience them as positive periods of increased energy and creativity.
The pleasurable aspects of hypomanic episodes are what cause many people with bipolar disorder to avoid taking medications that control their disorder. However, hypomanic episodes often also have unpleasant aspects like racing thoughts and agitation.
People with bipolar II rapid cycling type typically experience more mood episodes per year than people with other types of bipolar II disorder.
When considering a diagnosis of cyclothymia versus bipolar disorder, the main distinguishing factor between the two is severity. Cyclothymia is even milder than bipolar II disorder. While people with bipolar II disorder have full episodes of hypomania and major depression, people with cyclothymia only have symptoms of hypomania and depression.
People with bipolar cyclothymia have never had a full mood episode as defined in the DSM. They may not even recognize they have an underlying disorder and might experience their shifts of mood as seasonal or contextual. While untreated cyclothymia is less severe than untreated bipolar I or bipolar II disorder, it can affect relationships and work. It can also drive people to use substances to try to regulate their moods.
People who experience a bipolar mixed episode are at increased risk for impulsive and self-destructive behavior, including self-harm and suicide attempts. While mixed state bipolar disorder might seem like a contradiction, it is actually relatively common for people who are depressed to experience sudden bursts of energy that make them feel irritable or agitated. Similarly, people experiencing manic symptoms can sometimes start feeling hopeless.
What makes mixed episodes more dangerous is that people who are severely depressed often lack the energy or motivation to act on suicidal ideation, while people in a mixed state are more likely to act on depressed or hopeless thoughts. Episodes with mixed features can also include psychotic symptoms that make them even riskier and more likely to require hospitalization.
Some people experience bipolar rapid cycling type, also known as rapid cycle bipolar disorder. It is important to understand that the term “rapid cycling” does not mean a person has sudden shifts in mood from one day to the next or mood swings throughout the day. That kind of mood lability typically indicates another disorder, such as borderline personality disorder. In bipolar cycling, people alternate between mood episodes that last a few weeks or one or two months instead of typical bipolar episodes that last for several months each.
Symptoms of Bipolar Disorder
Signs of bipolar disorder can be subtle at first. Some people may experience alterations to cognition, such as bipolar memory loss before they experience any mood symptoms. They might notice they are having difficulty recalling words or remembering plans before more distinctive symptoms of depression or mania start to occur. Early bipolar signs often include disruptions in sleep and increased irritability.
Episodes of what some people with the disorder call "bipolar anger" are often a precursor to a full bipolar mood episode. As brain chemistry shifts, increased irritability can spur people to snap or lash out at others.
Bipolar rage attacks can occur in the context of depression or mania. Research shows that increased anger and aggression in bipolar disorder is independent of the polarity of the current mood episode but is more common in acute mood episodes, especially when psychotic features are present.
While some bipolar symptoms are independent of mood episodes, most of the time, symptoms of bipolar disorder are indicative of one of three mood states: depression, mania or hypomania.
- Depression – Bipolar depression symptoms are generally the same as symptoms of major depressive disorder, except in cases where mixed features are present. While worsening mood and hopeless thoughts are typically associated with depression, the onset of a depressive episode is often first signaled by a loss of energy and changes to appetite, sleep and cognition. People who are depressed have more difficulty making decisions or maintaining the motivation to pursue personally important activities.
- Mania – The onset of bipolar mania is often indicated by increased energy level and mental activity. In the early stages of a manic episode, people may notice that their thoughts race from topic to topic. They may develop sudden new interests that progress to compulsive behavior as the manic episode develops. For example, a new interest that develops in the early phase of a manic episode can lead to someone spending thousands of dollars on items related to that interest in the acute phase of a manic episode.
- Hypomania – The symptoms of hypomania are similar to the symptoms of mania, just more subtle. Many people experience feelings of inspiration at the onset of a hypomanic episode. They may feel more creative or adventurous and decide that they want to start a major new project. Pleasant symptoms like these cause some people to embrace their bipolar disorder even if it means accepting less pleasant episodes or symptoms. It’s important to note that not all hypomanic symptoms are benign — people can often feel irritable, anxious or out of control when they are hypomanic.
Causes of Bipolar Disorder
Some people may ask, “Is bipolar genetic?” or “Is bipolar hereditary?” The disorder does have strong genetic components. Research published in Scientific American indicated that someone with a parent who has bipolar disorder is six times more likely than others to struggle with the condition. The risk is even more pronounced if both parents have bipolar disorder.
The most powerful bipolar disorder causes are biological differences that can be inherited. Research suggests that bipolar disorder is caused by changes in brain structure and function, including changes in levels of the neurotransmitters serotonin, dopamine and norepinephrine.
While environmental factors cannot cause bipolar disorder, they can increase the risk that a person will develop the disorder. They can also trigger its onset, causing people to experience bipolar symptoms earlier than they would have otherwise. Bipolar triggers include stress, sleep deprivation, trauma and substance abuse.
How Is Bipolar Disorder Diagnosed?
As with any other mental health diagnosis, professionals make a bipolar disorder diagnosis through a series of clinical interviews. They may use a screening tool or bipolar test with targeted questions to determine if a person meets DSM criteria for a bipolar diagnosis.
Some people might be diagnosed with bipolar disorder for the first time during an episode of inpatient treatment. People with bipolar I disorder are often hospitalized after their first manic episode. Hospital staff makes observations over several days to determine the most appropriate diagnosis for people who exhibit both psychotic symptoms and elevated mood.
It can be harder to diagnose bipolar disorder when a person is experiencing a depressive episode. Mental health professionals engage in a careful process of differential diagnosis to determine if a person who has symptoms of depression has also ever had symptoms of mania or hypomania. They must determine whether a person experiences the distinct mood episodes that indicate bipolar disorder or shorter-term mood instability, which may indicate another kind of disorder, such as a personality disorder.
Who Is at Risk of Bipolar Disorder?
The risk factors for bipolar disorder include the following:
- A family history of bipolar disorder or other mood disorders
- Early experiences that affect brain function, such as head trauma
- Childhood trauma or growing up in an unstable or violent home
- Substance abuse in late childhood or early adolescence
- Sleep deprivation, especially when chronic
- Long periods of elevated stress
Substance abuse can cause changes in the brain that can trigger the onset of bipolar disorder. However, it is unlikely that anyone without genetic or biological risk factors will develop the disorder, even when several environmental risk factors are present.
Bipolar Disorder Statistics
About 3 percent of the United States population has bipolar disorder in a given year. Bipolar disorder usually emerges for the first time in early adulthood but can also arise in adolescence. People with bipolar disorder suffer from the most severe functional impacts of any mood disorder, with 83 percent of people experiencing serious impairment.
Additional bipolar statistics include the following:
- About 65 percent of people with bipolar disorder have a manic or hypomanic episode before they have a depressive episode.
- From 25 to 50 percent of people with bipolar disorder attempt suicide at least once.
- Suicide is the cause of death for over 15 percent of people with bipolar disorder.
- This means that the bipolar suicide rate is 25 times higher than the suicide rate in the general population.
If people spend years dealing with untreated bipolar symptoms, they are much more likely to experience increasingly worse mood episodes until they receive treatment. They are also at an increased risk of developing co-occurring conditions like substance use disorders.
If you or someone you know grapples with untreated bipolar disorder along with an addiction to drugs or alcohol, help is available. The Recovery Village operates treatment centers across the United States that provide integrated treatment options for people with comorbid mental health and substance use disorders. Contact a representative at The Recovery Village to learn about specialized treatment options and how you can get started on the road to recovery.
An eighteenth-century map of the island of Saint Vincent, surveyed in 1773. Saint Vincent had been a French colony until 1763, when it was ceded to Britain with the Treaty of Paris. Britain immediately launched a military surveying campaign on the island, although the survey was not completed until 1773. The British viewed the island as a potential center of sugar production. They demanded that the indigenous Carib and Black Carib population sell their land to representatives of the British colonial administration. The natives' refusal resulted in a full-scale military assault on the Caribs, with the objective of capturing and deporting them from the island. (That was eventually achieved in 1796, when 5,000 Caribs were deported from Saint Vincent to the tiny island of Baliceaux near Bequia, and again in 1797, when another 5,000 were exiled to Roatán.) Meanwhile, in 1773 the two sides signed a peace treaty which ended the conflict, the First Carib War (1769–1773), and outlined the British and Carib territories on the island. The final survey was used to make the current map, which was originally produced in 1775 and translated into French by Georges-Louis Le Rouge in 1778. It was one of many maps and charts of the region used by the French Navy during the American War of Independence. A year later, in 1779, France, assisted by the local Black Caribs, re-captured Saint Vincent, which was eventually returned to Britain four years later with the Treaty of Versailles (1783).
Original English map title: St. Vincent, from an Actual Survey made in the year 1773, after the Treaty with the Caribs… This Island of St. Vincent is 18 Miles 1/8 Long, and 11 Miles 1/5 Broad, has 22 Rivers, capable of turning Sugar Mills, and contains 84,286 Acres. French map title: Isle St. Vincent Levée en 1773, aprés le Traité fait avec les Caraibes. Traduit de L’Anglais. A Paris Chez le Rouge rue des Grands Augustins 1778. Avec Privilege du Roi. (Shown frames are not included.)
• 1778 map of St. Vincent, surveyed in 1773, G. L. Le Rouge (after Jefferys), Revolutionary War Era
• Fine Art Premium Giclée (Gouttelette) Print (100% cotton 340 gsm fine art paper)
• Made in USA
- Around a third of all food produced is lost or wasted.
- Food security depends on the climate - which is changing.
- In West Africa, crop yields are projected to fall by 20 to 40%, and potentially more.
Food is fundamental to the efforts to tackle climate change, according to a scientist who has spent decades tracing the interactions between global warming and what we eat.
Cynthia Rosenzweig, head of the Climate Impacts Group at NASA Goddard Institute for Space Studies, was Thursday awarded the prestigious World Food Prize for her research.
That includes stark warnings about the potential effect climate change will have on food.
How do food systems drive climate change?
Climate change cannot be restrained without attention to greenhouse gas emissions from food systems. Our work, among others, shows that those food system emissions are approximately one third of total human emissions. We're not going to be able to solve climate change unless these are taken into account.
At the same time, food security for all is dependent on the changing climate.
As we move into this crucial decade of action on climate change, food needs to be at the table.
What are the climate impacts on food?
High temperatures in general are detrimental to crops, because they speed them through their growing period, so they have less time to make the grain. So this is a very big downward pressure on yield. Then we have extreme events affecting the critical growth stages, for example, a heatwave happening during pollination in maize. Those extreme events are already increasing in frequency, duration and intensity in many farming regions around the world.
Then of course water is absolutely critical for food production. Climate change is projected to change — and is already changing — the hydrological cycle in many agricultural areas, with increased drought as well as heavier downpours because the warmer air holds more water vapour.
We can already see tremendous impacts of drought in the developed world, for example, in California since the 2000s. In the developing world, there isn't as much breeding for heat and drought tolerance in farming, and there isn't as much work on pests. This tremendously increases the vulnerability of the world's 500 million smallholder farmers.
You founded the Agricultural Model Intercomparison and Improvement Project. What does it do?
There used to be different modelling groups around the world, all working very diligently to develop different crop models. But people would be using different climate scenarios to test climate change impacts -- and the results weren't comparable. So at the heart of AgMIP is improving the rigour of the projections by developing common protocols so that the results from agricultural models can be compared. We do crop modelling, livestock modelling, pest modelling and economic modelling and we always bring in the latest climate scenarios.
Therefore we are able to say in a very clear way: here's the mean of the model results and here's the range of the projections. Then decision makers, both at the global scale but also in individual countries, have the evidence base that they need to respond to climate change effectively.
With the latest climate scenarios, AgMIP's Global Gridded Crop Modelling Team found that the emergence of impacts on some of the agricultural regions around the world is now projected to be felt earlier, to really start biting even in the 2030s. That's really soon.
Some of the key areas with these earlier impacts are parts of the US Midwest, Western Africa and East Asia. In West Africa, crop yields are projected to fall by 20 to 40 percent, and potentially more.
What changes could help cut emissions?
Increasing carbon storage can help to fight climate change. We need to increase efficiency for crop production and reduce food loss and waste -- it's a rough figure but around a third of all food produced is lost or wasted. If we don't waste as much food, we don't have to grow as much food -- thereby reducing greenhouse gas emissions from agricultural production.
In developed countries, there's definitely the potential for dietary choices to make an impact, because animal-based emissions, especially from beef and dairy, are significant. But as we think about consumption, we have to start by saying that all solutions are context specific and they have to take into account equity issues. There are many people in the world who don't have food choices.
Are perceptions changing?
Yes. I interact with so many different groups in all different parts of the food system, from the production side, supply chain side, retail, packaging, everything. There is definitely a movement towards transformation going on in the food system.
Food is the fundamental climate impact sector and connects everyone on the planet to climate change. We need to transform the food system, so that it delivers food security for all, as well as a healthy and sustainable planet.
Comments have been edited slightly.
For 2022-2023, we will offer participating schools access to our 34 book units. Click HERE to see a full list of our novel units. We believe that five book units will provide for a full year’s curriculum consisting of carefully planned daily lessons that are challenging, knowledge rich, and writing-intensive. This year, participating schools may choose to pilot any of the novel units to create their ideal scope and sequence.
We believe that building strong lifelong readers is rooted in a novel-centric curriculum with frequent and varied opportunities for reading and writing. Each curriculum unit is knowledge-driven, building background knowledge in both the short and long term. From a short-term perspective, we infuse our lessons with directly useful knowledge to increase how much students understand of and learn from what they read. From a long-term perspective, we seek to build a strong base of knowledge that increases student understanding of important ideas and concepts that they will encounter in the future. Within each unit, we identify key vocabulary to support plot comprehension and depth of word knowledge, as well as technical vocabulary, like juxtaposition and motif, to support student analysis of literature. We read in three different ways, ensuring the text is at the center of every lesson, with opportunities for regular practice at close reading in small, high-quality doses. We follow reading with developmental, formative, and summative writing prompts, ensuring students get support in sentence craft, idea formation, and well-developed arguments about a text. Each unit contains:
- A unit plan outlining key understandings, background knowledge, summative writing options, future readings, and a pacing calendar with daily objectives and content.
- Lesson plans with daily objectives, teaching techniques with suggested teacher scripts, means of participation and key understandings.
- A knowledge organizer that specifies and organizes background knowledge that is critical to understanding the unit.
- Daily student work packets with the following items:
- Embedded non-fiction. This extensive series of shorter non-fiction texts is designed to inform students about key topics, enriching the primary text with depth and nuance.
- Embellishments or smaller artifacts—pictures, diagrams, mini-texts—that provide background about key references in the text.
- Knowledge-based questioning and knowledge feeding. These questions about the text are often written to reinforce knowledge as much as skills. We often use a question as an opportunity to share knowledge with students and ask them to apply it right away.
- Vocabulary instruction with both explicit and implicit vocabulary words. Each word taught explicitly is paired with an image that expresses it. Teachers receive a folder of these images for display on classroom walls to help support students’ ability to use the words frequently in their discourse.
- Mid-unit and end-of-unit assessments are provided for each novel, consisting of plot and content knowledge, line analysis, and passage analysis.
- Supplementary assignments to support student understanding with summative and creative writing tasks.
This is a great starter activity to get children hooked into writing and learning how to appreciate and be involved in other cultures from across the globe. Students use an iPad app to see a panorama from somewhere in the world and can write whatever they like. Read on to find out more…
Australian Curriculum Links:
- Plan, draft and publish imaginative, informative and persuasive texts containing key information and supporting details for a widening range of audiences, demonstrating increasing control over text structures and language features (ACELY1694)
- Understand how noun groups/phrases and adjective groups/phrases can be expanded in a variety of ways to provide a fuller description of the person, place, thing or idea (ACELA1508)
- Explain sequences of images in print texts and compare these to the ways hyperlinked digital texts are organised, explaining their effect on viewers’ interpretations (ACELA1511)
- Plan, draft and publish imaginative, informative and persuasive print and multimodal texts, choosing text structures, language features, images and sound appropriate to purpose and audience (ACELY1704)
- Investigate how vocabulary choices, including evaluative language can express shades of meaning, feeling and opinion (ACELA1525)
- Reflect on ideas and opinions about characters, settings and events in literary texts, identifying areas of agreement and difference with others and justifying a point of view (ACELT1620)
- Explain to the children that today for writing we are going to find out some more about other countries and then write whatever we want about that country.
- Model using the iPad app ‘TourWrist’ by pulling up a panorama of somewhere in the world.
- Now model what you could do as a writer about the scene that you have just witnessed. For example, you could be looking at the Eiffel Tower, which may prompt the thought ‘How did I get here?’ Then you begin a mystery. Another example might be the Pyramids of Giza, where you could say something like ‘We were about to begin the next part of our adventure’. The boundaries really don’t exist. It may be as simple as working on adjectives while the children describe what it is that they see.
- Now that the children have the idea of what is required of them, ask them to use TourWrist to get a cultural picture of somewhere in the world.
- Ask them to list down some of the things that they see in their writing books. (This could include landscapes, buildings, significant landmarks and more)
- Once they have an understanding of the country, ask the students to now write about whatever they want using one of the panoramas that they have seen.
- Of course, you would set your own goals and criteria before they go, but it’s not really a necessity as this lesson is about culture and getting kids to write.
- When they have finished their writing (or after a solid block of time), ask all children to come back to the floor.
- Ask them to show their panoramas, tell the rest of the class what they feel life is like in that country and then get them to read what they have written.
- Ask the other students to give a rating out of 10 for how well the student captured the culture of the country.
- Collect work samples for moderated assessment.
- Use success criteria for students to self-assess themselves at the end of the lesson.
- iPads (1 between 2 or 1 each if you have enough)
- TourWrist App available from the iTunes Store
- Writing Books
If you like this lesson, or have an idea to improve it, please consider sharing it on Twitter and Facebook or leave a comment below.
As the snow melts and spring gets closer, wildlife biologists and technicians prepare for another season of prescribed burning. A lot of time and care goes into each plan of a prescribed burn. First they study their records and the sites to see what areas need to be burned and are feasible to burn during the upcoming spring season. Next they create burn plans that lay out firebreaks, points of concern, what weather conditions would create the "perfect" burn conditions, number of people needed, equipment needs, and the list goes on. Once the plan is written, it has to be reviewed and approved. When approved they create the firebreaks, get equipment ready, and train personnel. Then they wait for the conditions to be right.
Why are prescribed burns done on the prairies? Before the European settlers came to the prairies, fire was a natural occurrence. Lightning strikes would occur and burn a couple of acres up to hundreds of acres. Fires were also started by Native Americans to drive game; encourage growth of nuts, berries, seeds, etc., for food; improve pasture for domestic and wild animals; protect villages and camps; and make travel easier. When the settlers arrived they tried hard to work the land, but it wasn't until the invention of the steel plow that the settlers were successful in breaking the sod. With this success came the disappearance of the prairies and the disappearance of the fires that the remaining prairies needed.
Managing prairies with prescribed burns has many benefits. Unburned prairies accumulate a mantle of dead and decaying vegetation. This stifles the growth of the prairie plants and deprives plants of space and light. In a study comparing an area burned in March or April after the snows melt to an area unburned for 25 years, there was a three- to four-fold increase in forbs (flowers) for 1 to 2 growing seasons. Plant diversity after a burn increases for 6 to 7 years and then the stifling growth once again occurs, while annually burned grasslands support only limited plant diversity. This is why burns are commonly done every 3-5 years. The burn also helps release the nutrients in the dead vegetation so they can be used by the new growth. The blackened soil heats up fast by absorbing solar energy, thus stimulating speedy seed germination, sprouting and growth. Burning also helps in controlling shrubs that are invading the prairie. Without burning, many of the prairies would eventually turn into forests.
Prairies are an important part of the ecosystem. Prairies have an abundance of plants, insects, birds, mammals, and reptiles, many of these found only in prairies. Warm season grass prairies provide excellent cover for wildlife because they hold up better under severe winds, snow, etc. The grasses and forbs also provide seed and nectar for insects, butterflies, birds, and small mammals. But to get these benefits, fire must occur to restore the prairie.
Cleaning the stream water is vitally important to the fish and wildlife found throughout the preserve. After the water leaves the preserve, it is vital to those that live downstream.
Coffee Creek Watershed Preserve
Northeast corner of I-80/90 and IN 49
2401 Village Point, Chesterton, IN 46304
178 E Sidewalk Rd, Chesterton, IN 46304
Coffee Creek Watershed Conservancy, Inc
(the non-profit organization)
PO Box 802
Chesterton, IN 46304
Thermal expansion of the Earth's crust necessitates geoengineering
Could global warming cause thermal expansion of the Earth’s crust? In areas prone to seismic activity, this could cause submarine earthquakes and landslides. Such shockwaves could in turn disturb methane hydrates that are already more vulnerable due to temperature rises of the ocean. When large amounts of methane are abruptly released in shallow waters, there’s little opportunity for oxidation as the methane rises in the water, so the methane can enter the atmosphere largely unchanged, while hydroxyl depletion can extend methane’s lifetime in the atmosphere. Abrupt releases of large amounts of methane would dramatically accelerate local warming, necessitating geo-engineering.
Note: This page was originally posted at knol in 2011 and, when knol was discontinued, preserved here for archival purposes.
Earthquakes appear to be on the increase. The image below pictures the total strength of earthquakes with a magnitude of 6 or higher.
Thermal expansion of the Earth's crust due to global warming can disturb submarine methane hydrates, which can result in the release of huge amounts of methane. While in many cases the methane will be broken down in the water before it reaches the surface, in the shallows of the Arctic such releases could quickly enter the atmosphere, resulting in Runaway Warming.
Are there indications that large methane releases are already taking place in the Arctic? If so, they would cause significant increases in temperature. One way to check this is by looking at temperature anomalies.
Temperature Anomalies in the Arctic
The following image shows November 2010 temperature anomalies against base reference of 1951-1980.
The above image is a screenshot from the NASA GISS November 2010 data.
The image below is a screenshot from the underlying text file:
The above image shows that there was a temperature anomaly of 12.4592 degrees Celsius (against 1951-1980) for the following locations:
___November_2010_____________ L-OTI (°C) Anomaly vs 1951-1980

   i    j      lon     lat    array(i,j)
  51   81   -79.00   71.00      12.4592
  52   81   -77.00   71.00      12.4592
  53   81   -75.00   71.00      12.4592
  54   81   -73.00   71.00      12.4592
To find out where exactly these locations are, the longitude and latitude data are entered into Google Maps, as shown on the screenshot below for latitude 71.00 and longitude -79.00, which point to Baffin Island in Canada.
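To repeat this lookup without typing coordinates by hand, a minimal Python sketch (not part of the original post; the rows are transcribed from the listing above) turns each grid point into a Google Maps link:

```python
# Rows transcribed from the GISS anomaly listing above: (i, j, lon, lat, anomaly).
rows = [
    (51, 81, -79.00, 71.00, 12.4592),
    (52, 81, -77.00, 71.00, 12.4592),
    (53, 81, -75.00, 71.00, 12.4592),
    (54, 81, -73.00, 71.00, 12.4592),
]

for i, j, lon, lat, anomaly in rows:
    # Google Maps accepts "?q=lat,lon" and drops a marker at that coordinate.
    print(f"{anomaly:+.4f} °C at https://www.google.com/maps?q={lat},{lon}")
```

The first link resolves to Baffin Island, as described above.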
Note that there are a lot of missing data in the Arctic, as shown on the map at the top. Furthermore, the data are monthly data. For specific moments and specific locations, the anomaly can be even more striking. As an example, on January 6, 2011, the temperature in Coral Harbour, located at the northwest corner of Hudson Bay in the province of Nunavut, Canada, was 30°C (54°F) above average.
The question is: How can such large temperature anomalies be explained?
Furthermore, as the sea ice expands at the end of the summer season, a hole in the ice appears to form around an area in the Laptev Sea. It appears there is an area around which the sea ice forms later than the surrounding areas. Why is this the case?
Could it be that methane is bubbling up in this area from the seabed?
Katey Walter Anthony once observed that methane hotspots in lake beds can emit so much methane that the convection caused by bubbling can prevent all but a thin skin of ice forming above, leaving brittle openings the size of manhole covers even when the air temperature reaches -50 degrees C in the dark Siberian winter.
This hole can be observed in October 2010 (animation top left) and end-October to early-November 2011 (animation bottom left), based on University of Illinois data.
This hole in the sea ice appears to be another anomaly, raising further questions as to why these anomalies occur in the Arctic.
Possible causes of these anomalies in the Arctic
1. Nitrous oxide?
These anomalies cannot be explained through nitrous oxide emissions (even though they do occur increasingly in wetlands in the ESAS), since nitrous oxide is mostly released closer to the equator.
2. Albedo change?
Could soot and melting ice that combine to cause albedo change perhaps be responsible for all this heat? Shindell, in a 2009 presentation, puts the RF of soot (black carbon) at 0.44 watts per square meter.
Albedo change is expected to add merely 0.3 watts per square meter over the entire land and water surface of the planet, says Stephen Hudson of the Norwegian Polar Institute, who bases his calculation on the Arctic having no ice for one month and decreased ice at all other times of the year.
3. Carbon dioxide?
Most carbon dioxide does not originate in the Arctic, so this cannot explain the extreme temperature anomalies in the Arctic. While high levels of carbon dioxide do occur in the Arctic, the high initial Global Warming Potential (GWP) of methane makes it a more likely candidate.
4. Warm river water?
Could it be that these temperature anomalies and this hole in the sea ice are all caused by warmer river water entering the Arctic? It could be, yet if that were the case, why then did this hole not form closer to the coast?
5. Methane?
Methane's RF compares to 1.6 watts per square meter for CO2, as well as 1.6 watts per square meter for total net anthropogenic forcing. Given its high initial GWP, methane could (at least partly) explain these Arctic anomalies.
What type of methane is most likely to be behind these Arctic anomalies? Much methane originates from cows, rice fields, etc. In such cases, it does not originate in the Arctic, so this type of methane cannot explain anomalies in the Arctic as on the image right. Similarly, methane from wetlands and from transport of and drilling for natural gas would predominantly cause anomalies on land, rather than over the Arctic Ocean.
Similarly, growth in concentrations of phytoplankton can hardly explain the prominence of methane in the Arctic. As the image below shows, concentrations of chlorophyll in the Atlantic Ocean in between Europe and Canada are similar to concentrations in open Arctic waters, yet methane on the image right is most prominent over the Arctic Ocean.
Furthermore, phytoplankton is an unlikely candidate to explain the methane showing up on the image further below by Shakhova and Semiletov, since phytoplankton would be much more likely to spread out over a wide area, instead of being so highly concentrated in a single point.
Methane is already more prominent in the Arctic
There are multiple lines of evidence indicating that methane is present in higher concentrations in the Arctic than elsewhere. The images below are produced from the Giovanni NASA AIRS database at http://daac.gsfc.nasa.gov/giovanni/, showing 2006-2009 annual mean upper-troposphere (359 hPa) measurements; from Giovanni NASA GES DISC, showing surface measurements averaged over the period March 3, 2010, to December 27, 2010; and from satellite measurements of methane at the north pole in April 2009.
The NOAA image below further confirms that, the higher the latitude, the more prominent the presence of methane.
The HIPPO image left, based on data collected by airplane, also indicates that more methane is present in the Arctic than anywhere else on Earth, particularly at surface levels.
This suggests that methane is bubbling up, likely originating from submarine hydrates and is spreading over the globe as it reaches the stratosphere.
In conclusion, in the absence of alternative causes, methane from submarine hydrates is the most likely cause of temperature anomalies in the Arctic.
As said, the fact that methane is already bubbling up from submarine sediments is further supported by the image below, from Shakhova, N., and Semiletov, I., Methane release from the East Siberian Arctic Shelf and the Potential for Abrupt Climate Change, Presentation, November 30, 2010.
This post starts with the suggestion that global warming causes thermal expansion of the Earth's crust, which in turn causes seismic activity that can disturb methane hydrates and trigger large abrupt releases of methane in the Arctic.
In the post Methane linked to seismic activity in the Arctic, methane emission points have been pinpointed with temperature as indicator, using the method described at the top of this page. As that post shows, the emission points thus found can be further compared with locations of seismic activity. A close match supports the suggestion that global warming is causing thermal expansion of the oceanic crust, putting stress on areas where tectonic plates meet, notably Gakkel Ridge in the Arctic. Seismic activity there in turn disturbs methane hydrates that are already more vulnerable due to temperature rises of the ocean.
By subtracting the RF for all other factors causing the temperature anomalies, the effect of the methane emitted locally would remain. Applying the RF impact of methane (including its indirect effects) to that residual value would result in a value for the amount of methane being emitted within such an area.
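To make that arithmetic concrete, here is a hedged, back-of-the-envelope sketch. It uses the widely cited simplified expression for methane's direct forcing from Myhre et al. (1998), ΔF ≈ 0.036 × (√M − √M₀) W/m² with M in ppb (ignoring the small N₂O overlap term), and the residual forcing below is a made-up placeholder, not a value derived on this page:

```python
import math

M0_PPB = 722.0  # assumed pre-industrial CH4 concentration, in ppb

def methane_forcing_wm2(m_ppb, m0_ppb=M0_PPB):
    """Simplified direct radiative forcing of CH4 (Myhre et al., 1998),
    ignoring the small N2O overlap term."""
    return 0.036 * (math.sqrt(m_ppb) - math.sqrt(m0_ppb))

def implied_methane_ppb(residual_wm2, m0_ppb=M0_PPB):
    """Invert the expression: the CH4 concentration that would account for
    the residual forcing left after subtracting all other known factors."""
    return (residual_wm2 / 0.036 + math.sqrt(m0_ppb)) ** 2

residual = 0.5  # W/m^2 left unexplained over some area -- placeholder only
print(f"Implied local CH4: {implied_methane_ppb(residual):.0f} ppb")
```

Indirect effects such as hydroxyl depletion would scale the answer further, so a calculation like this only bounds the order of magnitude.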
In a way, this suggestion does not hinge on findings of large releases of methane in the Arctic. Even if little or no methane was bubbling up, the danger would remain that this will occur in future. Furthermore, we're not looking for average temperature rises, such as in the case of gradual melting of hydrates in permafrost. Instead, we're looking for stress peaks that can trigger seismic activity that can in turn cause large abrupt releases of methane from collapsing hydrates. When conditions such as the ones below coincide, this danger can increase significantly.
Deep oceans can warm by 18% to 19% more during a period corresponding with a La Niña event. http://www.nsf.gov/news/news_summ.jsp?cntn_id=121699 Such a warming peak deep in the ocean could put enough extra stress on these areas to trigger submarine seismic activity that in turn disturbs methane hydrates.
Global warming heats up the Arctic Ocean. The shallows of the East Siberian Arctic Shelf (ESAS) will warm up fast in summer, when the sun hardly sets and now that the sea ice there has disappeared in summer.
In the shallows of the ESAS, methane will have little opportunity to oxidize in the water, instead entering into the atmosphere rapidly after release.
The Great Ocean Conveyor Belt brings warm water into the Arctic Ocean. As oceans heat up, warmer water will be moving into the Arctic, i.e. even warmer than before. Some of the heat may also translate into kinetic energy, speeding up ocean currents such as the Great Ocean Conveyor Belt, and thus pushing warmer water even more strongly into the Arctic Ocean.
The sun, moon and planets under certain alignments could exercise extra gravitational pull, exacerbating the situation. A team from National Central University in Taiwan found that changes in tides can cause methane to be released into the water and atmosphere from clathrates. Changes in pressure due to the tides are more prominent in shallow water, making this a likely factor in the shallows of the Arctic. See also the study Reduction of Arctic Sea-Ice Amplifies Tides, at: http://www.iarc.uaf.edu/research/highlights/2011/sea-ice-tide-amplify
Such conditions threaten to cause additional methane to be released in the Arctic Ocean, in turn causing additional local warming that accelerates local warming, in a vicious cycle that threatens to develop into runaway global warming.
The Necessity of Geo-engineering
If further confirmed, the above line of evidence shows an unacceptable risk that methane releases can cause runaway global warming within decades. Even radically and abruptly stopping all man-made emissions of greenhouse gases will not suffice to take away this risk, since previous emissions have already set Earth on a path of several degrees of warming that is still in the pipeline.
Moreover, human activities are currently causing the release of huge amounts of aerosols into the atmosphere, partly masking the full impact of global warming. Therefore, cleaning up our act will have to be combined with the deliberate release of some aerosols and/or cloud brightening, for this masking effect to continue.
Furthermore, afforestation, combined with adding soil supplements containing biochar and olivine sand, could bring carbon dioxide levels in the atmosphere back to 280 ppm within a century, while olivine dust could also reduce ocean acidification.
Geo-engineering is an essential part of the solution, and the best of these technologies should be prepared, improved, implemented and further embraced as part of the necessary shift towards a sustainable economy.
1998— The MELT Experiment was the largest seafloor geophysical experiment ever attempted, and one of its major components was MT, the magnetotelluric technique. MT offers a valuable tool toward the MELT Experiment’s goal of probing the earth’s inaccessible deep interior. But the technique remains something of a mystery even to many marine scientists. It has been used widely on land, particularly for regional-scale surveys, but only a few full-scale MT surveys have been carried out on the seafloor.
The primary data collected by marine MT experiments are measurements of changes in the earth’s electrical and magnetic fields at the seafloor. These fields are affected by electromagnetic currents within the earth, and here’s where MT’s apparent complexity starts—because the source of these currents is not within the earth, but rather in the ionosphere.
Charged particles, emitted from the sun as a solar wind, become trapped in the ionosphere by the earth’s magnetic field. These moving charges essentially create a variety of electric currents encircling the earth. If the earth were a perfect insulator, like space, that would be the end of the story. But the earth can conduct electricity. As these ionospheric currents flow around the earth, they generate a response within the planet itself. More specifically, the pattern of ionospheric currents induces almost a mirror-image pattern of currents within the earth.
These so-called “induced image currents” cause changes in the earth’s electric and magnetic fields. These changes depend on the conductivity of the earth’s interior, which, in turn, is determined by the composition and structure of the materials that constitute our planet’s interior. Thus, by measuring changes in Earth’s electric and magnetic fields at the surface, we can effectively deduce its electrical conductivity and reveal its interior structure. As CAT scans reveal images and frameworks that enable us to learn about the workings of the human body, MT experiments similarly provide essential cutaway views that allow us to learn about processes taking place within our planet.
Like standard alternating currents in most households, which have a frequency of 60 Hertz, or one cycle per 1/60 of a second, induced image currents also alternate—though they do so over a wide range of frequencies. The variations, or frequencies, we use in seafloor MT range from periods of about 100 seconds to several hours. These variations are caused by the chaotic nature of the events that entrap ions from the solar wind, as well as by more regular events, such as the earth’s daily orbit around the sun. The important point is that different frequencies penetrate the earth to different depths. If induced image currents came in only one flavor, we would be able to image the earth’s interior at only one depth. As it is, higher-frequency currents (with one cycle per 100 seconds, for example) don’t penetrate deeply and can tell us about structure 10 to 15 kilometers deep; the lowest-frequency currents (with one cycle per several hours) can tell us about depths of several hundred kilometers.
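The article does not spell out the formula, but the frequency-to-depth relationship it describes is the standard electromagnetic skin depth. A short sketch (assuming a uniform resistivity of 10 ohm-m, a value chosen purely for illustration) reproduces the depth ranges quoted above:

```python
import math

MU0 = 4 * math.pi * 1e-7  # magnetic permeability of free space, H/m

def skin_depth_km(resistivity_ohm_m, period_s):
    """Depth at which an oscillating field decays by 1/e:
    delta = sqrt(2 * rho / (mu0 * omega)), with omega = 2 * pi / T."""
    omega = 2 * math.pi / period_s
    return math.sqrt(2 * resistivity_ohm_m / (MU0 * omega)) / 1e3

for period_s in (100.0, 3600.0, 3 * 3600.0):  # 100 s up to several hours
    print(f"T = {period_s:7.0f} s -> ~{skin_depth_km(10.0, period_s):4.0f} km")
```

At 10 ohm-m, a 100-second period penetrates roughly 16 km, while a three-hour period reaches well over 150 km; higher resistivities push these depths deeper still.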
The goal of the MELT Experiment was to map basaltic melt, from its source within the mantle to the base of the oceanic crust at the mid-ocean ridge crest. While the earth can conduct electrical currents, most rocks, including those comprising the mantle, do not conduct particularly well. This situation changes considerably when melt is present: Pure basaltic melt is several orders of magnitude more conductive than olivine, a common mantle mineral. In the mantle melting column, we do not expect to see pure melt, nor anything like it, but rather some distribution of streams and pools of liquid melt within a matrix of solid mantle rocks. In this case, how the melt is distributed is important. It is possible to think of the melt as a network of wires that connect parts of the mantle. If the melt forms a well-connected network through the rock, electric currents can flow and the mantle will be electrically conductive. Of course, reality is more complicated and other factors, such as water dissolved in the mantle rock, can affect conductivity. These other factors are also important for understanding the whole process of melt production.
The MT component of the MELT Experiment was a truly multinational effort involving more than a dozen scientists from Woods Hole Oceanographic Institution and Scripps Institution of Oceanography in the US, and from France, Japan, and Australia. Each group contributed instruments to the array and played a role in the data analysis. From June 1996 to June 1997, 47 instruments were deployed at 32 seafloor sites to measure the time variations of the electric and magnetic fields. Two lines were set out. The main southern line had 19 sites and crossed a magma-rich segment of the East Pacific Rise ridge crest, extending 200 kilometers on either side of the crest. The second line of 13 sites crossed the ridge to the north on a magma-starved ridge segment, extending 100 kilometers on either side of the axis.
Each group’s instruments essentially did the same thing: measure changes in the electric and magnetic fields at the seafloor. But each group accomplished this in slightly different ways, deploying very different-looking instruments. As in all marine experiments, the environment makes seafloor MT measurements more difficult to make, but in one way nature helps us. The ocean is electrically very conductive and acts as a screen against electromagnetic noise—extraneous signals from other sources that would confuse interpretation of the data. On land, power lines, for example, can be a nuisance. The seafloor, however, is electrically quiet, making it possible to measure very small electric field variations. The other part of the MT signal is the seafloor magnetic field—not the steady field trapped in lavas and used to identify magnetic reversals, but the magnetic field variations linked to ionospheric currents.
To a first order, the ratio of the electric to the magnetic field at the earth’s surface is a direct measure of the earth’s electrical conductivity. We calculate this ratio for a range of current frequencies using modern processing techniques. To produce a model of the earth, data from all instruments have to be examined through a process of numerical inversion. The interaction of induced currents in the earth with the conductive bodies we hope to image (such as the melt column) affects the electric and magnetic fields over a wide region of seafloor. Generally, it is not possible to look at data from a single instrument and interpret the underlying structure. Instead, we have to use computer modeling to predict the fields that the mantle would create and compare these answers to data from all the instruments. The model is updated to improve the agreement and the process is repeated until a satisfactory model is found. There are many pitfalls involved in this process, as well as different ways of carrying it out. The groups involved in the MELT Experiment have been using a variety of methods over the past few months, and we are in the process of comparing results and discussing their implications.
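As a schematic of both steps (turning field ratios into apparent resistivity, then iteratively updating a model), here is a toy Python sketch. It is emphatically not the MELT processing code: real MT inversion uses adjoint sensitivities, regularization and careful step control, and every name below is illustrative:

```python
import math
import numpy as np

MU0 = 4 * math.pi * 1e-7  # magnetic permeability of free space, H/m

def apparent_resistivity(e_field, h_field, omega):
    """First-order MT estimate: rho_a = |E/H|^2 / (mu0 * omega)."""
    z = e_field / h_field  # the MT impedance
    return abs(z) ** 2 / (MU0 * omega)

def jacobian(forward, model, eps=1e-6):
    """Crude finite-difference sensitivity of predicted data to the model."""
    base = forward(model)
    J = np.zeros((base.size, model.size))
    for i in range(model.size):
        perturbed = model.copy()
        perturbed[i] += eps
        J[:, i] = (forward(perturbed) - base) / eps
    return J

def invert(data, forward, model0, n_iter=20, tol=1e-3):
    """Gauss-Newton-style loop: predict, compare with data, update, repeat."""
    model = model0.copy()
    for _ in range(n_iter):
        residual = data - forward(model)
        if np.sqrt(np.mean(residual ** 2)) < tol:
            break  # predictions agree with observations well enough
        step, *_ = np.linalg.lstsq(jacobian(forward, model), residual, rcond=None)
        model = model + step
    return model
```

Any callable `forward(model)` mapping a conductivity model to predicted responses can be plugged in; the loop mirrors the predict-compare-update cycle described in the text.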
The MT analyses are still in their early stages, but some first-order results are beginning to come through. The MT data show an asymmetrical distribution of melt between the areas west and east of the ridge crest, with a more extensive region to the west. The melt column also appears to be a broader feature, with a low percentage of melt in it, rather than a narrow vertical column of melt directly beneath the ridge. This indicates a more passive flow of mantle toward the ridge crest. Deeper, we see some evidence for a conductive mantle at depths greater than 150 kilometers. If this proves to be true, it could be evidence for deeper melting—deeper than the part of the mantle generally believed to be responsible for most melt generation. However, in the final analysis, water dissolved in the mantle rock may prove an important factor in mantle conductivity at this depth.
Funding for the MELT Experiment was provided by the National Science Foundation through the RIDGE Program. The many people involved in the MT component of MELT include: Alan Chave, Bob Petitt and John Bailey (WHOI), Jean Filloux and Helmut Moeller (SIO), Pascal Tarits (Université de Bretagne Occidentale), Martyn Unsworth and John Booker (University of Washington), Graham Heinson and Anthony White (Flinders University, South Australia), and Hiroaki Toh, Nobukazu Seama and Hisashi Utada (University of Tokyo).
The conductivity of the mantle beneath the East Pacific Rise is depicted in this example of an inversion result from magnetotelluric technique (MT) data collected during the MELT Experiment. Warm colors (white, yellow, orange, and red) represent increased conductivity (lower resistivity). Cold colors (green, blue, black) represent lower conductivity (higher resistivity). The upper 50 kilometers of the mantle appear to be more conductive beneath the Pacific Plate, west of the ridge, than on the Nazca Plate, east of the ridge. The region of high conductivity, extending about 80 kilometers west of the ridge crest and 140 to 190 kilometers deep, suggests deep melting processes affected by the presence of water, or it may simply reflect the effect of water on the mantle resistivity itself.
Rob Evans (left) and Helmut Moeller of Scripps deploy a WHOI magnetometer from R/V Thompson (University of Washington).
The human brain is composed of approximately 100 billion neurons, each with distinct functions. One of those functions is face perception—the ability to recognize what someone looks like and recall who they are. The ability to recognize faces persists for years, even though the brain is also plastic, constantly rewiring its connections. This raises a question: do the neurons responsible for face perception change due to this constant rewiring?
A team of scientists from the National Institutes of Health may have an answer. They located the brain regions where face-specific neurons resided by recording activity using functional MRI (fMRI) while showing macaque monkeys images with and without faces on them. Microwire electrode arrays (tens of tiny electrodes) were then implanted into these brain regions to provide detailed records of the activity of neurons.
Initial recordings were made after implantation to obtain a baseline record of how the neurons responded to different images. This helped to isolate neurons that were responsive to faces (and to validate that the implanted electrodes actually worked). Over the course of a few weeks, the scientists then showed the monkeys a library of images—some with faces and some without—and investigated the consistency of neural responses over time.
In their analysis, the scientists used waveform shape, frequency (the number of spikes occurring within a second), and consistency to assess changes in the response of the face-specific neurons.
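As a rough illustration (with hypothetical helper names; this is not the authors' analysis pipeline), the first two metrics might be computed like this:

```python
import numpy as np

def firing_rate_hz(spike_times_s, window_s):
    """Frequency metric: number of spikes per second in a response window."""
    return len(spike_times_s) / window_s

def waveform_similarity(mean_wf_a, mean_wf_b):
    """Waveform-shape metric: Pearson correlation of mean spike waveforms
    from two sessions; values near 1 suggest the same unit was tracked."""
    return float(np.corrcoef(mean_wf_a, mean_wf_b)[0, 1])

# Toy usage with made-up numbers:
rate = firing_rate_hz(spike_times_s=np.arange(0.0, 1.0, 0.05), window_s=1.0)
sim = waveform_similarity(np.sin(np.linspace(0, 3, 48)),
                          np.sin(np.linspace(0, 3, 48)) + 0.01)
print(f"rate = {rate:.0f} Hz, waveform similarity = {sim:.3f}")
```

Consistency over time would then amount to tracking these values across recording sessions.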
The results showed that face-specific neurons respond to images with faces in the same way over long periods of time—in one case, recordings went for a year. In addition, the same neurons repeatedly respond to the same faces, suggesting a sort of association or mapping: certain faces always triggered certain neurons. This suggests that the brain's ongoing rewiring process doesn't significantly affect them, and thus may not have a large impact on recognition of faces.
History of the technique of painting
A painting has three constituent materials: the pigments, the support, and the binder, which holds the pigments to the support. The painter's craft or technique consists in the correct combination and manipulation of these three elements. "Technique" in reference to painting is often confused with style, which is the personal use of the technique, or the painter's manner. This is a psychological, artistic phenomenon, as little to be taught as a style in drawing, writing poetry, or composing music. The craft alone can be taught, and this craft is called, for the painter, painting technique.
The problems which still today beset painting technique arise from uncertainties which can be explained partly by the imperfection of the materials, and partly by the historical development of the technique. From the beginning, painters have used materials which cannot be completely understood without a thorough grounding in chemistry and related sciences.
In the old days this scientific knowledge did not exist, and even now it exists only among specialists who rarely have anything to do with the practice of the art of painting. Painters of earlier centuries used to be forever writing essays to expound their theories and problems concerning their material, unless they guarded their empirically found knowledge as a secret. An air of secrecy still surrounds painting technique, although it has been investigated scientifically since the end of the last century.
These investigations were for a long time centered around the world-famous Doerner Institute in Munich. It was named after a painter, Max Doerner, whose experimental work first made the problems of the painter known to scientists. After hundreds and thousands of years of uncertainty these unresolved questions could be answered with exactitude for the first time.
All this happened at a time when painting technique had reached its lowest depths. This had come about in the following way: in earlier centuries the apprentices in a master's studio were concerned solely with learning the craft, the only subject that can be taught. They learned to produce the colors from the natural or, sometimes even then, synthetic raw materials, to prepare the appropriate grounds, and to employ the methods needed to achieve durable and vivid results. Thus, they learned intimately the good and bad qualities of their materials. They learned also how to use these materials to carry out the ideas of their masters and of the period, and the most gifted developed new artistic resources from their thorough grounding in the material side of the art.
Towards the end of the eighteenth century, schools, or art academies, began to replace private studios, and at the same time the industrial production of ready-prepared painting materials increased rapidly. The production was not based on any systematic study, but on more or less uncritically adopted recipes, adapted to meet the widest requirements possible. Large sales were now the prime consideration; the quality deteriorated in consequence, and with it technique as well.
This situation is now quite changed. The research we referred to has resulted in the supply of excellent ready-prepared painting materials. It is nevertheless essential to be able to choose from among all the materials available, for there are still some very dubious products on the market.
The demand from technically uneducated painters compels the industry to produce goods of inferior quality. The aim of the following pages is to explain enough about the character of the individual materials to enable the reader to choose them wisely and make appropriate use of them.
There is as yet no universal terminology for painting materials, due to an unfortunate gulf between scientists, who like to be systematic, and painters, who resist the new unmellifluous words, in spite of their exactitude.
Color, both generally and professionally, means two things: the phenomenon of color, and the coloring material ready-prepared with its binder. Pigment means any colored material before it is in a condition to be used for painting. Painters do not react favorably to any attempts to discipline their vocabulary. However, the correct chemical names for different colors are coming into general use, instead of incomplete technical terms or names derived from outdated origins of the colors. For this reason the chemical and technically correct terms are always given first place in this book. The technically correct name is often helpful in indicating the correct use of the material.
Although we shall go more thoroughly into the properties of the different materials used when we discuss the various techniques of painting, it should be noted at once that the type of binder used always prefixes the reference to prepared paints: "oil color" means coloring material mixed with an oil binder, "watercolor" means that the thick glue binder is to be thinned with water. Thus, even here these terms are not quite consistent, although sufficiently clear for use by those concerned with the techniques.
Possible vocabulary words are everywhere. We are bombarded with endless possibilities of words to teach our students. The words come from mentor texts, independent novels, word lists, district mandated words, academic words, reading programs, vocabulary programs…the list goes on and teachers feel the pressure.
In New Jersey, a Common Core state, teachers are working their hardest to meet the increased expectations of national standards. The Common Core State Standards (CCSS) put heavy emphasis on vocabulary by making it an anchor standard at all levels, K-12.
College and Career Readiness Anchor Standard for Language
Acquire and use accurately a range of general academic and domain-specific words and phrases sufficient for reading, writing, speaking, and listening at the college and career readiness level; demonstrate independence in gathering vocabulary knowledge when encountering an unknown term important to comprehension or expression.
I do not disagree with the CCSS attention toward vocabulary. I understand that a greater vocabulary makes more complex texts accessible to students and, in turn, increases their reading levels. And we are all familiar with the correlation between reading, vocabulary, and test scores, made clear with this popular infographic.
But for all its gusto, what the CCSS does not do is tell us what the appropriate academic and domain-specific words are for each grade level. It is left to the discretion of teachers and/or curriculum planners to determine words appropriate for individual classes or grades, and this is a daunting task when faced with the wide variety of sources out there. In all the hustle and bustle of daily classroom instruction, it is this sort of huge task that can easily fall by the wayside. So, this post offers suggestions to the first burning question about vocabulary instruction:
“Where do I get the vocabulary words to teach my students?”
A firm belief I have is that prescribed word lists or programs are not the answer.
Keeping the belief of the workshop model in mind, vocabulary words taught in the classroom should be as authentic as possible. Students need to see these words coming from themselves and their experiences. The relationship between ownership, motivation, and engagement is not a secret. When students are invested in the words they are learning, vocabulary instruction will be more meaningful and you will have a very clear answer to the “Why are we learning this?” question.
My suggestion is to pull words from the following three places.
1. Your current mentor text
These would be completely teacher-selected words chosen from the mentor text that you are currently using to support your teaching. These are the words that you would have the most control over.
2. Student’s independent reading novels
Since choice is a large part of the workshop model, it is important that students have the opportunity to provide input on the words they want to learn. By allowing students to choose the words they will learn, you are tapping into their sense of ownership.
Greg Feezell, in his article “Robust Vocabulary Instruction in a Readers’ Workshop” featured in The Reading Teacher (Vol. 66, Issue 3, 2012), suggests encouraging (as opposed to requiring) students to submit words to a “Word Box”. These might be words students find interesting or words that they wish to understand better. They can be chosen from texts read during independent reading time, at home, or in other subjects. Using a submission process allows you, as the teacher, to have final veto power over which words are chosen for instruction, a valuable aspect of this system.
3. Student’s writing
Have you ever noticed how when you spotlight a single student, the level of the entire class is lifted? We do this for good behavior. “Johnny, I like how you are sitting up in your chair, ready to learn.” Suddenly, the whole class grows taller as they work to emulate Johnny and seek our approval and praise. Kids want to be their best selves, but sometimes they need to be reminded of just how great they can be. This method works to tap into that phenomenon.
While working with Teachers College Reading and Writing staff developer, Emily Strang-Campbell, in some of our district’s 4th-8th grade classrooms, a technique that spotlighted student vocabulary use in their writing was introduced. During independent writing time, when students were writing fast and furious, Emily walked the room to note students who had used strong vocabulary in their writing. During the mid-workshop interruption she gave selected students a shout-out compliment and started a list of their words on the board. As the students went back to work, Emily suggested that everyone push themselves and try to use one of the strong words listed, or any other strong word they know, in their writing during the final stretch of independent writing time that day. You would have been amazed at what the students accomplished! The power of a compliment.
We can tap into this and choose vocabulary words for instruction from the words that were spotlighted from student writing. Imagine the power these words would have for students, coming from the pens of their own classmates!
These sources give you valuable, authentic vocabulary words that students will be invested in. The next step of the process is choosing which specific words to teach from these sources. My second post of this series will offer suggestions to do just that!
Are there are any other authentic places where you pull vocabulary words from for instruction? I would love to hear from you!
Let’s keep the conversation going-
6 thoughts on “The Burning Vocabulary Question Series: Where do vocabulary words come from?”
Great point on vocabulary…choice and authenticity make much sense!
Thank you! Student engagement should always be a focus in the work we do. I think guidelines on how to make student choice a possibility are extremely helpful in the real-world of daily classroom life. We want to stay accountable and rigorous. More on that is to come. 🙂
Sounds like a district goal…along with classroom libraries…
I love the idea of the mid teaching workshop point to shout-out high vocabulary. Did she give a quick definition of the words so students could try to use them or was it more organic and discovery-oriented where they just tried?
Thank you for visiting my blog. The students read the sentence the word was in, so the class heard the word in context. Emily did not do any explicit teaching around the word at that moment. My thought for using these words for vocabulary instruction would be to follow this “shout out” with formal instruction of the word next (for those words chosen by the teacher). I’ll be writing about types of vocabulary instruction as part of this series as well. Check back! 🙂
At Willowbrook School we have worked hard to develop a reading strategy that:
- Is underpinned by robust research;
- Supports pupils' success in other aspects of English, such as Spelling and Writing;
- Enables us to be forensic in identifying areas of weakness in a child's reading 'journey', so that we can respond in a timely and effective way.
Our Reading strategy is based heavily on the evidence-informed book, "The Art and Science of Primary Teaching" by Christopher Such. The book describes reading as "the product of decoding and language comprehension, where weaknesses in either will lead to difficulties with reading comprehension." Essentially, for a child to be able to comprehend what they are reading, they need to be able to read fluently, have a broad vocabulary and a deep knowledge of the world. For this reason, our reading strategy incorporates all three of these elements:
1. Developing fluency
Our reading strategy has a heavy emphasis on supporting children to become confident, fluent readers. This means beginning with a robust, systematic phonics programme, and we use Sounds-Write for this. Alongside continued phonics teaching in Y2 (and beyond), children participate in daily whole-class reading sessions, where children engage in 'extended reading' (prolonged engagement with longer extracts of texts). Through this, children improve their accuracy, pace and prosody - the three components of fluent reading.
2. Improving vocabulary
A wide range of challenging vocabulary is embedded in our curriculum, and teachers also explore new words at every opportunity during the normal school day. We also include some discrete vocabulary instruction in pupils' KS2 reading 'diet'. This involves children engaging with 'tier two words' or Latin/Greek root words. In these sessions, children deepen their knowledge of the English language and are given lots of opportunities to revisit/retrieve knowledge of previously learned vocabulary.
3. Knowledge of the world
When a child tries to comprehend what they are reading, they draw on their own background knowledge to try to make sense of the context and the language being used. Creating this mental representation of the situation being described is called a 'situation model'. With this in mind, we think it is vital that our curriculum is knowledge-rich, as this enhances children's schema-development and gives them a deeper knowledge of a wide range of people, places and concepts. We believe that a knowledge-rich curriculum is an essential component of an effective reading strategy.
Other aspects of becoming a confident reader - such as understanding text structures, syntax and metaphors - are developed through shared exploration of quality texts/extracts. This happens during reading instruction activities such as 'Close Reading' and 'Shared Reading'.
Is there a place for generic comprehension 'skills'?
Comprehension skills (or strategies) are best thought of as a set of tricks that are quickly taught and that can help children when completing written comprehension tasks/tests. While we do see some value in teaching these skills, they are not a central component of our reading strategy, but something we increasingly focus on in Upper Key Stage Two. The three components described above are what help to make a truly effective reader. Comprehension skills are not generic/transferable, so teaching them in such a way is unlikely to be very effective.
English for Speakers of Other Languages
ESOL instruction provides English Learners with a language-focused environment to develop the academic language necessary to demonstrate complex thinking and learning across the disciplines. Students learn to recognize and use language specific to the task in the four domains of reading, writing, speaking, and listening through engaging interactions with teachers and peers.
The Georgia Department of Education provides the following guidelines for identifying students who may be identified as an English Language Learner (ELL):
· Students whose native/home/first language is other than English.
· Students who have difficulty speaking, reading, writing or understanding the English language to the extent that their language skills would be a barrier to their success in the classroom.
· The ESOL teacher works in collaboration with classroom and other special program teachers to support the acquisition of language and content skills.
· The ESOL program promotes each student's appreciation of their ability to perform a wide variety of intellectual and physical activities.
· The ESOL program encourages the student's positive recognition of a variety of cultures, recognizing similarities and differences.
For questions or for more information please contact:
ESOL TEACHER: Mrs. Colleen Purdy [email protected]
A common problem writers face is whether to use a comma between two clauses. The solution depends on whether one of the clauses is essential or non-essential. Following are some useful guidelines.
Let’s start by exploring the difference between essential and non-essential clauses.
An essential clause contains words that modify the main thought of a sentence in a way that’s important to the main thought. Without the essential clause, the main thought would be unclear or incomplete. An essential clause is never set off with a comma or commas. Essential clauses usually begin with a subordinating conjunction (see below). Examples:
John stayed home from school because he was sick and contagious. Being “sick and contagious” is essential to the reason John stayed home, as opposed to any other possible reason for John staying home.
The woman dressed all in black was either a widow or an interesting dresser. The information about the woman would not likely be true of any other woman in view, so it’s essential.
A non-essential clause contains information that’s not necessary to complete the main thought. Commas are inserted in order to separate it from the rest of the sentence. Non-essential clauses also usually begin with a subordinating conjunction (see below). Examples:
John stayed home from school today, even though he wasn’t really sick. The main thought “John stayed home from school today” would be true, without any further explanation. It’s a non-essential clause, so it gets a comma inserted.
Jane, who dyes her hair, is my best friend. Jane is my best friend, regardless of whether she dyes her hair or not. So that clause is non-essential and is set off by commas.
A subordinating conjunction is a connecting word or phrase that joins a subordinate clause to a main clause. The subordinate clause is used to redefine or modify the main point of the sentence. Subordinate clauses may come first in a sentence, but they’re still subordinate. Why? Without the main point, they can’t stand alone.
Most subordinating conjunctions consist of a single word (see list below). However, some of them may contain more than one word (again, see list below). There are several subordinating conjunction categories, based on the kind of meaning they express.
- Subordinating conjunctions about cause—These explain why an activity in the main clause happened. Words include: as, because, in order that, since, so that. Example: I like hiking and sightseeing because they’re stimulating.
- Subordinating conjunctions about condition—These contain information about the circumstances under which the main clause will be performed. Words include: even if, if, in case, in the event (that), just in case, only if, provided (that), unless, whether. Example: Even if it rains on the weekend, we’ll still go camping.
- Subordinating conjunctions about time—These determine a moment or interval when the main clause will be performed. Words include: after, as soon as, as long as, before, by the time, every time, now that, once, still, till (or ’til), until, when, whenever, while. Example: He re-hid his little “treasure” after everyone had left.
- Subordinating conjunctions about place—These determine the location of activities in the main clause. Words include: where, wherever. Example: He hid his little “treasure” where he was sure that no one would find it.
- Subordinating conjunctions about contrast—These modify the main clause in the context of the process being discussed. Words include: although, as if, as though, even though, in contrast to, just as, though, whereas, whether or not, while. Examples: Henry went to the movies, even though he’d been told not to. John never misbehaved, whereas his sister Jane was very mischievous.
Some other common subordinating conjunctions include: how, inasmuch, than, that, who.
If you’re a writer, you need to know about essential and non-essential clauses and about subordinating conjunctions. This includes when and how to use commas. Without this knowledge, the meaning of a sentence may change or may not be understood by readers.
Copyright © 2018 by Affordable Editing Services
Tendons are tough, pale/whitish cords that attach muscles to bones.
(Compare with ligaments, which attach bones to bones and are also inelastic, yet flexible.)
The origin of a tendon is the point at which it is connected to a muscle.
Tendons consist of water, type-I collagen, cells called tenocytes, minor fibrillar collagens, fibril-associated collagens and proteoglycans. Together, these components form many parallel bundles of collagen fibres that are inelastic (that is, they do not stretch in length), yet flexible (that is, they can move and adopt different shapes as needed).
Collagen fibres from within muscles are continuous with those of the attaching tendon.
Tendons insert into bone at specific locations or junctions, each called an "enthesis". At these positions the collagen fibres are mineralised and integrated into bone tissue.
Tendons concentrate the mechanical pull of muscles onto specific, small areas of bone. This enables efficient movement of the structure of the body - more specifically, movement of bones relative to other bones.
Chronic overuse of tendons can lead to microscopic tears within the collagen matrix, which may gradually weaken the tissue. Some sports/remedial massage therapists may treat tendon injuries, as may physiotherapists.
This section consists of short summaries about the structures that form the muscles of the body. This list is not exhaustive but is intended to be appropriate for students of A-Level Human Biology, ITEC courses in massage and related subjects, and other courses in health sciences. For more general information about muscles see the pages about:
This section is about the anatomical structures of muscles.
- Anterior Muscles
- Posterior Muscles
- Facial Muscles
- Muscle Terminology (Definitions)
- 1. Structure of Muscle
- 2. Structure of Muscle Cells
- 3. Muscle Filaments
- 4. Sliding Filament Theory
- 5. Neuromuscular Junction
- 6. Actions at Neuromuscular Junction
- Types of Muscle Contractions
- Muscular Disorders
- Effects of exercise on muscles
Martin Luther in German Historiography
Summary and Keywords
What does Martin Luther mean for Germany? Formulated in such a way, this is an impossible question, due in no small measure to the existence of many “Luthers” and many “Germanys.” But it also invites historical investigation. Luther has long held a privileged position in the writing of German history, stretching back to his own lifetime, even if the exact nature of that position has hardly remained static or uncontested. Luther’s position in the annals of German historiography testifies to the influence of social and political upheavals on the way in which historians understand the past—and vice versa. Each era’s critical events have encouraged certain aspects of Luther’s person and work to be remembered and others to be forgotten.
Like swapping between telephoto and wide-angle lenses, historical perspectives have moved between a narrow concentration on the German reformer’s biography and theology and a broader focus on the Protestant movement he launched in Germany. Historians have regularly enlisted Luther in an expansive, sweeping vision of the German Reformation and the emergence of the modern German nation-state with Otto von Bismarck. Indeed, contemporary ideas of nation and nationalism have had a determining influence on interpretations of Luther. This is true as much for German historians like Leopold von Ranke, writing toward the beginning of history’s professionalization as a full-fledged, independent academic discipline in the first half of the 19th century, as it is for those surveying Luther in the midst of the First World War, in the aftermath of Hitler and the Nazi era, in the postwar German Democratic Republic in the East and Federal Republic of Germany in the West, on the cusp of Germany’s “turning point” (die Wende) of 1989–1990—and even for historians now situated in the 21st century.
Luther, the Reformation, and the Nation
Like a threefold cord, the threads uniting Martin Luther, Germany, and German historiography have proven resilient. Luther could speak fondly of historians. Writing in To the Councilmen of All Cities in Germany that they Establish and Maintain Christian Schools (1524), he confessed: “How I regret now that I did not read more poets and historians, and that no one taught me them!”1 In the best known of his later introductions to contemporary pieces of historical work, the Preface to Galeatius Capella’s History (1538), he judged: “The historians, therefore, are the most useful people and the best teachers, so that one can never honor, praise, and thank them enough.”2 The manner in which German history has spoken of Luther, however, is not quite so clear-cut, even five hundred years on.
Luther and the German Reformation have together remained one of the most important themes of German historical writing since the middle of the 16th century—and especially so since the modern historiographical developments associated with Leopold von Ranke (1795–1886). The amount of attention paid to the Reformation across nearly all fields of German historical scholarship has seemingly few parallels. One might suggest, though perhaps not to the same extent, that the French Revolution has occupied a similar place in French historiography, or the Civil War in American historical work, as “the crucial and (in a quite literal sense of the term) epoch-making event by which the nature of an entire national community and of its history has been defined.”3 The excellent German historian Thomas Nipperdey (1927–1992) attested to the connection, observing: “One who was born a Protestant, as I was, and who does not take this to be an accident of birth but accepts it deliberately, is inclined to set a high, positive value on the constitutive significance of Luther and Lutheranism for the history of modernity in Germany, for the formation of personality and behavior, of society and culture.”4
Major narratives of German history and German “nationhood” have included Luther as the leading participant. As with Luther’s modern theological reception, the Luther jubilees commemorating his birth and death dates and the publication of the 95 Theses (the Thesenanschlag) precipitated much of the interest, filtered through each era’s trends both in historiography and across the wider scholarly world. Yet Luther himself was not always at the center of scholarly attention, even when “Luther’s Reformation,” to use the all-too-frequent, simplified misnomer, was.
Seeing with Ranke
It might not be too much of a stretch to begin an inquiry into the subject of Luther in modern German historiography with the statement, adapted from Nipperdey’s famous line: in the beginning was Ranke, the most widely read historian in 19th-century Germany and, indeed, Europe.5 Without advocating for a truncated view of German history that loses sight of Germany’s deep historical past, one could just as reasonably begin, like Nipperdey’s history of modern Germany, with Napoleon. Memory of the toppling of Napoleon with the 1813 Battle of Nations near Leipzig blended together with commemorations of Luther’s 95 Theses in the infamous Wartburg Fest near Eisenach on October 18–19, 1817. Luther’s image in the Wartburg event, alongside other celebrations across German-speaking Europe, rose to new heights. Frustrated with some of the official jubilee preparations across the German lands over the course of the summer of 1817, Johann Wolfgang von Goethe confessed in a letter: “The clerics and schoolmasters are a constant plague, the Reformation is to be glorified in countless writings … But I am afraid that all these efforts are going to make everything so clear that the figures will lose their poetic and mythological colors. Because, between us, there is nothing of interest in the whole thing except Luther’s character, which is also all that really impresses the people. The rest is complicated rubbish and a burden to us every new day.”6
The celebratory activities and their religiously and politically suffused rhetoric seemingly stirred a young Ranke to begin writing a biography of Luther, the now legendary “Luther Fragment.”7 His aim was to uncover the real Luther of the 16th century, separating the past from modern myths that reinterpreted the reformer as a political revolutionary—to find the Luther of history, in other words, rather than the Luther of hagiography, the latter long in existence but reemerging with a vengeance in the jubilee. Scholars have questioned both Ranke’s version of what induced this study of Luther and the identification of the fragment with the origin of Ranke’s Reformation history two decades later.8 It seems rather more likely that Luther held Ranke’s scholarly interest well before the appearance of the rose-colored, morally inspiring literature from the jubilee.9 Still, Ranke’s Luther arose from his interaction with a variety of Luther’s own writings, and thus differed from the image constructed by the Wartburg participants, mostly members of the radical nationalist student fraternities (Burschenschaften) who, in their clamoring for national unity, constitutional freedom, and a liberal pan-German government, sought to evoke Luther’s protests in their own fight against any and all perceived tyrannies. Victory on the battlefield near Leipzig, they held, had restored Luther’s liberal thought, which could now be applied to contemporary circumstances.10
Ranke’s more complete interpretation appeared in his masterful six-volume Deutsche Geschichte im Zeitalter der Reformation (1839–1847), which for the most part remained the standard treatment of the German Reformation through the end of the 19th century.11 As the opening line of the first volume revealed, the Deutsche Geschichte, much like Ranke’s earlier history of the popes, would integrate ecclesiastical and political history.12 The same approach would characterize its discussion of Reformation theology, integrating doctrinal and institutional aspects; as the sixth volume noted: “Our interest is not in the permutations of doctrine, but in the great oppositions.”13 Luther stood for reform, but his call was measured, not hurried. Politically, he bore no resemblance to radicals. Instead, he was “one of the greatest conservatives ever to have lived.”14 The resulting picture was one of domestic change, not revolution. But as the climactic event in German history, “Luther’s Reformation” also became the flashpoint for modern world history.
Ranke’s description of Luther before the Imperial Diet of Worms in 1521 displayed his concerns well:
One could almost be tempted to wish that, this time, Luther had been willing to stay put. It would have strengthened the [German] nation in its unity … if [Germany] had undertaken a common battle against the secular power of Rome under his leadership. But the answer is: the power of this spirit [Luther] would have been broken if any consideration had swayed him that was not thoroughly religious in content. He took his start not from the needs of the nation, but from religious convictions, without which he would never have accomplished anything. The eternally free spirit (der ewig freie Geist) moves on its own paths.15
Hints of Ranke’s overall paradigm also emerge here, in which the Reformation should have brought about the marriage of cultural, religious identity and the German state, but ultimately did not. Though “religious convictions” motivated Luther, Luther might also have contributed more to Germany’s national regeneration. The somewhat separate German movements for religious reform and national unification were, after all, equally interested in deliverance from Rome. “Never before,” said Ranke, “had there been a more favorable prospect for the unity of the nation and its continuation on the chosen way.”16
Luther’s rise to prominence represented “the most important thing for the future of the German nation … whether the nation would succeed in breaking away from the papacy without endangering both the state and its slowly and painfully acquired culture.” Concerning “the triumph of the Protestant system in all Germany,” Ranke wrote, “apart from any doctrinal perspective, from the purely historical point of view, it seems to me that it would have been the best thing for the national development of Germany … The fundamental strivings which now characterized the lives of the German Protestants gave a fulfilling context to the national consciousness.”17 But the survival of Catholicism consigned Germany to internal confessional and territorial divisions until the heady experiences of the 1870s: Prussian triumph over France, the Vatican Council’s decree on papal infallibility, and the push by Otto von Bismarck (1815–1898) for German Unification.
Nevertheless, Luther’s crowning moment in Ranke’s history occurred in the spring of 1522 when Luther came out of hiding at the Wartburg in order to put an end to the unrest and agitation that Wittenberg had been experiencing at the hands of Andreas Karlstadt and the Zwickau prophets. In highlighting this event, the Deutsche Geschichte laid the groundwork for many reassessments of the reformer. “Never had Luther appeared in a more heroic light,” proclaimed Ranke. “He bid defiance to the excommunication of the pope and the ban of the emperor, in order to return to his flock; not only had his sovereign warned him that he was unable to protect him, but he had himself expressly renounced his claim to protection; he exposed himself to the greatest personal danger, and that not (as many others have done) to place himself at the head of a movement, but to check it; not to destroy, but to preserve.”18 Indeed, of Luther’s return, Ranke summed up: “At his presence the tumult was hushed, the revolt quelled, and order restored … It thus became possible to develop and extend the new system of faith, without waging open warfare with that already established … Even in the theological exposition of these doctrines, it was necessary to keep in view the perils arising from opinions subversive to all morality.”19
Ranke’s Luther, in the end, was a tapestry comprised of careful readings of historical texts and other sources, contemporary national politics, cultural acts of remembrance, and longstanding confessional polemics. “Ranke did not, perhaps even could not, relate the theology of the Reformation to the Augustinianism and Paulinism upon which it drew.” Yet Jaroslav Pelikan’s verdict is apt: “What [Ranke] could do, and did do brilliantly, was to set the theological doctrines of the Reformation era into the context of the Reformation as a church movement [and] … to set the Reformation as a church movement, in turn, into the context of German and imperial politics in the first half of the 16th century.”20
The Luther-to-Bismarck Story
Until roughly the Great War, the storyline of Luther to Bismarck, to use its traditional name at least since the 1949 study by Karl Kupisch (1903–1982), enjoyed special status in modern Germany’s creation myth.21 While Ranke’s union of political and ecclesiastical history had nurtured its growth, the national-political narrative sprouted from the widespread belief that Luther’s message was peculiarly applicable to the needs of the German soul—whether in the 16th century or in the 19th and early 20th. Through the achievement of German unification in 1871, particularly when seen against the backdrop of the failure of the 1848 German assembly at Frankfurt, Bismarck came to represent for many the cultural-political success that Luther’s religious movement had set in place. When Friedrich Schleiermacher (1768–1834), the Berlin Reformed theologian, professed that Protestants must “spread the reformation among the German peoples as the form of Christianity most properly suited to them”—only then could one “allow the continued existence of Catholicism for the Latin peoples”—his words did not strike his audience as particularly novel.22 They had become commonplace in historical reflection on Luther. Ranke held that Luther’s Reformation was “a product of the distinctive German genius,” a sentiment shared by Ranke’s contemporaries as well as Johann Gottlieb Fichte (1762–1814), Johann Gottfried Herder (1744–1803), and writers even in the early 18th century.23
The Bismarck-led “culture war” (Kulturkampf) waged against German Catholics during the imperial era or Kaiserreich represented for many the “struggle for civilization,” as Fritz Fischer (1908–1999) put it, which naturally accompanied the 1871 military victory over France.24 The height of this “joyous assault” came in the Luther jubilee of 1883, which not only gave birth to the magisterial critical edition of Luther’s works, the Weimarer Ausgabe, alongside important scholarly associations such as the Society for Reformation History (Verein für Reformationsgeschichte) but also “assumed the character of a kind of belated birthday party for the new Germany.”25 The title Luther: Ein deutsches Heldenleben (Luther: A German Hero’s Life, 1862) by Adolf Schottmüller (1798–1871), historian at the University of Berlin, serves as only one significant case in point.26
Perhaps most notable of the historical reflections on Luther arising from the 1883 jubilee is that from Heinrich von Treitschke (1834–1896). A Prussian nationalist historian, Treitschke had lectured briefly in Kiel and Heidelberg and from 1874 onward held a prestigious post at the University of Berlin. In 1871, he became a member of the Reichstag. Treitschke held more or less liberal views, especially in religion. He remained, moreover, fiercely anti-Jewish—and anti-Catholic, despite his Catholic wife.27 He had become a kind of “popular idol among German professors” and could easily fill the largest lecture halls in the university. Even his sister, curiously, characterized him as a more academic version of Martin Luther.28
Treitschke delivered his 1883 jubilee address in Darmstadt, titled “Luther and the German Nation.” Like much of its contemporary historiography, it imbibed the principles of Romantic nationalism and the Kulturkampf of its time.
No other modern nation can boast of a man who was the mouthpiece of his countrymen in quite the same way, and who succeeded as fully in giving expression to the deepest essence of his nation … “Here speaks our own blood.” From the deep eyes of this unrefined son of a German farmer flashed the ancient and heroic courage of the Germanic races—a courage that does not flee from the world, but rather seeks to dominate it by the strength of its moral purpose. Because he gave utterance to ideas already living in the soul of his nation, this poor monk … was able to grow and develop very rapidly, until he had become as dangerous to the new [Catholic] Roman universal empire as the assailing Germanic hordes were to the empire of the Caesars.29
Simultaneously praising Luther’s impact on the German language (he made it possible “for God to speak German to the German nation”); putting a new spin on Luther’s teaching on the doctrine of the two kingdoms, which now offered greater justification for “temporal” state sovereignty and political emancipation from “spiritual powers” in an anti-Roman key; and championing Luther’s reforms in German education, Treitschke concluded that Luther was “the pioneer of the whole German nation” who possessed “all the native energy and unquenchable fire of German defiance” and whose “power of independent thought typifies the German character.”30
Treitschke countered suggestions that it was the Italian Renaissance, rather than the German Reformation, which marked the first light of the modern world—an ideologically fraught debate that studies like Jacob Burckhardt’s The Civilization of the Renaissance in Italy (1860) had helped to catalyze.31 “The Italians lacked the strength to act on their ideas, and, against their own conscience, they continued to obey a church they derided. The Germans, by contrast, dared to shape their lives according to the truth they had discovered,” he reasoned. “Because the historical world is the world of the will and because actions, not thoughts, determine the fate of nations, the story of modern man begins not with Petrarch or the artists of the Quattrocento, but with Martin Luther.”32
Germany’s “poor monk” had thrust open the doors to modernity, Treitschke held, but he also left behind an unfulfilled mission: overcoming the divisions of the nation, which had experienced additional ruptures in the march of historical progress from the 1520s through the Thirty Years’ War to the present day. “To close this gulf, to revive evangelical Christendom in such a way that it might become capable of ruling our entire nation,” proclaimed Treitschke, “that is the task that we recognize.”33
Luther and the Kulturkampf
By the end of the century, the Luther-to-Bismarck line had achieved near canonical status, even if some notable Catholic historians still objected strongly to the hegemony of the German Protestant historical paradigm.34 “How much indeed have we Catholics allowed ourselves to be led astray concerning our own past from Protestant and Protestantizing ‘architects of history’!” lamented the Ultramontane Catholic historian and priest Johannes Janssen (1829–1891).35 Among dissenting histories of Luther and Luther’s Germany, Janssen’s own imposing eight-volume Geschichte des deutschen Volkes seit dem Ausgang des Mittelalters (1878–1894) warrants mention. Janssen built upon the earlier arguments of the eminent Catholic church historian in Munich Ignaz von Döllinger (1799–1890), who had attempted a counter-narrative to Ranke’s History of the Popes (1834–1836) that traced out what he saw as disastrous effects of Luther’s theology through the Enlightenment and the French Revolution.36
More mainstream (and Lutheran) historians, for their part, dismissed Janssen’s work, as did the towering historian Max Lenz (1850–1932), one of the preeminent historical thinkers of the Kaiserreich.37 A participant in the 1883 jubilee, which occasioned his own Luther biography, Lenz returned to these themes in the 1917 jubilee as well, giving an address in Hamburg titled “Luther and the German Spirit.” Subsequently, he published a collection of essays under the title Von Luther zu Bismarck.38
Toward the close of 1917, Germany’s political and religious communities faced a series of crises, which shook the historical profession and Luther scholarship as well, as scholars such as Erich Marcks (1861–1938) testified.39 As Lenz put the matter: “Would Luther recognize our war as his own? Is there a bridge that leads from Luther’s religion to the new Germany, to the life interests and ideals we are fighting for, to the things that we hold sacred?”40 The continued success of Ranke’s commitment to tell the truth about the past for its own sake, “as it actually happened” (wie es eigentlich gewesen), in place of the Ciceronian-humanist ideal of “history as the teacher of life” (historia magistra vitae), was not altogether clear.
Even as Protestant historians continued to elevate Luther to a national icon, hero, and harbinger of the modern German state, both the Kulturkampf and the incipient Luther Renaissance triggered something of a backlash for Luther studies. The Austrian-born, Dominican historian Heinrich Denifle (1844–1905) figured prominently among these revisionist scholars. Denifle lectured in Graz and was influenced by the work of Janssen. In 1883 he began working as an archivist in the Vatican Archives in Rome. As the first volumes of the Weimarer Ausgabe found their way from printer to bookshop, Denifle censured the editorial work for its lack of attention to patristic and medieval ideas flowing into Luther’s thought. Making use of the Vatican’s copy of Luther’s lectures on Romans—well before the lectures were published in 1908 and before they appeared in the Weimarer Ausgabe in 1938—Denifle argued that Luther’s understanding of medieval mysticism and nominalist philosophy was essentially misguided, and that Luther’s new doctrine of justification by faith alone would lead inexorably to immorality.41 His rediscovery of the Romans lectures and the accompanying citations gave him an advantage over many Lutheran scholars, who did not yet have access to the manuscript.
Denifle also advanced a psychological reading of Luther’s development as a kind of repressed monk, as did the Jesuit church historian Hartmann Grisar (1845–1932) at the University of Innsbruck, who, while moving in similar directions, attempted to push beyond the palpable disdain that Denifle expressed for Luther.42 In Ernst Schulin’s estimation, “Were it not for Denifle’s iconoclasm and Grisar’s vituperations—deterring even Catholics from reading these learned works—they might fairly be seen as the spearhead of a new type of analytical research into the interdependency of late medieval theology and Luther’s religious thinking, and of the connected probes for the precise moment in time of Luther’s ‘apostasy’ or reformative breakthrough, and, not least, of the psychological Luther interpretations.”43 Nevertheless, the ensuing debate began to underscore in new ways the question of Luther’s assumed relation to modernity on the one hand and his roots in the medieval world on the other.
Luther historiography until at least 1917 continued to revolve mostly around the reformer’s role in the formation of the German nation, anti-Catholicism, and German cultural, educational, and linguistic development. World War I and its aftermath changed things. Germany’s defeat in 1918 “utterly robbed the Luther-to-Bismarck narrative of its plausibility.”44
Luther’s popularity as a topic for historical investigation dwindled in the Weimar era, though some universal historians, church historians, theologians, and political figures continued, often controversially, to appropriate many of Luther’s ideas.45 Apart from the joint historical-theological Luther Renaissance, there were relatively few engagements. Among them was the 1925 biography by Gerhard Ritter (1888–1967), which essentially made Ritter’s reputation as a prominent German historian.46 Like many of his generation, Ritter emerged from the war with the sense that the “bridge” of which Lenz spoke had crumbled, the pieces of its foundations scattered across the European trenches. But Luther could still provide a kind of mirror for understanding German identity. Luther, Ritter famously concluded, “is ourselves: the eternal German.”47 It may, in fact, be insightful to read Ritter’s work alongside that of the French historian Lucien Febvre (1878–1956), one of the founding fathers of the French Annales school of history, whose own Luther biography appeared three years later.48 Ritter’s description followed the predominant practice of seeing Luther mostly in the German context. Febvre similarly affirmed this. “Luther is, in all things, of his race and of his country,” wrote the latter. “He is, fundamentally, a German.”49 Yet Febvre’s use of sources, especially recently produced editions of Erasmus’s correspondence, at least held out the possibility of a broader European perspective on Luther.
Increasingly, Ritter looked away from the normative and generally presentist focus on Luther as a German revolutionary. From the 1925 biography to his last published work on Luther, a popular essay from 1947, he came to see Luther as the leader of a spiritual cast of Christianity whose line of sight moved inward rather than outward in the direction of political revolution. The relative weakness of the German state, somewhat paradoxically, allowed for the preservation of Luther’s spiritual message, even as France’s strong state extinguished the Protestant message and England’s mixed state transposed it into a superficial morality.50 But Ritter’s most important contributions did not come from his Luther biography, which did not represent a significant advance in Luther scholarship. Rather, it was in his role as editor of the important journal Archive for Reformation History (Archiv für Reformationsgeschichte—ARG), a position he assumed in 1938, that he left his mark.
The ARG first went to print in 1903 under the auspices of the Verein für Reformationsgeschichte. Under Ritter’s leadership, the journal helped to reshape the study of German Reformation history, broadening the field’s vision by incorporating “the global effects of the Reformation” (die Weltwirkungen der Reformation). The journal’s editorial from November 1938, composed by Ritter, Heinrich Bornkamm (1901–1977), and Otto Scheel (1876–1954), represented a clear turning point in the historiography of the German Reformation:
The Reformation is a major achievement of the German spirit (Geist), and its historical understanding must be preserved by the whole of the German people. However, this task can be accomplished only by using a historiography that is based not on specialized and fragmented research but on a universal approach. It cannot be reduced to “church history” or “secular history” or “political history.” … The [ARG] is not concerned with the history of the Protestant churches as such, but rather with the history of the period of the Reformation and the following epoch before the Enlightenment, which was mostly determined by religious interests.
The goal was to bring about “truly modern Reformation research that unites theological, political, legal, and socioeconomic and philosophical methods.”51
If not necessarily transforming Luther scholarship directly, the new approach advocated by Ritter, Bornkamm, and Scheel nevertheless broadened the scope of interest.52 Luther would no longer remain only a national-political hero, just as “Reformation” in the singular would, however gradually, come to be seen as “Reformations” in the plural. The straitjacket of grand narratives was loosened—or at least so it seemed.
From Luther to Hitler?
That Hitler’s “seizure of power” (Machtergreifung) in 1933 coincided with the 450th anniversary of Luther’s birth was surely a striking coincidence. It was no coincidence, however, that some “historically and politically conscious contemporaries manufactured connections between Luther and Hitler” that same year, as Hartmut Lehmann has observed.53 The year 1933 marked only the beginning of the Luther-to-Hitler story—a name owing to popular titles from the American polymath William Montgomery McGovern in 1941, the German refugee schoolmaster Peter F. Wiener in 1945, and the American journalist William L. Shirer in 1960.54 Concentrating mostly on texts like Luther’s late On the Jews and Their Lies (1543), these works enlisted Luther as a forerunner to modern fascism in nearly all aspects of life, one who had destroyed any sense of German morality.55 For Wiener, even Luther’s marriage to Katharina von Bora, a runaway nun, proved that “degradation of womanhood and the taking away of all the sacred character of marriage is one of the main reasons why Germany with Luther began its unchristian way down the hill.”56 In Germany, this connection was first popularized by Hans Preuß (1876–1951) in a remarkable 1933 comparison of Luther’s curriculum vitae with Hitler’s.57
If the crude, reductionistic narratives that traced Luther’s Reformation to the Third Reich occupied the thoughts of certain historians during the ascendancy of National Socialism, the postwar period, again coinciding with a Luther jubilee in 1946, witnessed a new iteration of the discussion. The conversation still focused on the war but now wrestled with the matter of German guilt and the Allied prosecution of war crimes through the Nuremberg Trials. In one formulation, from the German pastor Hans Asmussen, the dominant question was: “Should Luther go to Nuremberg?”58
Certain concepts of Luther’s (race; das Volk) received new attention, but not always in careful historical perspective. In the words of Thomas Mann from an address before the American Library of Congress in May 1945: “Martin Luther, a gigantic incarnation of the German spirit … I frankly confess that I do not love him. Germanism in its unalloyed state, the Separatist, Anti-Roman, Anti-European shocks me and frightens me, even when it appears in the guise of evangelical freedom and spiritual emancipation.” Mann desired to cast “no aspersions against Luther’s greatness.” But this had to be qualified. Luther was “great in the most German manner, great and German in his duality as a liberating and at once reactionary force, a conservative revolutionary. He not only reconstituted the Church; he actually saved Christianity…. He was a liberating hero—but in the German style, for he knew nothing of liberty.”59 Luther remained indelibly inscribed in the contradictions both of modern German history and of his modern interpreters.
Luther and the Sonderweg Thesis
Among professional historians, postwar attitudes toward Luther increasingly turned on the so-called “Sonderweg thesis,” the notion that Germany had taken a divergent, special authoritarian path to modernity, in contrast to the “normal,” democratic route taken by its English and French neighbors.60 Debate over the Sonderweg thesis erupted particularly, though not exclusively, among many German-speaking émigrés, often Jewish, who fled the Nazi regime.61 For these interpreters, Luther and Lutheranism tended to function as one of a series of missteps responsible for Germany’s “irregular” development and only partial modernization.
Though sometimes provocatively stated, the actual arguments were in reality more nuanced than the Luther-to-Hitler literature. Fritz Stern (1926–2016), the German-born historian who emigrated with his parents in 1938 and was himself a proponent of the Sonderweg thesis, nevertheless observed “how complicated the question of Nazi roots really was; all the tomes and slogans about Germany’s inevitable path ‘from Luther to Hitler’ seemed puerile and wrongheaded.” Stern said he had “always thought” that “the theme ‘from Luther to Hitler,’ suggesting that Hitler was the culmination of old Germanic traditions of authoritarianism and militarism … was the negative version of the National Socialist creed that proclaimed Hitler as the savior [of] old German virtues, the crowning of German history.”62 Both were invented histories, sidestepping the ideological appeal of certain theologians and historians to Luther’s condemnations of Jews, Turks, Anabaptists, or peasants—condemnations which were regularly wrenched from their 16th-century contexts and plunged into the 1930s and 1940s.63
Historians such as Fritz Fischer, who was known primarily for his investigations into the German origins of World War I, contrasted Luther’s influence in Germany with John Calvin’s in Western Europe, picking up on earlier suggestions made by Ernst Troeltsch (1865–1923).64 Calvinist social thought had given rise, he argued, to a political theology of resistance; Luther’s mostly spiritual focus, however, left German Protestants resigned to the dictates of authoritarian power.65 Hajo Holborn (1902–1969), who had studied with Friedrich Meinecke (1862–1954) and taught in Heidelberg and Berlin before leaving for the United States in 1934, concluded that Germany’s Lutheran territorial churches, under the formative influence of Luther’s approach to religion and politics, left German Protestantism “particularly vulnerable to the National Socialist onslaught,” the cause of which “lay in the nationalistic and reactionary spirit that had found a home in these churches.”66 Though the Sonderweg thesis fell out of favor in most circles in the mid-1980s, many of the implicit claims concerning Luther’s influence still make their mark on Luther scholarship, including an imposed teleological framework leading from Luther and the German Reformation to Germany’s unfortunate experience of modernity.67
Luther and Marxism
At roughly the same time as the Sonderweg debate, Luther studies also began to reflect the postwar Marxist vision of the German Democratic Republic (GDR), which reproduced in some sense a Whiggish, neo-Rankean picture of Luther and a national German Protestant movement.68 In the GDR’s early years “there were heroes and there were villains,” as one writer stated. “The hero, of course, was Thomas Müntzer [and] the villain, of course, was Martin Luther.”69 Depictions of Luther as a German political reactionary and “class traitor” were born largely from the European revolutions of 1848. In his famous 1850 work on the German Peasants’ War of 1524–1525, Friedrich Engels (1820–1895) pointed to Müntzer as a pioneer who led the common people in their struggle for social justice and was cruelly tortured and martyred for the movement.70 Engels decried Luther as a “lackey of the princes” (Fürstenknecht) and a “butcher of the peasants” (Bauernschlächter).71 His tract inspired the likes of August Bebel (1840–1913), Franz Mehring (1846–1919), Karl Kautsky (1854–1938), Ernst Bloch (1885–1977), and others, who rehabilitated Müntzer’s image as that of a courageous soul struggling against the overreaching powers of feudalism on behalf of Germany’s working peasants.72
Anniversaries of the Reformation in 1967, the Peasants’ War in 1975, and Luther’s birth in 1983 helped incite a remarkable outpouring of interest in Luther.73 The notion of an “early bourgeois revolution” (Frühbürgerliche Revolution), developed by the Leipzig historian Max Steinmetz (1912–1990), was central to the scholarly output. According to Steinmetz, Germany’s lower classes showed initial signs of an uprising in 1476, stirrings that included Luther’s stand against Rome in 1517 and trial at Worms in 1521 and culminated in the Peasants’ War in 1524–1525. “But while Müntzer led the peasants into battle, and even sacrificed his life, Luther had become a traitor. He not only sided with the reactionary feudal powers, this [in] the terminology of Steinmetz and his friends, but Luther even issued pamphlets in which he denounced the peasants and their leaders.”74 The early bourgeois revolution reached its end in Münster in 1534–1535 with the defeat of the Anabaptists.
This new grand narrative would be revised and modified by historians in both the GDR and the Federal Republic of Germany (FRG). For many Germans, the Cold War brought a period of diminished interest in the Reformation, in which Luther no longer functioned as the reference point for defining cultural, religious, or psychological identity that he had been during the government of Konrad Adenauer (1876–1967) in the West or the early tenure of Erich Honecker (1912–1994) in the East. In the 1980s, however, Luther came to be considered once more a major historical figure in his own right: a great, if flawed, forefather of Germany.75 Steinmetz himself had by then proclaimed that, together, the Reformation and the Peasants’ War constituted “the most significant revolutionary mass movement of the German people until the November Revolution of 1918.”76 Scholars like Gerhard Brendler, Adolf Laube, and Günter Vogler, among others in the GDR, increasingly following the scholarship produced by Reformation scholars in the West and elsewhere, helped inspire the turnabout in Luther’s reception.77 This was due in part to the policy of increased openness toward the GDR pursued by Willy Brandt (1912–1992), chancellor of the FRG from 1969 to 1974, known as Brandt’s Ostpolitik.78 But it also owed to scholarly and political processes already set in motion. During the 1967 commemorations, Brendler observed that now “whoever affirms one [Müntzer or Luther] need not damn the other.”79
In both German states, the 1983 jubilee generated a massive bibliography, even more so than 1967 had. The publications were impressive in both quantity and quality and included, among so many others, a second edition of the opening volume of Martin Brecht’s now-standard three-volume biography of Luther.80 The celebrations arguably reached their high point in the International Luther Congress at Erfurt in August 1983.81 With the fall of the Berlin Wall, German reunification, and the many other complex changes of die Wende (1989–1990), some of this body of scholarship warrants re-examination.
Luther at Five Hundred and Counting
Recent decades have witnessed two dominant trends for understanding Luther and the Reformation, even if both no longer hold quite the same kind of explanatory power in the early years of the 21st century: the communalization thesis, associated at first with the likes of Peter Blickle, which has tended to see the German Reformation from the perspective of late medieval history; and the confessionalization thesis, formulated by Heinz Schilling and Wolfgang Reinhard, which has tended to view the German Reformation as a point of commencement for early modern German history.82 A vital concern for both approaches is the extent to which Luther’s activities in the 1520s constituted a rupture or a radical break (Umbruch), or whether they should be seen instead as part of a much lengthier process of progressive reform, highlighting continuity more than abrupt change. Behind both approaches is the seminal work of the German Protestant church historian Bernd Moeller, which integrated strands in political, social, and church history.83
For Luther scholarship, the way forward does not seem to depend on an either-or answer. One can study Luther without being forced to make a false choice between dramatic “ruptures” (Umbrüche) and long-term intellectual, religious, and political developments; the combination of both is decisive. In the 21st century, scholars such as Thomas Kaufmann, Schilling, and many others have produced new biographical studies in this vein.84 After decades of interest in the concept of confessionalization, to which the ecumenical context of the 1960s also contributed, a move back to the 1520s may once again be on the horizon, as the work of several established German historians testifies. The 2017 jubilee has propelled scholarship to look anew at Luther and the early Reformation, and to emphasize the point that Luther did not exist in a vacuum, but alongside a number of critically important voices and forces of change.85
What is more, the importance of the German nation-state in the historiography is in a state of flux. If the debate over communalization and confessionalization (to which one should also add proto-industrialization) reflects differing conceptions of borders in time, the predominantly national focus on Luther and the German Reformation has in recent decades faced differing conceptions of borders in space. Scholars engaged in transnational and global history, along with national historians operating with a renewed focus on European integration, seek to change the line of sight. Attempting to free German history from national and nationalist accounts, Jürgen Osterhammel, among others, has argued that “national history is not the historiographical norm.”86 It may have emerged in the middle of the 18th century through the likes of David Hume and received new impetus from Ranke, but too often the defining national moments of the 20th century have been read back into the past.
Critical events in Germany’s 19th and 20th centuries have functioned like the vanishing point of visual artists, as Helmut Walser Smith has argued.87 Experiences such as the Battle of Nations in 1813, unification in 1871, or the commencement of the genocidal killings of the Holocaust in 1941 have structured the bigger historical picture of Germany, placing certain elements in the foreground and consigning others to the background. If this is true for renderings of German history writ large, it is no doubt true for German renderings of Luther. If the 19th century has been “held hostage by the twentieth,” then all the more so has the 16th-century Luther been “held hostage” by the political and cultural exigencies of nearly every age since Luther’s death in 1546.88
To come full circle, Ranke once wrote, “The purpose of a historian depends on his point of view.”89 Luther, of course, has certainly been no stranger to the cycles of historiography and ever-changing frames of reference. In sum, German historiography manifests nothing if not the perennial Luther.
Review of the Literature
Given the massive body of literature on Luther, one might reasonably expect a vast array of scholarship devoted to Luther’s position in German historiography, but that is only partially the case. Apart from the many biographies, most studies in general German historical scholarship touch on Luther obliquely in one of two ways: in the context of a broader discussion of the German Reformation or in view of a particular era such as the Kaiserreich or the GDR. The former explicitly follows the Rankean method. The latter reflects Ranke’s approach as well, but does so by concentrating more on shifts within and occasioned by particular historical thinkers.
The summaries provided by A. G. Dickens and John Tonkin, though dated, still offer some insights for understanding past paradigms for interpreting the Reformation.90 More recent developments are discussed in an insightful roundtable in German History (2014).91
Ernst Schulin’s essay from 1984 provides an excellent entry point into Luther’s position in these discussions.92 Regular additions to the ever-expanding bibliography on Luther can be gleaned from periodicals such as the ARG and its annual supplementary volume, Literaturbericht, which reviews new studies on Luther in particular and early modern religion and society in general. Other helpful accounts stem from the voluminous publications connected to the various Luther and Reformation jubilees, some of which are cited in the notes. The entries by Kaufmann and Schilling included below also warrant mention as especially valuable.
Lehmann and Thomas A. Brady Jr. have by all accounts cast the most prolonged and penetrating look at Luther in German historiography. Many of Lehmann’s essays now appear together for the benefit of the reader in Luthergedächtnis (2012), a collection that repays careful consideration. Brady’s treatments are dispersed, though two may be highlighted here: the dialogue with Schilling sponsored by the German Historical Institute (1998) and the monograph German Histories in the Age of Reformations, 1400–1650 (2009).93
Bornkamm, Heinrich. Luther im Spiegel der deutschen Geistesgeschichte. 2d ed. Göttingen, Germany: Vandenhoeck & Ruprecht, 1970.
Brady, Thomas A., Jr. The Protestant Reformation in German History, with a Comment by Heinz Schilling. Washington, DC: German Historical Institute, 1998.
Brady, Thomas A., Jr. German Histories in the Age of Reformations, 1400–1650. New York: Cambridge University Press, 2009.
Bräuer, Siegfried. Martin Luther in marxistischer Sicht von 1945 bis zum Beginn der achtziger Jahre. Berlin: Evangelische Verlagsanstalt, 1983.
Brinks, J. H. “Einige Überlegungen zur politischen Instrumentalisierung Martin Luthers durch die deutsche Historiographie im 19. und 20. Jahrhundert.” Zeitgeschichte 22 (1995): 233–248.
Dickens, A. G., and John Tonkin. The Reformation in Historical Thought. Cambridge, MA: Harvard University Press, 1985.
“Forum: Religious History beyond Confessionalization.” German History 34 (2014): 579–598.
Foster, Karl, ed. Wandlungen des Lutherbildes. Würzburg, Germany: Echter Verlag, 1966.
Hamm, Berndt, Bernd Moeller, and Dorothea Wendebourg, eds. Reformationstheorien: Ein kirchenhistorischer Disput über Einheit und Vielfalt der Reformation. Göttingen, Germany: Vandenhoeck & Ruprecht, 1995.
Iggers, Georg G. The German Conception of History: The National Tradition of Historical Thought from Herder to the Present. Rev. ed. Middletown, CT: Wesleyan University Press, 1983.
van Ingen, Ferdinand, and Gerd Labroisse, eds. Luther-Bilder im 20. Jahrhundert. Amsterdam: Rodopi, 1984.
Kaufmann, Thomas. Martin Luther. Munich: C. H. Beck, 2006.
Kaufmann, Thomas. Geschichte der Reformation. Frankfurt: Insel Verlag, 2009.
Kupisch, Karl. Von Luther zu Bismarck: Zur Kritik einer historischen Idee: Heinrich von Treitschke. Berlin: Verlag Haus und Schule, 1949.
Lehmann, Hartmut. Luthergedächtnis 1817 bis 2017. Göttingen, Germany: Vandenhoeck & Ruprecht, 2012.
Maser, Peter. “Mit Luther alles in Butter?”: Das Lutherjahr 1983 im Spiegel ausgewählter Akten. Berlin: Metropol, 2013.
Medick, Hans, and Peer Schmidt, eds. Luther zwischen den Kulturen: Zeitgenossenschaft—Weltwirkung. Göttingen, Germany: Vandenhoeck & Ruprecht, 2004.
Müller, Johann Baptist, ed. Die Deutschen und Luther: Texte zur Geschichte und Wirkung. Stuttgart: Reclam, 1983.
Pelikan, Jaroslav. “Leopold von Ranke as Historian of the Reformation: What Ranke Did for the Reformation—and What the Reformation Did for Ranke.” In Leopold von Ranke and the Shaping of the Historical Discipline, edited by Georg G. Iggers and James M. Powell, 89–98. Syracuse, NY: Syracuse University Press, 1990.
Schilling, Heinz. Martin Luther: Rebell in einer Zeit des Umbruches. Munich: C. H. Beck, 2014.
Schilling, Heinz, ed. Der Reformator Martin Luther 2017: Eine wissenschaftliche und gedenkpolitische Bestandsaufnahme. Berlin: Walter de Gruyter, 2014.
Schulin, Ernst. “Luther’s Position in German History and Historical Writing.” Australian Journal of Politics and History 30 (1984): 85–98.
(1.) WA 15:46.18–21; LW 45:370.
(2.) WA 50:384; LW 34:276.
(3.) Jaroslav Pelikan, “Leopold von Ranke as Historian of the Reformation: What Ranke Did for the Reformation—and What the Reformation did for Ranke,” in Leopold von Ranke and the Shaping of the Historical Discipline, ed. Georg G. Iggers and James M. Powell (Syracuse, NY: Syracuse University Press, 1990), 90.
(4.) Thomas Nipperdey, “Luther und die Bildung der Deutschen,” in Luther und die Folgen: Beiträge zur sozialgeschichtlichen Bedeutung der lutherischen Reformation, ed. Hartmut Löwe and Claus-Jürgen Roepke (Munich: Chr. Kaiser, 1983), 27.
(5.) Thomas Nipperdey, German History from Napoleon to Bismarck 1800–1866, trans. Daniel Nollan (Princeton, NJ: Princeton University Press, 1996), 1.
(6.) Johann Wolfgang von Goethe to Karl Ludwig von Knebel, August 22, 1817, in Briefwechsel zwischen Goethe und Knebel (1774–1832), vol. 2 (Leipzig: Brockhaus, 1851), 229.
(7.) See the extant manuscript in Leopold von Ranke, Aus Werk und Nachlass, vol. 3: Frühe Schriften, ed. W. P. Fuchs (Munich: Oldenbourg, 1973), 329–446.
(8.) Carl Hinrichs, “Rankes Lutherfragment von 1817 und der Ursprung seiner universalhistorischen Anschauung,” in Festschrift für Gerhard Ritter zu seinem 60. Geburtstag, ed. Richard Nürnberger (Tübingen, Germany: J. C. B. Mohr, 1950), 299–321; Gunter Berg, Leopold von Ranke als akademischer Lehrer: Studien zu seinen Vorlesungen und seinem Geschichtsdenken (Göttingen, Germany: Vandenhoeck & Ruprecht, 1968), 109–113.
(9.) See Carl Hinrichs, Ranke und die Geschichtstheologie der Goethezeit (Göttingen, Germany: Musterschmidt Wissenschaftlicher Verlag, 1954).
(10.) See, e.g., Lutz Winckler, Martin Luther als Bürger und Patriot: Das Reformationsjubiläum von 1817 und der politische Protestantismus des Wartburgfestes (Lübeck, Germany: Matthiesen Verlag, 1969).
(11.) Leopold von Ranke, Deutsche Geschichte im Zeitalter der Reformation, in Ranke, Sämmtliche Werke, vols. 1–6 (Berlin: Duncker & Humblot, 1867–1890).
(12.) Ranke, Deutsche Geschichte, 1:3.
(13.) Ranke, Deutsche Geschichte, 6:112.
(14.) Ranke, Deutsche Geschichte, 4:5.
(15.) Ranke, Deutsche Geschichte, 1:332. Note the ambiguity of “spirit” (Geist) here, referring to the Holy Spirit, Luther’s spirit, or perhaps another alternative.
(16.) Ranke, Deutsche Geschichte, 1:289.
(17.) Quoted in Leonard Krieger, Ranke: The Meaning of History (Chicago: University of Chicago Press, 1977), 167.
(18.) Ranke, Deutsche Geschichte, 2:27.
(19.) Ranke, Deutsche Geschichte, 2:27–28.
(20.) Pelikan, “Leopold von Ranke,” 95.
(21.) Karl Kupisch, Von Luther zu Bismarck: Zur Kritik einer historischen Idee: Heinrich von Treitschke (Berlin: Verlag Haus & Schule, 1949).
(22.) Quoted in Werner Schuffenhauer and Klaus Steiner, eds., Martin Luther in der deutschen bürgerlichen Philosophie 1517–1845 (Berlin: Akademie-Verlag, 1983), 364.
(23.) Konrad Repgen, “Reform,” in The Oxford Encyclopedia of the Reformation, ed. Hans J. Hillerbrand, 4 vols. (New York: Oxford University Press, 1996), 3:392–395.
(24.) Fritz Fischer, Der Erste Weltkrieg und das deutsche Geschichtsbild: Beiträge zur Bewältigung eines historischen Tabus (Düsseldorf: Droste, 1977), 67.
(25.) Thomas A. Brady Jr., The Protestant Reformation in German History, with a Comment by Heinz Schilling (Washington, DC: German Historical Institute, 1998), 15. Cf. Gangolf Hübinger, Kulturprotestantismus und Politik: Zum Verhältnis von Liberalismus und Protestantismus im wilhelmischen Deutschland (Tübingen, Germany: Mohr, 1997). On the Society for Reformation History, see Luise Schorn-Schütte, ed., 125 Jahre Verein für Reformationsgeschichte (Gütersloh, Germany: Gütersloher Verlagshaus, 2008).
(26.) See Hartmut Lehmann, “Martin Luther als deutscher Nationalheld im 19. Jahrhundert,” Luther: Zeitschrift der Luther-Gesellschaft 55 (1984): 53–65.
(27.) Hermann Haering, “Über Treitschke und seine Religion,” in Aus Politik und Geschichte: Gedächtnisschrift für Georg von Below (Berlin: Deutsche Verlagsgesellschaft für Politik und Geschichte, 1928), 218–279.
(28.) Jonathan Steinberg, Bismarck: A Life (New York: Oxford University Press, 2011), 246.
(29.) Heinrich von Treitschke, “Luther und die deutsche Nation,” in Treitschke, Historische und politische Aufsätze, vol. 4 (Berlin: S. Hirzel, 1897), 393–394.
(30.) Treitschke, “Luther und die deutsche Nation,” 390–392, 387–388, 378–380, 384.
(31.) Martin A. Ruehl, The Italian Renaissance in the German Historical Imagination (Cambridge, U.K.: Cambridge University Press, 2015), 17–18.
(32.) Treitschke, “Luther und die deutsche Nation,” 386.
(33.) Treitschke, “Luther und die deutsche Nation,” 396.
(34.) Bernd Faulenbach, Ideologie des deutschen Weges: Die deutsche Geschichte in der Historiographie zwischen Kaiserreich und Nationalsozialismus (Munich: Beck, 1980), 125–131; Hans Heinz Krill, Die Rankerenaissance: Max Lenz und Erich Marcks: Ein Beitrag zum historisch-politischen Denken in Deutschland 1880–1935, vol. 3 (Berlin: Walter de Gruyter, 1962), 127–225.
(35.) Johannes Janssen to Georg Wehry, December 25, 1874, in Johannes Janssens Briefe, ed. Ludwig Pastor, vol. 2 (Freiburg im Breisgau, Germany: Herder, 1920), 16. On Janssen, see Andreas Holzem, “‘Die Cultur trennte die Völker nicht: sie einte und band’: Johannes Janssen (1829–1891) als europäischer Geschichtsschreiber der Deutschen?” in Die europäische Integration und die Kirchen II: Denker und Querdenker, ed. Irene Dingel and Heinz Duchhardt (Göttingen, Germany: Vandenhoeck & Ruprecht, 2012), 9–49.
(36.) J. J. Ignaz von Döllinger, Die Reformation, ihre innere Entwicklung und ihre Wirkungen, 3 vols. (Regensburg, Germany: G. Joseph Manz, 1846–1848). On Catholic approaches to Luther, see Heinrich Lutz, “Zum Wandel der katholischen Lutherinterpretation,” in Objektivität und Parteilichkeit in der Geschichtswissenschaft, ed. Reinhart Koselleck, Wolfgang J. Mommsen, and Jörn Rüsen (Munich: Deutscher Taschenbuch-Verlag, 1977), 173–198.
(37.) Max Lenz, “Janssen’s Geschichte des deutschen Volkes, eine analytische Kritik,” Historische Zeitschrift 50 (1883): 231–284.
(38.) Max Lenz, Luther und der deutsche Geist: Rede zur Reformationsfeier 1917 in Hamburg (Hamburg: Broscheck, 1917); Lenz, Kleine historische Schriften, vol. 2: Von Luther zu Bismarck (Munich: Oldenbourg, 1920).
(39.) Erich Marcks, Luther und Deutschland: Eine Reformationsrede im Kriegsjahr 1917 (Leipzig: Quelle & Meyer, 1917), 1.
(40.) Lenz, Luther und der deutsche Geist, 7.
(41.) P. Heinrich Denifle and Albert Maria Weiss, Luther und das Luthertum, 2 vols. (Mainz, Germany: F. Kirchheim, 1904–1909).
(42.) Hartmann Grisar, Luther, 3 vols. (Freiburg im Breisgau, Germany: Herder, 1911–1912); Grisar, Martin Luthers Leben und sein Werk (Freiburg im Breisgau, Germany: Herder, 1926).
(43.) Ernst Schulin, “Luther’s Position in German History and Historical Writing,” Australian Journal of Politics and History 30 (1984): 91.
(44.) Brady, Protestant Reformation, 18.
(45.) See James M. Stayer, Martin Luther, German Saviour: German Evangelical Theological Factions and the Interpretation of Luther, 1917–1933 (Montreal: McGill-Queen’s University Press, 2000).
(46.) On Ritter, see, e.g., Christoph Cornelißen, Gerhard Ritter: Geschichtswissenschaft und Politik im 20. Jahrhundert (Düsseldorf: Droste, 2001), and Klaus Schwabe, “Gerhard Ritter—Werk und Person,” in Gerhard Ritter: Ein politischer Historiker in seinen Briefen (Boppard am Rhein, Germany: Boldt, 1984), 1–170.
(47.) Gerhard Ritter, Luther: Gestalt und Symbol (Munich: Bruckmann, 1925), 1.
(48.) On the Annales historical paradigm, see André Burguière, The Annales School: An Intellectual History, trans. Jane Marie Todd (Ithaca, NY: Cornell University Press, 2009).
(49.) Lucien Febvre, Un Destin: Martin Luther (Paris: Rieder, 1928), 146. Cf. Peter Schöttler, ed., Lucien Febvre: Martin Luther (Frankfurt: Campus Verlag, 1996).
(50.) Thomas A. Brady Jr., “Comment: Gerhard Ritter,” in Paths of Continuity: Central European Historiography from the 1930s to the 1950s, ed. Hartmut Lehmann and James Van Horn Melton (Cambridge, U.K.: Cambridge University Press, 1994), 112. Cf. Gerhard Ritter, Die Weltwirkung der Reformation, 4th ed. (Munich: Oldenbourg, 1975).
(51.) Gerhard Ritter, Heinrich Bornkamm, and Otto Scheel, “Zur Neugestaltung unserer Zeitschrift,” Archiv für Reformationsgeschichte 35 (1938): 1–7.
(52.) Cf. Hartmut Lehmann, “Heinrich Bornkamm im Spiegel seiner Lutherstudien von 1933 und 1947,” in Lehmann, Luthergedächtnis 1817 bis 2017 (Göttingen, Germany: Vandenhoeck & Ruprecht, 2012), 138–150; Lehmann, “Luther als Kronzeuge für Hitler: Anmerkungen zu Otto Scheels Lutherverständnis in den 1930er Jahren,” in Lehmann, Luthergedächtnis, 160–175.
(53.) Lehmann, Luthergedächtnis, 151.
(54.) William Montgomery McGovern, From Luther to Hitler: The History of Fascist-Nazi Political Philosophy (Boston: Houghton Mifflin, 1941); William L. Shirer, The Rise and Fall of the Third Reich: A History of Nazi Germany (New York: Simon & Schuster, 1960).
(55.) WA 53:417–552; LW 47:137–306.
(56.) Peter F. Wiener, Martin Luther: Hitler’s Spiritual Ancestor (London: Hutchinson, 1944). Cf. Gordon Rupp, Martin Luther: Hitler’s Cause or Cure? (London: Lutterworth, 1945).
(57.) Hans Preuß, Luther und Hitler: Als Beigabe—Luther und die Frauen (Erlangen, Germany: Freimund-Verlag, 1933).
(58.) Hans Asmussen, “Muß Luther nach Nürnberg?” Nordwestdeutsche 11–12 (1947): 31–37.
(59.) Thomas Mann, “Germany and the Germans,” in Mann, Thomas Mann’s Addresses Delivered at the Library of Congress, 1942–1949 (Washington, DC: Library of Congress, 1963), 52–53.
(60.) For an overview, see Jürgen Kocka, “German History before Hitler: The Debate about the German Sonderweg,” Journal of Contemporary History 23 (1988): 3–16.
(61.) See Harmut Lehmann and James J. Sheehan, An Interrupted Past: German-Speaking Refugee Historians in the United States after 1933 (Cambridge, U.K.: Cambridge University Press, 1991).
(62.) Fritz Stern, Five Germanys I Have Known (New York: Farrar, Straus & Giroux, 2006), 165, 203–204.
(63.) See, e.g., Susannah Heschel, The Aryan Jesus: Christian Theologians and the Bible in Nazi Germany (Princeton, NJ: Princeton University Press, 2008), 144–145. Cf. Uwe Siemon-Netto, The Fabricated Luther: Refuting Nazi Connections and Other Modern Myths, 2d ed. (Saint Louis, MO: Concordia, 2007), and Heiko A. Oberman, Wurzeln des Antisemitismus: Christenangst und Judenplage im Zeitalter von Humanismus und Reformation (Berlin: Severin & Siedler, 1981).
(64.) Cf. Fritz Fischer, Germany’s Aims in the First World War (New York: Norton, 1967).
(65.) Fritz Fischer, “Der deutsche Protestantismus und die Politik im 19. Jahrhundert,” Historische Zeitschrift 171 (1951): 473–518.
(66.) Hajo Holborn, A History of Modern Germany, 1840–1945 (Princeton, NJ: Princeton University Press, 1969), 739–740.
(67.) See the Sonderweg critique of David Blackbourn and Geoff Eley, The Peculiarities of German History (Oxford: Oxford University Press, 1984).
(68.) Cf. Martin Roy, Luther in der DDR: Zum Wandel des Lutherbildes in der DDR-Geschichtsschreibung: mit einer dokumentarischen Reproduktion (Bochum, Germany: Verlag Dieter Winkler, 2000).
(69.) Lehmann, Luthergedächtnis, 271–272.
(70.) Friedrich Engels, “Der deutsche Bauernkrieg,” in Gesamtausgabe (MEGA), by Karl Marx and Friedrich Engels, division 1, vol. 10 (Berlin: Dietz, 1977), 367–443.
(71.) Engels, “Der deutsche Bauernkrieg,” 385–386.
(72.) August Bebel, Der deutsche Bauernkrieg: Mit Berücksichtigung der hauptsächlichen sozialen Bewegungen des Mittelalters (Braunschweig, Germany: W. Bracke, 1876); Franz Mehring, Deutsche Geschichte vom Ausgange des Mittelalters: Ein Leitfaden für Lehrende und Lernende (Berlin: Vorwärts, 1910); Karl Kautsky, Vorläufer des neueren Sozialismus, vol. 2: Der Kommunismus in der deutschen Reformation, 7th ed. (Berlin: Dietz, 1923); Ernst Bloch, Thomas Münzer als Theolge der Reformation (Munich: Kurt Wolff, 1921).
(73.) See, e.g., Leo Stern and Max Steinmetz, eds., 450 Jahre Reformation (Berlin: VEB Deutscher Verlag der Wissenschaften, 1967).
(74.) Lehmann, Luthergedächtnis, 279.
(75.) Cf. Heinz Schilling, “Reformation—Umbruch oder Gipfelpunkt eines Temps des Réformes?” in Die frühe Reformation in Deutschland als Umbruch, ed. Bernd Moeller (Gütersloh, Germany: Gütersloher Verlagshaus, 1998), 13–34.
(76.) Max Steinmetz, “Theses on the Early Bourgeois Revolution in Germany, 1476–1535,” in The German Peasant War of 1525: New Viewpoints, ed. Bob Scribner and Gerhard Benecke (London: Unwin, 1979), 9.
(77.) Gerhard Brendler, Martin Luther: Theologie und Revolution; Eine marxistische Darstellung (Cologne: Pahl-Rugenstein, 1983); Günter Vogler, Siegfried Hoyer, and Adolf Laube, eds., Martin Luther: Leben, Werk, Wirkung (Berlin: Evangelische Verlagsanstalt, 1983).
(78.) See the thorough treatment in Peter Maser, “Mit Luther alles in Butter?”: Das Lutherjahr 1983 im Spiegel ausgewählter Akten (Berlin: Metropol, 2013).
(79.) Gerhard Brendler, “Reformation und Forschritt,” in Stern and Steinmetz, 450 Jahre Reformation, 67.
(80.) Martin Brecht, Martin Luther, 3 vols. (Stuttgart: Calwer, 1983–1987). The work appeared in English as Brecht, Martin Luther, trans. James L. Schaaf, 3 vols. (Minneapolis: Fortress, 1985–1993).
(81.) Cf. Alexander Fischer and Günther Heydemann, eds., Geschichtswissenschaft in der DDR, vol. 2: Vor- und Frühgeschichte bis Neueste Geschichte (Berlin: Duncker & Humblot, 1990).
(82.) See, e.g., Peter Blickle, Die Gemeindereformation: Die Menschen des 16. Jahrhunderts auf dem Weg zum Heil (Munich: Oldenbourg, 1987); Heinz Schilling, Konfessionskonflikt und Staatsbildung (Gütersloh, Germany: Gütersloher Verlagshaus, 1981); and Wolfgang Reinhard, “Konfession und Konfessionalisierung in Europa,” in Bekenntnis und Geschichte: die Confessio Augustana im historischen Zusammenhang, ed. Wolfgang Reinhard (Munich: Verlag Ernst Vögel, 1981), 165–189. Cf. Thomas A. Brady Jr., “Confessionalization: The Career of a Concept,” in Confessionalization in Europe, 1555–1700: Essays in Honor and Memory of Bodo Nischan, ed. John M. Headley, Hans J. Hillerbrand, and Anthony J. Papalas (Burlinton, VT: Ashgate, 2004), 1–20. See also the synthesis in Joel F. Harrington and Helmut Walser Smith, “Confessionalization, Community, and State-Building in Germany, 1555–1870,” Journal of Modern History 69 (1997): 77–101.
(83.) Bernd Moeller, Reichsstadt und Reformation: Neue Ausgabe, ed. Thomas Kaufmann (Tübingen, Germany: Mohr Siebeck, 2011). The first edition appeared in English as Moeller, Imperial Cities and the Reformation: Three Essays, ed. and trans. H. C. Erik Midelfort and Mark U. Edwards (Philadelphia: Fortress, 1972).
(84.) Heinz Schilling, Martin Luther: Rebell in einer Zeit des Umbruches (Munich: C. H. Beck, 2014); Thomas Kaufmann, Geschichte der Reformation (Frankfurt: Insel Verlag, 2009); Thomas Kaufmann, Martin Luther (Munich: C. H. Beck, 2006).
(85.) Heinz Schilling, ed., Der Reformator Martin Luther 2017: Eine wissenschaftliche und gedenkpolitische Bestandsaufnahme (Berlin: Walter de Gruyter, 2014).
(86.) Jürgen Osterhammel, “Transnationale Gesellschaftsgeschichte: Erweiterung oder Alternative?” Geschichte und Gesellschaft 27 (2001): 474. Cf. Helmut Walser Smith, “For a Differently Centered Central European History: Reflections on Jürgen Osterhammel, Geschichtswissenschaft jenseits des Nationalstaats,” Central European History 37 (2004): 115–136.
(87.) Helmut Walser Smith, The Continuities of German History: Nation, Religion, and Race across the Long Nineteenth Century (Cambridge, U.K.: Cambridge University Press, 2008), 13–38.
(88.) H. Glenn Penny, “The Fate of the Nineteenth Century in German Historiography,” Journal of Modern History 80 (2008): 82.
(89.) Leopold von Ranke, Geschichte der romanischen und germanischen Völker von 1494 bis 1514, vol. 1 (Leipzig: G. Reimer, 1824), iii.
(90.) A. G. Dickens and John Tonkin, eds., The Reformation in Historical Thought (Cambridge, MA: Harvard University Press, 1985).
(91.) “Forum: Religious History beyond Confessionalization,” German History 34 (2014): 579–598.
(92.) Schulin, “Luther’s Position.”
(93.) Lehmann, Luthergedächtnis; Thomas A. Brady Jr., The Protestant Reformation in German History, with a Comment by Heinz Schilling (Washington, DC: German Historical Institute, 1998); Thomas A. Brady Jr., German Histories in the Age of Reformations, 1400–1650 (New York: Cambridge University Press, 2009). |
Most distant galaxy yet spotted
London: The combined power of NASA’s Spitzer and Hubble Space Telescopes, together with a cosmic magnification effect, has led to the discovery of what may be the most distant galaxy ever seen.
Light from the young galaxy captured by the orbiting observatories was emitted when our 13.7-billion-year-old universe was just 500 million years old.
The far-off galaxy existed during an important era, when the universe had only just emerged from the so-called cosmic Dark Ages. During this period, the universe went from a dark, starless expanse to a recognizable cosmos full of galaxies. The discovery of this faint, small galaxy therefore opens a window onto the deepest, most remote epochs of cosmic history.
“This galaxy is the most distant object we have ever observed with high confidence,” said lead author Wei Zheng of Johns Hopkins University.
“Future work involving this galaxy -- as well as others like it that we hope to find -- will allow us to study the universe’s earliest objects and how the Dark Ages ended,” Zheng noted.
Light from the primordial galaxy traveled approximately 13.2 billion light-years before reaching NASA’s telescopes. In other words, the starlight snagged by Hubble and Spitzer left the galaxy when the universe was just 3.6 percent of its present age. Technically speaking, the galaxy has a redshift, or “z,” of 9.6. (Redshift is a term used by astronomers to mark cosmic distances by denoting how much an object’s light has been stretched to longer wavelengths by the expansion of the universe.)
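As a back-of-the-envelope illustration (not from the article), here is a minimal Python sketch of what a redshift of 9.6 does to wavelengths; the hydrogen Lyman-alpha line is chosen purely as a familiar rest-frame reference, not as a measurement reported for this galaxy.

```python
# Minimal sketch: wavelength stretching at redshift z.
# The Lyman-alpha line (121.6 nm) is an illustrative choice only.

def observed_wavelength(rest_nm: float, z: float) -> float:
    """Observed wavelength after cosmological redshift z."""
    return rest_nm * (1.0 + z)

z = 9.6
rest = 121.6  # nm, hydrogen Lyman-alpha
print(f"{rest} nm emitted at z = {z} arrives near {observed_wavelength(rest, z):.0f} nm")
# -> roughly 1289 nm: ultraviolet light arrives in the infrared,
#    which is why infrared observatories matter for such distant objects.
```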
Unlike previous detections of galaxy candidates from this epoch, which were glimpsed in only a single colour, or waveband, this newfound galaxy has been seen in five different wavebands.
To catch sight of these early, distant galaxies, astronomers rely on “gravitational lensing.” In this phenomenon, predicted by Albert Einstein a century ago, the gravity of foreground objects warps and magnifies the light from background objects. A massive galaxy cluster situated between our galaxy and the newfound, early galaxy magnified the latter’s light, brightening the remote object some 15 times and bringing it into view.
Based on the Hubble and Spitzer observations, astronomers think the distant galaxy is less than 200 million years old. It is also small and compact, containing only about one percent of the Milky Way’s mass.
According to leading cosmological theories, the first galaxies should indeed have started out tiny. They then progressively merged, eventually accumulating into the sizable galaxies of the more modern universe.
“These first galaxies likely played the dominant role in the epoch of reionization, the event that signaled the end of the universe’s Dark Ages. In essence, the light was finally able to penetrate the fog of the universe,” said Carnegie’s Daniel Kelson.
About 400,000 years after the Big Bang, neutral hydrogen gas formed from cooling particles. The first luminous stars and their host galaxies, however, did not emerge until a few hundred million years later. The energy released by the earliest galaxies is thought to have caused the neutral hydrogen strewn throughout the universe to ionize, or lose an electron, a state that the gas has remained in since that time.
Astronomers plan to study the rise of the first stars and galaxies and the epoch of reionization with the successor to both the Hubble and Spitzer telescopes, NASA’s James Webb Space Telescope, slated for launch in 2018.
The newly described, distant galaxy will likely be a prime target, given how fortuitously it is magnified by strong gravitational lensing.
The work was published September 20 in Nature.
1/2 Overview - Term 2, 2021
In Term 2 Reading, our Year 1/2 students have had a focus on Searching for and Using Information to make meaning as they read. Using key information found in the text and artwork, students have been able to make connections, visualise and critique. They have developed skills in summarising a text. This involved reading, making meaning and condensing a text into a main idea and supportive evidence.
In Term 2 Writing, our Year 1/2 students have continued to learn about the 6+1 traits of writing (ideas, word choice, organisation, sentence fluency, conventions, voice and presentation). We have had a key focus on the traits of conventions, sentence fluency and organisation. These traits enabled students to create texts with a clear message for the reader.
In Term 2 Maths, our 1/2 students have developed their understanding of 2D shapes. They have named and drawn the shapes and identified their features, including corners and sides. They have also explored 3D objects, naming and making models of the objects and identifying their features, including the number of faces, edges and vertices. Students have also developed an understanding of the value of a digit according to its place in a number. They have used various materials to make a model of a number, partitioned numbers, and begun to use mental strategies to solve addition and subtraction problems.
During Term 2 Inquiry students undertook an investigation into an aspect of science involving change. Students created a deep dive question in relation to the overarching topic of ‘why do things on Earth change?’ Students developed skills in questioning, researching, predicting, communicating and evaluating. Students engaged in a variety of methods to share their knowledge and to teach their peers about their chosen topic. |
Scalar potential of a point charge
By Glenn Decker
Like a shock wave from a supersonic aircraft, synchrotron radiation is emitted as an expanding wave front that radiates outward and away from a charged particle when the particle suddenly transitions from a curved to a straight trajectory. To an observer near the top of the figure, the arrival of the wave front corresponds to a huge electromagnetic impulse of X-ray radiation, similar to a sonic boom. Using devices called undulators at the Advanced Photon Source, the intensity of these impulses can be further enhanced, resulting in very intense X-ray sources.
Scalar potential of a point charge shortly after exiting a dipole magnet, moving left to right. One can see the synchrotron radiation wave front pull away from the electron (actually a positron, since the scalar potential is positive). The particle here is moving at only 0.9 times the speed of light; at speeds approaching the speed of light, the height of the wave front would diverge toward infinity while its width shrank to zero. The observer frame moves along with the positron, which is why it stays in the middle of the figure.
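For reference, the quantity visualized here is presumably the Liénard–Wiechert scalar potential of a point charge, a standard result of electrodynamics that the caption does not spell out; in the notation below, n-hat is the unit vector from the charge's retarded position to the observer and beta = v/c at the retarded time.

```latex
% Lienard–Wiechert scalar potential, evaluated at the retarded time
\varphi(\mathbf{r},t)
  = \frac{1}{4\pi\varepsilon_0}\,
    \left.
    \frac{q}{\bigl(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta}\bigr)\,
             \lvert \mathbf{r}-\mathbf{r}_s \rvert}
    \right|_{t_{\mathrm{ret}}},
\qquad
t_{\mathrm{ret}} = t - \frac{\lvert \mathbf{r}-\mathbf{r}_s(t_{\mathrm{ret}})\rvert}{c}
```

The factor (1 - n·β) in the denominator is what makes the forward wave front grow without bound as the speed approaches c, matching the caption's remark that the wave front's height would diverge at the speed of light.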
Computer vision enables images, or sequences of images, to be processed by a computer using algorithms. There are many aspects to computer vision, including mathematics, imaging hardware, imaging software, physics (especially optics), signal processing and artificial intelligence. It is therefore evident that computer vision has a lot of overlap with image processing.
Some basic techniques used in computer vision, illustrated in the sketch after this list, are:
- Image acquisition
- Feature extraction
- High-level processing
- Decision making |
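To make the pipeline concrete, here is a minimal Python sketch covering the four steps above; it assumes OpenCV is installed, and both the input filename and the contour-count threshold are hypothetical choices for illustration.

```python
# Minimal computer-vision pipeline sketch (assumes: pip install opencv-python).
import cv2

# 1. Image acquisition: load an image from disk (a camera feed would also work).
image = cv2.imread("frame.png")  # "frame.png" is a hypothetical input file
if image is None:
    raise SystemExit("frame.png not found")

# 2. Feature extraction: convert to grayscale, then detect edges.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# 3. High-level processing: group the edge pixels into contours.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 4. Decision making: a toy classification based on contour count.
print("busy scene" if len(contours) > 50 else "simple scene")
```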
The experimental approach has shown great success in animal studies, increasing the survival rate of mice with a deadly melanoma from 0 to 90 percent. The implant could also be used to treat diseases of the immune system such as arthritis and diabetes, and, potentially, to train other kinds of cells, including stem cells used to repair damage to the body.
Currently, when dendritic cells are trained outside the body, most of them die when transplanted.
First, the implant attracts dendritic cells by releasing a kind of chemical signal called a cytokine. Once the cells are there, they take up temporary residence inside spongelike holes within the polymer, allowing time for the cells to become highly active.
The polymer carries two signals that serve to activate dendritic cells. In addition to displaying cancer-specific antigens to train the dendritic cells, it is also covered with fragments of DNA, the sequence of which is typical of bacteria. When cells grab on to these fragments, they become highly activated. "This makes the cells think they're in the midst of infection," Mooney explains. "Frequently, the things you can do to cells are transient--especially in cancer, where tumors prevent the immune system from generating a strong response." This extra irritant was necessary to generate a strong response, the Harvard researchers found.
When implanted just under the skin of mice carrying a deadly form of melanoma, the polymer increased their survival rate to about 90 percent. By contrast, conventional immunotherapies that require treating the cells outside the body are 60 percent effective, says Mooney.
Mooney developed the polymer systems with more than melanoma in mind, however. He hopes to develop similar implants for treating other types of cancer, which should simply be a matter of changing the antigen carried by the polymer. But the approach could also be used to treat other kinds of immune disorders. For example, different chemical signals could dampen immune cells' activity in order to prevent transplant rejections and treat autoimmune diseases such as type 1 diabetes and rheumatoid arthritis, which result when the immune system attacks normal tissues. Mooney also hopes that the polymer system can train a different class of cells altogether. Just as fragile dendritic cells seem to respond better to being trained inside the body, this might be a more effective way to recruit and reprogram stem cells.
If proved in people, the cell-training polymers might also bypass some of the regulatory hurdles and expense faced by cell therapies, since devices are more readily approved by the Food and Drug Administration. Indeed, Mooney predicts that the therapy will move quickly through safety tests in large animals (the next step before human trials), and he expects to bring the cancer immunotherapy to clinical trials soon. "All the components are widely used and tested, and shown to be safe," he says. |
Study and academic expertise take on new meaning in the farm environment. Students work to acquire expertise in science, technology, and communication, the arenas of 21st-century contribution. They strive to understand our time in history and to move the story of human beings toward a new, more successful chapter. They learn about other cultures to expand their world perspective, to explore different ways of thinking, and to tackle relevant issues. They are students of their time and place in history so they can understand the planet we all live on and the people who are our global family.
Mathematics: Discovery and Skill
Mathematical thinking is a gift to humans at birth, as all humans possess mathematical minds, claims Dr. Montessori. Adolescent mathematical growth depends on how it is used: in a social setting and for purposeful work in the community. When everyone values, uses and performs math in a variety of situations, the community benefits from problem-solving and creative thinking as a way of life.
Studying mathematics on the farm occurs during every project and business venture as well as in extended lab sessions. Students engage in:
- Warm-up mental math exercises
- Formal lessons
- Group projects that require data analysis and mathematical problem-solving
- Applied projects related to farm tasks and business ventures
- Individually paced follow-up work
- Individually designed skill work
Students have the opportunity to move through Algebra, Geometry, and Algebra 2 as they are ready. Hands-on math projects related to work on the farm, like cost analysis of garden and animal production, are interwoven with daily math work.
Science and Stewardship
Dr. Montessori suggested that working and studying on the land would provide "limitless opportunities for scientific and historic studies." Science is studied and applied to real work through student-owned projects called Occupations. Each project combines practical tasks with academic study.
A small group of 6-10 students takes on the challenges necessary to run the farm's businesses. The required background expertise and skills are learned to accomplish the task. The group also accepts responsibility for managing its area of the farm using that acquired expertise. Occupations have included:
- Acquiring and raising pigs for meat
- Assessing the woodlot for hardwood lumber harvest
- Producing and selling maple syrup
- Preserving produce from the organic gardens
- Monitoring water quality and the operation of the waste treatment plant
- Planning menus and educating the community on nutrition |
Polar Coordinates and Complex Numbers
The most basic method of graphing polar equations is by plotting points and doing a quick sketch. Graphing polar equations is a skill that requires the ability to plot points and sometimes to recognize special cases of polar curves, such as cardioids, roses and conic sections. First, however, we need to understand the polar coordinate system and how to plot points for graphing polar equations.
Let's graph a polar equation. I have a pretty easy polar equation here: r equals theta over pi, for theta greater than or equal to 0. Now the best approach when you're trying to graph something new is to plot some points. And so let's start with theta equals 0. If theta equals 0, r equals 0, so that's going to be a point. And let's try multiples of pi over 4. So when theta equals pi over 4, I get pi over 4 divided by pi, which is a quarter. Pi over 2? Pi over 2 divided by pi is a half.
Let's try 3 pi over 4. 3 pi over 4 divided by pi is three quarters. And you can kind of see the pattern. The number I'm going to get here is basically this number without the pi, right? Divide out the pi. So pi will give me 1 and so on. Let me plot some of these points, and see what kind of a curve I'm getting.
So I have 0, 0, right? r is 0; theta could be anything as long as r is 0, and that gives me the origin. And then pi over 4, one quarter. Now I've made this so that each of these circles represents a quarter unit. So pi over 4, one quarter, is right here. And then pi over 2, in this direction, one half is right here. 3 pi over 4 gives me three quarters here. Then pi gives me 4 quarters, or 1, and following this pattern, if we wanted to keep going, 5 pi over 4 would give me 5 quarters: 1, 2, 3, 4, 5. 3 pi over 2 is 6 pi over 4, so I go 1, 2, 3, 4, 5, 6, and then 7 pi over 4 would give me 7 quarters: 1, 2, 3, 4, 5, 6, 7. And finally, let's just finish at 2 pi. 2 pi is the same as 8 pi over 4, so I go out to 2, right? 8 quarters.
Alright, let's see if we can draw this. It looks kind of like a spiral. And we're just about done. There, and the graph will continue forever, right? It just spirals around and around. This is the graph of the equation r equals theta over pi, for theta greater than or equal to 0.
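For readers who want to verify the sketch, here is a minimal Python rendering of the same curve, assuming NumPy and matplotlib are available; it overlays the lesson's quarter-unit points on the spiral.

```python
# Minimal sketch: the spiral r = theta / pi on polar axes.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 4 * np.pi, 400)  # two full revolutions
r = theta / np.pi

ax = plt.subplot(projection="polar")
ax.plot(theta, r)

# The plotted points from the lesson: multiples of pi/4 land at r = 1/4, 1/2, ...
pts = np.arange(0, 2 * np.pi + 0.01, np.pi / 4)
ax.plot(pts, pts / np.pi, "o")
plt.show()
```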
The perfect fifth or diapente is a musical interval which is responsible for the most consonant, or stable, harmony outside of the unison and octave. It is a valuable interval in chord structure, song development, and western tuning systems. The prefix perfect identifies it as belonging to the group of perfect intervals (perfect fourth, perfect octave), so called because of their extremely simple pitch relationships resulting in a high degree of consonance.
The perfect fifth is historically relevant because it is the first accepted harmony (besides the octave) of Gregorian chant, a very early formal style of musical composition. The perfect fifth occurs on the root of all major and minor chords (triads) and their extensions. It is one of three musical intervals that span five diatonic scale degrees; the others are the diminished fifth, which is one chromatic semitone smaller, and the augmented fifth, which is one chromatic semitone larger. The solfège of the perfect fifth is "do-so". A helpful way to recognize a perfect fifth is to hum the opening of "Twinkle, Twinkle, Little Star", which begins with a familiar perfect fifth. The perfect fifth is abbreviated as P5 and its inversion is the perfect fourth.
In simple terms, a perfect fifth can be played on a piano keyboard by holding down two notes seven semitones (half steps) apart, such as C and the G above it.
Use in chords
The perfect fifth is a basic element in the construction of major and minor triads, and because these chords occur frequently in much music, the perfect fifth interval occurs just as often. However, due to its high level of consonance, the perfect fifth contributes very little to the overall harmonic effect of any chords containing it (except power chords). Because of this, in any situation that necessitates the omission of notes from a chord (such as for practical reasons of fingering) the note forming the perfect fifth above the chord's root can often be safely omitted, its absence being barely, if at all, noticeable.
A bare fifth, open fifth or empty fifth is a chord containing only a perfect fifth with no third. The closing chord of Mozart's Requiem is an example of a piece ending on an empty fifth, though these "chords" are common in Christian Sacred Harp singing and throughout rock music, especially hard rock, metal, and punk music, where overdriven or distorted guitar can make thirds sound muddy, and fast chord-based passages are made easier to play by combining the four most common guitar hand shapes into one. Rock musicians refer to them as power chords and often include octave doubling (i.e. their bass note is doubled one octave higher, e.g. F3-C4-F4).
Use in tuning and tonal systems
A perfect fifth in just intonation, a just fifth, corresponds to a pitch ratio of 3:2, while in 12-tone equal temperament a perfect fifth is equal to seven semitones, or 700 cents, which is equivalent to a pitch ratio of 2^(7/12):1 (approximately 1.4983), about two cents smaller than the just fifth. Because the just pitch ratio is a ratio of small numbers, the perfect fifth is harmonically significant.
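As a quick check of those numbers, here is a minimal Python sketch using only the standard library; the two-cent gap quoted above falls out directly.

```python
# Minimal sketch: the just fifth (3:2) vs. the equal-tempered fifth (2**(7/12)).
import math

def cents(ratio: float) -> float:
    """Size of an interval in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

just_fifth = 3 / 2
et_fifth = 2 ** (7 / 12)  # about 1.4983

print(f"just fifth: {cents(just_fifth):.3f} cents")  # ~701.955
print(f"ET fifth:   {cents(et_fifth):.3f} cents")    # 700.000
print(f"gap:        {cents(just_fifth) - cents(et_fifth):.3f} cents")  # ~1.955
```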
The circle of fifths is a model of pitch space for the chromatic scale (chromatic circle) which considers nearness not as adjacency but as the number of perfect fifths required to get from one note to another.
Here is an explanation due to Daniel F. Styer, professor of physics at Oberlin. Daniel's original is at https://mail.google.com/mail/?ui=2&view=bsp&ver=ohhl4rw8mbn4. He uses general relativity and the equivalence principle. The equivalence principle is not entirely true -- it IS possible to distinguish between gravity and acceleration -- but Daniel says that it is good enough for this purpose: no one has ever succeeded in measuring the difference.

Consider an accelerating space ship and two clocks. Clock T is in the tail and clock N is in the nose. Each clock sends out a signal once a second. The situation is not symmetric: clock N measures clock T's signals as arriving more than one second apart, and clock T measures clock N's signals as arriving less than one second apart. Both clocks agree that T is slower, so there is no paradox.

What's neat about this is that the difference depends on the distance between T and N. The further apart they are, the greater the differential in rate. This is what you need to get agreement with the special relativity equations: the effect depends directly on the distance between the clocks. When the two clocks are reunited, the T clock will be behind the N clock by the appropriate amount.
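To make the distance dependence quantitative, the standard first-order result (a textbook formula, not part of Styer's explanation as quoted) for a ship of length L with proper acceleration a is:

```latex
% Fractional rate difference between nose and tail clocks (first order)
\frac{\Delta\nu}{\nu} \;\approx\; \frac{aL}{c^{2}},
\qquad
\Delta t_{\text{lag}} \;\approx\; \frac{aL}{c^{2}}\,t
\quad\text{(valid for } aL \ll c^{2}\text{)}
```

By the equivalence principle, the same formula with a replaced by the gravitational acceleration g gives the rate difference between clocks at different heights in a gravitational field.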
An alphabet is a standard set of letters (basic written symbols or graphemes) which is used to write one or more languages based on the general principle that the letters represent phonemes (basic significant sounds) of the spoken language. This is in contrast to other types of writing systems, such as syllabaries (in which each character represents a syllable) and logographies (in which each character represents a word, morpheme, or semantic unit).
According to a terminology introduced by Peter T. Daniels, an "alphabet" in the narrow sense is one that represents both vowels and consonants as letters equally. The first "true alphabet" in this sense was the Greek alphabet, which was developed on the basis of the earlier Phoenician alphabet. In other alphabetic scripts, such as the original Phoenician, Hebrew or Arabic, letters predominantly or exclusively represent only consonants; such a script is also called an abjad. A third type, called abugida or alphasyllabary, is one where vowels are shown by diacritics or modifications of consonantal base letters, as in Devanagari and other South Asian scripts.
There are dozens of alphabets in use today, the most popular being the Latin alphabet (which was derived from the Greek). Many languages use modified forms of the Latin alphabet, with additional letters formed using diacritical marks. While most alphabets have letters composed of lines (linear writing), there are also exceptions such as the alphabets used in Braille, fingerspelling, and Morse code.
Alphabets are usually associated with a standard ordering of their letters. This makes them useful for purposes of collation, specifically by allowing words to be sorted in alphabetical order. It also means that their letters can be used as an alternative method of "numbering" ordered items, in such contexts as numbered lists.
The English word alphabet came into Middle English from the Late Latin word alphabetum, which in turn originated in the Greek ἀλφάβητος (alphabētos), from alpha and beta, the first two letters of the Greek alphabet. Alpha and beta in turn came from the first two letters of the Phoenician alphabet, and originally meant ox and house respectively.
African and Middle Eastern scripts
The history of the alphabet started in ancient Egypt. By the 27th century BC, Egyptian writing had a set of some 24 hieroglyphs, called uniliterals, to represent syllables that begin with a single consonant of the language, plus a vowel (or no vowel) to be supplied by the native speaker. These glyphs were used as pronunciation guides for logograms, to write grammatical inflections, and, later, to transcribe loan words and foreign names.
In the Middle Bronze Age, an apparently "alphabetic" system known as the Proto-Sinaitic script appears in Egyptian turquoise mines in the Sinai Peninsula, dated to circa the 15th century BC and apparently left by Canaanite workers. In 1999, John and Deborah Darnell discovered an even earlier version of this first alphabet at Wadi el-Hol, dated to circa 1800 BC and showing evidence of having been adapted from specific forms of Egyptian hieroglyphs that could be dated to circa 2000 BC, strongly suggesting that the first alphabet had been developed around that time. Based on letter appearances and names, it is believed to be based on Egyptian hieroglyphs. This script had no characters representing vowels. An alphabetic cuneiform script with 30 signs, including three which indicate the following vowel, was invented in Ugarit before the 15th century BC. This script was not used after the destruction of Ugarit.
The Proto-Sinaitic script eventually developed into the Phoenician alphabet, which is conventionally called "Proto-Canaanite" before ca. 1050 BC. The oldest text in Phoenician script is an inscription on the sarcophagus of King Ahiram. This script is the parent script of all western alphabets. By the tenth century BC, two other forms can be distinguished, namely Canaanite and Aramaic. The Aramaic form gave rise to the Hebrew script. The South Arabian alphabet, a sister script to the Phoenician alphabet, is the script from which the Ge'ez alphabet (an abugida) is descended. Vowelless alphabets, which are not true alphabets, are called abjads, currently exemplified in scripts including Arabic, Hebrew, and Syriac. The omission of vowels was not a satisfactory solution, and some "weak" consonants came to be used to indicate the vowel quality of a syllable (matres lectionis). These had a dual function, since they were also used as pure consonants.
The Proto-Sinaitic or Proto-Canaanite script and the Ugaritic script were the first scripts with a limited number of signs, in contrast to the other widely used writing systems at the time: Cuneiform, Egyptian hieroglyphs, and Linear B. The Phoenician script was probably the first phonemic script, and it contained only about two dozen distinct letters, making it a script simple enough for common traders to learn. Another advantage of Phoenician was that it could be used to write down many different languages, since it recorded words phonemically.
The script was spread by the Phoenicians across the Mediterranean. In Greece, the script was modified to add the vowels, giving rise to the ancestor of all alphabets in the West. Vowels were indicated in the same way as consonants, making this the first true alphabet. The Greeks took letters which did not represent sounds that existed in Greek and changed them to represent the vowels. The vowels are significant in the Greek language, and the syllabical Linear B script which was used by the Mycenaean Greeks from the 16th century BC had 87 symbols, including 5 vowels. In its early years, there were many variants of the Greek alphabet, a situation which caused many different alphabets to evolve from it.
The Greek alphabet, in its Euboean form, was carried over by Greek colonists to the Italian peninsula, where it gave rise to a variety of alphabets used to write the Italic languages. One of these became the Latin alphabet, which was spread across Europe as the Romans expanded their empire. Even after the fall of the Roman state, the alphabet survived in intellectual and religious works. It eventually became used for the descendant languages of Latin (the Romance languages) and then for most of the other languages of Europe.
Some adaptations of the Latin alphabet are augmented with ligatures, such as æ in Old English and Icelandic and Ȣ in Algonquian; by borrowings from other alphabets, such as the thorn þ in Old English and Icelandic, which came from the Futhark runes; and by modifying existing letters, such as the eth ð of Old English and Icelandic, which is a modified d. Other alphabets only use a subset of the Latin alphabet, such as Hawaiian, and Italian, which uses the letters j, k, x, y and w only in foreign words.
Another notable script is Elder Futhark, which is believed to have evolved out of one of the Old Italic alphabets. Elder Futhark gave rise to a variety of alphabets known collectively as the Runic alphabets. The Runic alphabets were used for Germanic languages from AD 100 to the late Middle Ages. Its usage is mostly restricted to engravings on stone and jewelry, although inscriptions have also been found on bone and wood. These alphabets have since been replaced with the Latin alphabet, except for decorative usage for which the runes remained in use until the 20th century.
The Old Hungarian script is a contemporary writing system of the Hungarians. It was in use during the entire history of Hungary, albeit not as an official writing system. From the 19th century it once again became more and more popular.
The Glagolitic alphabet was the initial script of the liturgical language Old Church Slavonic and became, together with the Greek uncial script, the basis of the Cyrillic script. Cyrillic is one of the most widely used modern alphabetic scripts, and is notable for its use in Slavic languages and also for other languages within the former Soviet Union. Cyrillic alphabets include the Serbian, Macedonian, Bulgarian, and Russian alphabets. The Glagolitic alphabet is believed to have been created by Saints Cyril and Methodius, while the Cyrillic alphabet was invented by the Bulgarian scholar Clement of Ohrid, who was their disciple. They feature many letters that appear to have been borrowed from or influenced by the Greek alphabet and the Hebrew alphabet.
Beyond logographic Chinese writing, many phonetic scripts exist in Asia. The Arabic alphabet, Hebrew alphabet, Syriac alphabet, and other abjads of the Middle East are developments of the Aramaic alphabet, but because these writing systems are largely consonant-based, they are often not considered true alphabets.
Most alphabetic scripts of India and Eastern Asia are descended from the Brahmi script, which is often believed to be a descendant of Aramaic.
In Korea, the Hangul alphabet was created by Sejong the Great. Hangul is a unique alphabet: it is a featural alphabet, where many of the letters are designed from a sound's place of articulation (P to look like the widened mouth, L to look like the tongue pulled in, etc.); its design was planned by the government of the day; and it places individual letters in syllable clusters with equal dimensions, in the same way as Chinese characters, to allow for mixed-script writing (one syllable always takes up one type-space no matter how many letters get stacked into building that one sound-block).
Zhuyin (sometimes called Bopomofo) is a semi-syllabary used to phonetically transcribe Mandarin Chinese in the Republic of China. After the later establishment of the People's Republic of China and its adoption of Hanyu Pinyin, the use of Zhuyin today is limited, but it is still widely used in Taiwan, which the Republic of China still governs. Zhuyin developed out of a form of Chinese shorthand based on Chinese characters in the early 1900s and has elements of both an alphabet and a syllabary. Like an alphabet, the phonemes of syllable initials are represented by individual symbols, but like a syllabary, the phonemes of the syllable finals are not; rather, each possible final (excluding the medial glide) is represented by its own symbol. For example, luan is represented as ㄌㄨㄢ (l-u-an), where the last symbol ㄢ represents the entire final -an. While Zhuyin is not used as a mainstream writing system, it is still often used in ways similar to a romanization system—that is, for aiding in pronunciation and as an input method for Chinese characters on computers and cellphones.
European alphabets, especially Latin and Cyrillic, have been adapted for many languages of Asia. Arabic is also widely used, sometimes as an abjad (as with Urdu and Persian) and sometimes as a complete alphabet (as with Kurdish and Uyghur).
The term "alphabet" is used by linguists and paleographers in both a wide and a narrow sense. In the wider sense, an alphabet is a script that is segmental at the phoneme level—that is, it has separate glyphs for individual sounds and not for larger units such as syllables or words. In the narrower sense, some scholars distinguish "true" alphabets from two other types of segmental script, abjads and abugidas. These three differ from each other in the way they treat vowels: abjads have letters for consonants and leave most vowels unexpressed; abugidas are also consonant-based, but indicate vowels with diacritics to or a systematic graphic modification of the consonants. In alphabets in the narrow sense, on the other hand, consonants and vowels are written as independent letters. The earliest known alphabet in the wider sense is the Wadi el-Hol script, believed to be an abjad, which through its successor Phoenician is the ancestor of modern alphabets, including Arabic, Greek, Latin (via the Old Italic alphabet), Cyrillic (via the Greek alphabet) and Hebrew (via Aramaic).
Examples of present-day abjads are the Arabic and Hebrew scripts; true alphabets include Latin, Cyrillic, and Korean hangul; and abugidas are used to write Tigrinya, Amharic, Hindi, and Thai. The Canadian Aboriginal syllabics are also an abugida rather than a syllabary as their name would imply, since each glyph stands for a consonant which is modified by rotation to represent the following vowel. (In a true syllabary, each consonant-vowel combination would be represented by a separate glyph.)
All three types may be augmented with syllabic glyphs. Ugaritic, for example, is basically an abjad, but has syllabic letters for /ʔa, ʔi, ʔu/. (These are the only cases in which vowels are indicated.) Cyrillic is basically a true alphabet, but has syllabic letters for /ja, je, ju/ (я, е, ю); Coptic has a letter for /ti/. Devanagari is typically an abugida augmented with dedicated letters for initial vowels, though some traditions use अ as a zero consonant as the graphic base for such vowels.
The boundaries between the three types of segmental scripts are not always clear-cut. For example, Sorani Kurdish is written in the Arabic script, which is normally an abjad. However, in Kurdish, writing the vowels is mandatory, and full letters are used, so the script is a true alphabet. Other languages may use a Semitic abjad with mandatory vowel diacritics, effectively making them abugidas. On the other hand, the Phagspa script of the Mongol Empire was based closely on the Tibetan abugida, but all vowel marks were written after the preceding consonant rather than as diacritic marks. Although short a was not written, as in the Indic abugidas, one could argue that the linear arrangement made this a true alphabet. Conversely, the vowel marks of the Tigrinya abugida and the Amharic abugida (ironically, the original source of the term "abugida") have been so completely assimilated into their consonants that the modifications are no longer systematic and have to be learned as a syllabary rather than as a segmental script. Even more extreme, the Pahlavi abjad eventually became logographic. (See below.)
Thus the primary classification of alphabets reflects how they treat vowels. For tonal languages, further classification can be based on their treatment of tone, though names do not yet exist to distinguish the various types. Some alphabets disregard tone entirely, especially when it does not carry a heavy functional load, as in Somali and many other languages of Africa and the Americas. Such scripts are to tone what abjads are to vowels. Most commonly, tones are indicated with diacritics, the way vowels are treated in abugidas. This is the case for Vietnamese (a true alphabet) and Thai (an abugida). In Thai, tone is determined primarily by the choice of consonant, with diacritics for disambiguation. In the Pollard script, an abugida, vowels are indicated by diacritics, but the placement of the diacritic relative to the consonant is modified to indicate the tone. More rarely, a script may have separate letters for tones, as is the case for Hmong and Zhuang. For most of these scripts, regardless of whether letters or diacritics are used, the most common tone is not marked, just as the most common vowel is not marked in Indic abugidas; in Zhuyin not only is one of the tones unmarked, but there is a diacritic to indicate lack of tone, like the virama of Indic.
The number of letters in an alphabet can be quite small. The Book Pahlavi script, an abjad, had only twelve letters at one point, and may have had even fewer later on. Today the Rotokas alphabet has only twelve letters. (The Hawaiian alphabet is sometimes claimed to be as small, but it actually consists of 18 letters, including the ʻokina and five long vowels. However, Hawaiian Braille has only 13 letters.) While Rotokas has a small alphabet because it has few phonemes to represent (just eleven), Book Pahlavi was small because many letters had been conflated—that is, the graphic distinctions had been lost over time, and diacritics were not developed to compensate for this as they were in Arabic, another script that lost many of its distinct letter shapes. For example, a comma-shaped letter represented g, d, y, k, or j. However, such apparent simplifications can perversely make a script more complicated. In later Pahlavi papyri, up to half of the remaining graphic distinctions of these twelve letters were lost, and the script could no longer be read as a sequence of letters at all, but instead each word had to be learned as a whole—that is, they had become logograms as in Egyptian Demotic.
The largest segmental script is probably an abugida, Devanagari. When written in Devanagari, Vedic Sanskrit has an alphabet of 53 letters, including the visarga mark for final aspiration and special letters for kš and jñ, though one of the letters is theoretical and not actually used. The Hindi alphabet must represent both Sanskrit and modern vocabulary, and so has been expanded to 58 with the khutma letters (letters with a dot added) to represent sounds from Persian and English. Thai has a total of 59 symbols, consisting of 44 consonants, 13 vowels and 2 syllabics, not including 4 diacritics for tone marks and one for vowel length.
The largest known abjad is Sindhi, with 51 letters. The largest alphabets in the narrow sense include Kabardian and Abkhaz (for Cyrillic), with 58 and 56 letters, respectively, and Slovak (for the Latin script), with 46. However, these scripts either count di- and tri-graphs as separate letters, as Spanish did with ch and ll until recently, or uses diacritics like Slovak č. The largest true alphabet where each letter is graphically independent is Georgian, with 33 letters.
Syllabaries typically contain 50 to 400 glyphs, and the glyphs of logographic systems typically number from the many hundreds into the thousands. Thus a simple count of the number of distinct symbols is an important clue to the nature of an unknown script.
The Armenian alphabet (Armenian: Հայոց գրեր Hayots grer or Հայոց այբուբեն Hayots aybuben) is a graphically unique alphabetical writing system that has been used to write the Armenian language. It was introduced around 405 AD by Mesrob Mashdots, an Armenian linguist and ecclesiastical leader, and originally contained 36 letters. Two more letters, օ (o) and ֆ (f), were added in the Middle Ages. During the 1920s orthography reform, a new letter և (capital ԵՎ) was added, which had previously been a ligature of ե and ւ, while the letter Ւ ւ was discarded and reintroduced as part of a new letter ՈՒ ու (which had previously been a digraph).
The Armenian word for "alphabet" is այբուբեն aybuben (Armenian pronunciation: [ɑjbubɛn]), named after the first two letters of the Armenian alphabet Ա այբ ayb and Բ բեն ben. The Armenian script's directionality is horizontal left-to-right, like the Latin and Greek alphabets.
Alphabets often come to be associated with a standard ordering of their letters, which can then be used for purposes of collation – namely for the listing of words and other items in what is called alphabetical order.
The basic ordering of the Latin alphabet (A B C D E F G H I J K L M N O P Q R S T U V W X Y Z), which is derived from the Northwest Semitic "Abgad" order, is well established, although languages using this alphabet have different conventions for their treatment of modified letters (such as the French é, à, and ô) and of certain combinations of letters (multigraphs). In French, these are not considered to be additional letters for the purposes of collation. However, in Icelandic, the accented letters such as á, í, and ö are considered to be distinct letters of the alphabet. In Spanish, ñ is considered a separate letter, but accented vowels such as á and é are not. The ll and ch were also considered single letters, but in 1994 the Real Academia Española changed the collating order so that ll is between lk and lm in the dictionary and ch is between cg and ci, and in 2010 the tenth congress of the Association of Spanish Language Academies changed it so they were no longer letters at all.
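As a small illustration of why collation is language-specific, here is a minimal Python sketch; the Spanish locale name es_ES.UTF-8 is an assumption that varies by platform, and the word list is invented for the example.

```python
# Minimal sketch: naive code-point sorting vs. locale-aware collation.
import locale

words = ["ñandú", "nube", "zanahoria", "ábaco"]

# Naive sort compares Unicode code points, so accented initials sort after "z".
print(sorted(words))  # ['nube', 'zanahoria', 'ábaco', 'ñandú']

try:
    locale.setlocale(locale.LC_COLLATE, "es_ES.UTF-8")  # platform-dependent name
    print(sorted(words, key=locale.strxfrm))
    # ['ábaco', 'nube', 'ñandú', 'zanahoria'] -- á sorts as a, ñ between n and o
except locale.Error:
    print("Spanish locale not installed on this system")
```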
In German, words starting with sch- (which spells the German phoneme /ʃ/) are inserted between words with initial sca- and sci- (all incidentally loanwords), instead of appearing after words with initial sz as they would if sch were treated as a single letter. This contrasts with several languages such as Albanian, in which dh-, ë-, gj-, ll-, nj-, rr-, th-, xh- and zh- (all representing phonemes and considered separate single letters) follow the letters d, e, g, l, n, r, t, x and z respectively, as is also the case in Hungarian and Welsh. Further, German words with umlaut are collated ignoring the umlaut, in contrast to Turkish, which adopted the graphemes ö and ü and where a word like tüfek would come after tuz in the dictionary. An exception is the German telephone directory, where umlauts are sorted as ä = ae, since names such as Jäger also appear with the spelling Jaeger and are not distinguished in the spoken language.
It is unknown whether the earliest alphabets had a defined sequence. Some alphabets today, such as the Hanuno'o script, are learned one letter at a time, in no particular order, and are not used for collation where a definite order is required. However, a dozen Ugaritic tablets from the fourteenth century BC preserve the alphabet in two sequences. One, the ABCDE order later used in Phoenician, has continued with minor changes in Hebrew, Greek, Armenian, Gothic, Cyrillic, and Latin; the other, HMĦLQ, was used in southern Arabia and is preserved today in Ethiopic. Both orders have therefore been stable for at least 3000 years.
The Brahmic family of alphabets used in India use a unique order based on phonology: The letters are arranged according to how and where they are produced in the mouth. This organization is used in Southeast Asia, Tibet, Korean hangul, and even Japanese kana, which is not an alphabet.
Names of letters
The Phoenician letter names, in which each letter was associated with a word that begins with that sound (acrophony), continue to be used to varying degrees in Samaritan, Aramaic, Syriac, Hebrew, Greek and Arabic.
The names were abandoned in Latin, which instead referred to the letters by adding a vowel (usually e) before or after the consonant; the two exceptions were Y and Z, which were borrowed from the Greek alphabet rather than Etruscan, and were known as Y Graeca "Greek Y" (pronounced I Graeca "Greek I") and zeta (from Greek) – this discrepancy was inherited by many European languages, as in the term zed for Z in British English. Over time names sometimes shifted or were added, as in double U for W ("double V" in French), the English name for Y, and American zee for Z. Comparing names in English and French gives a clear reflection of the Great Vowel Shift: A, B, C and D are pronounced /eɪ, biː, siː, diː/ in today's English, but in contemporary French they are /a, be, se, de/. The French names (from which the English names are derived) preserve the qualities of the English vowels from before the Great Vowel Shift. By contrast, the names of F, L, M, N and S (/ɛf, ɛl, ɛm, ɛn, ɛs/) remain the same in both languages, because "short" vowels were largely unaffected by the Shift.
In Cyrillic originally the letters were given names based on Slavic words; this was later abandoned as well in favor of a system similar to that used in Latin.
Orthography and pronunciation
When an alphabet is adopted or developed to represent a given language, an orthography generally comes into being, providing rules for the spelling of words in that language. In accordance with the principle on which alphabets are based, these rules will generally map letters of the alphabet to the phonemes (significant sounds) of the spoken language. In a perfectly phonemic orthography there would be a consistent one-to-one correspondence between the letters and the phonemes, so that a writer could predict the spelling of a word given its pronunciation, and a speaker would always know the pronunciation of a word given its spelling, and vice versa. However this ideal is not usually achieved in practice; some languages (such as Spanish and Finnish) come close to it, while others (such as English) deviate from it to a much larger degree.
The pronunciation of a language often evolves independently of its writing system, and writing systems have been borrowed for languages they were not designed for, so the degree to which letters of an alphabet correspond to phonemes of a language varies greatly from one language to another and even within a single language.
Languages may fail to achieve a one-to-one correspondence between letters and sounds in any of several ways:
- A language may represent a given phoneme by a combination of letters rather than just a single letter. Two-letter combinations are called digraphs and three-letter groups are called trigraphs. German uses the tetragraphs (four letters) "tsch" for the phoneme [tʃ] and (in a few borrowed words) "dsch" for [dʒ]. Kabardian also uses a tetragraph for one of its phonemes, namely "кхъу". Two letters representing one sound occur in several instances in Hungarian as well (where, for instance, cs stands for [č], sz for [s], zs for [ž], dzs for [ǰ]).
- A language may represent the same phoneme with two or more different letters or combinations of letters. An example is modern Greek which may write the phoneme [i] in six different ways: ⟨ι⟩, ⟨η⟩, ⟨υ⟩, ⟨ει⟩, ⟨οι⟩, and ⟨υι⟩ (though the last is rare).
- A language may spell some words with unpronounced letters that exist for historical or other reasons. For example, the spelling of the Thai word for "beer" [เบียร์] retains a letter for the final consonant "r" present in the English word it was borrowed from, but silences it.
- Pronunciation of individual words may change according to the presence of surrounding words in a sentence (sandhi).
- Different dialects of a language may use different phonemes for the same word.
- A language may use different sets of symbols or different rules for distinct sets of vocabulary items, such as the Japanese hiragana and katakana syllabaries, or the various rules in English for spelling words from Latin and Greek, or the original Germanic vocabulary.
National languages sometimes elect to address the problem of dialects by simply associating the alphabet with the national standard. However, with an international language with wide variations in its dialects, such as English, it would be impossible to represent the language in all its variations with a single phonetic alphabet.
Some national languages like Finnish, Turkish, Serbo-Croatian (Serbian, Croatian and Bosnian) and Bulgarian have a very regular spelling system with a nearly one-to-one correspondence between letters and phonemes. Strictly speaking, these national languages lack a word corresponding to the verb "to spell" (meaning to split a word into its letters), the closest match being a verb meaning to split a word into its syllables. Similarly, the Italian verb corresponding to 'spell (out)', compitare, is unknown to many Italians because spelling is usually trivial, as Italian spelling is highly phonemic. In standard Spanish, one can tell the pronunciation of a word from its spelling, but not vice versa, as certain phonemes can be represented in more than one way, but a given letter is consistently pronounced. French, with its silent letters and its heavy use of nasal vowels and elision, may seem to lack much correspondence between spelling and pronunciation, but its rules on pronunciation, though complex, are actually consistent and predictable with a fair degree of accuracy.
At the other extreme are languages such as English, where the pronunciations of many words simply have to be memorized as they do not correspond to the spelling in a consistent way. For English, this is partly because the Great Vowel Shift occurred after the orthography was established, and because English has acquired a large number of loanwords at different times, retaining their original spelling at varying levels. Even English has general, albeit complex, rules that predict pronunciation from spelling, and these rules are successful most of the time; rules to predict spelling from the pronunciation have a higher failure rate.
Sometimes, countries have the written language undergo a spelling reform to realign the writing with the contemporary spoken language. These can range from simple spelling changes and word forms to switching the entire writing system itself, as when Turkey switched from the Arabic alphabet to a Latin-based Turkish alphabet.
The sounds of speech of all languages of the world can be written by a rather small universal phonetic alphabet. A standard for this is the International Phonetic Alphabet.
- A Is For Aardvark
- Alphabet book
- Alphabet effect
- Alphabet song
- Alphabetical order
- Butterfly Alphabet
- Character encoding
- Constructed script
- English alphabet
- ICAO spelling alphabet
- List of alphabets
- Thai script
- The Origins of abc
- "Language, Writing and Alphabet: An Interview with Christophe Rico", Damqātum 3 (2007)
- Alphabetic Writing Systems
- Michael Everson's Alphabets of Europe
- Evolution of alphabets, animation by Prof. Robert Fradkin at the University of Maryland
- How the Alphabet Was Born from Hieroglyphs—Biblical Archaeology Review
Conclusions about the deep Earth
The overall oblate shape of the Earth was established by French Academy expeditions between 1735 and 1743. The Earth’s mean density and total mass were determined by the English physicist and chemist Henry Cavendish in about 1797. It was later ascertained that the density of rocks on the Earth’s surface is significantly less than the mean density, leading to the assumption that the density of the deeper parts of the planet must be much greater.
The Earth’s magnetic field was first studied by William Gilbert of England during the late 1500s. Since that time a long sequence of measurements has indicated its overall dipole nature, with ample evidence that it is more complex than the field of a simple dipole. Investigators also have demonstrated that the geomagnetic field changes over time. Moreover, they have found that magnetic constituents within rocks take on magnetic orientations as the rocks cool through their Curie point or, in the case of sedimentary rocks, as they are deposited. A rock tends to retain its magnetic orientation, so that measuring it provides information about the Earth’s magnetic field at the time of the rock’s formation and how the rock has moved since then. The field of study specifically concerned with this subject is called paleomagnetism.
Observations of earthquake waves by the mid-1900s had led to a spherically symmetrical crust–mantle–core picture of the Earth. The crust–mantle boundary is marked by a fairly large increase in velocity at the Mohorovičić discontinuity at depths on the order of 25–40 kilometres on the continents and 5–8 kilometres on the seafloor. The mantle–core boundary is the Gutenberg discontinuity at a depth of about 2,800 kilometres. The outer core is thought to be liquid because shear waves do not pass through it.
Scientific understanding of the Earth began undergoing a revolution from the 1950s. Theories of continental drift and seafloor spreading evolved into plate tectonics, the concept that the upper, primarily rigid part of the Earth, the lithosphere, is floating on a plastic asthenosphere and that the lithosphere is being moved by slow convection currents in the upper mantle. The plates spread from the mid-oceanic ridges, where new oceanic crust is formed, and are destroyed where they collide at subduction zones, plunging back into the asthenosphere. Lithospheric plates also may slide past one another along strike-slip or transform faults (see also plate tectonics: Principles of plate tectonics). Most earthquakes occur at the subduction zones or along strike-slip faults, but some minor ones occur in rift zones. The apparent fit of the bulge of eastern South America into the bight of Africa, magnetic stripes on the ocean floors, earthquake distribution, paleomagnetic data, and various other observations are now regarded as natural consequences of a single plate-tectonics model. The model has many applications; it explains much inferred Earth history and suggests where hydrocarbons and minerals are most likely to be found. Its acceptance became widespread as the economic predictions based on it bore fruit.
An extensive series of boreholes drilled into the seafloor under the Joint Oceanographic Institutions for Deep Earth Sampling (JOIDES) program has established a relatively simple picture of the crust beneath the oceans (see also undersea exploration). In the rift zones where the plates that make up the Earth’s thin crust separate, material from the mantle wells upward, cools, and solidifies. The molten mantle material that flows onto the seafloor and cools rapidly is called pillow basalt, while the underlying material that cools more slowly forms gabbros and sheeted dikes. Sediments gradually accumulate on top of these, producing a comparatively simple pattern of sediment, basaltic basement, gabbroic layering, and underlying mantle structure. Much of the heat flow from the solid Earth into the oceans results from the slow cooling of the oceanic rocks. Heat flow gradually declines with distance from the spreading centres (or with the length of time since solidification). As the oceanic rocks cool they become slightly denser, and isostatic adjustment causes them to subside slightly so that oceanic depths become greater. The oceanic crust is relatively thin, measuring only about 5–8 kilometres in thickness. Nearly all oceanic rocks are fairly young, mostly Jurassic or younger (i.e., less than 200 million years old), but relics of ocean-floor rocks have been found in ophiolite complexes as old as 3.8 billion years.
The crust within the continents, unlike the oceanic crust, is considerably older and thicker and appears to have been formed in a much more complex way. Because of its greater thickness, diversity, and complexity, the continental crust is much more difficult to explore. In 1975 the U.S. Geodynamics Committee initiated a research program to explore the continental crust using seismic techniques developed by private industry for the purpose of locating petroleum accumulations in sedimentary rocks. Since then its investigations have been conducted in a number of locales throughout the United States. Several notable findings have resulted from these studies, the most spectacular of which was the discovery of a succession of very low-angle thrust sheets beneath the Appalachian Mountains. This discovery, made from seismic reflection profiling data, influenced later theories on continent formation.
The success of the U.S. crustal studies program has spawned a series of similar efforts in Australia, Canada, Europe, India, the Tibet Autonomous Region of China, and elsewhere, and seismic investigation of the continental crust continues to be one of the most active areas of basic exploration.
The desire to detect nuclear explosions in the years following World War II led to the establishment of a worldwide network of uniform seismograph stations. This has greatly increased the number and reliability of earthquake measurements, the major source of information about the Earth’s interior. The construction of large-array seismograph stations has made it possible to determine the directions of approach of earthquake waves and to sort out overlapping wave trains. Computer processing allows investigators to separate many wave effects from background noise and to analyze the implications of the multitude of observations now available.
The assumptions made in the past that significant property variations occur mainly in the vertical direction were clearly an oversimplification. Today, investigation of the deep Earth concentrates primarily on determining lateral (horizontal) changes and on interpreting their significance. Seismic tomographic analysis (see above) records variations in the seismic velocity of Earth’s subsurface and has revolutionized the imaging and definition of mantle plumes (hot material originating from the core-mantle boundary) and subducting lithospheric plates.
Middle English chivalrie, a late 13th-century loan from the Old French word chevalerie, "knighthood, chivalry, nobility, cavalry" (11th century), the -erie abstract of chevaler "knight, horseman", from Medieval Latin caballarius (“horseman, knight”), a derivation from caballus (“horse”). Medieval Latin caballaria (“knighthood, status or fief of a knight”) dates to the 12th century.
chivalry (usually uncountable, plural chivalries)
- (now rare, historical) Cavalry; horsemen armed for battle.
- 1999, George RR Martin, A Clash of Kings, Bantam 2011, p. 529:
- ‘Most of the lords who rode with Lord Renly to Storm's End have gone over banner-and-blade to Stannis, with all their chivalry.’
- (obsolete) The fact or condition of being a knight; knightly skill, prowess.
- The ethical code of the knight prevalent in Medieval Europe, having such primary virtues as mercy towards the poor and oppressed, humility, honor, sacrifice, fear of God, faithfulness, courage and utmost graciousness and courtesy to ladies.
- Courtesy, respect and honorable conduct between opponents in wartime.
- Courteous behavior, especially that of men towards women.
- (UK, law, historical) A tenure of lands by knightly service.
Videos, stories and songs to help Grade 5 students learn about understanding integers.
Understanding integers can be easy when you use a number line. There are also some key words that can help you identify positive or negative numbers.
What are Integers?
Integers and Non-integers
Identifying numbers as integers or non-integers
Introduction to Integers
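To make the lesson's distinction concrete, here is a small Python sketch of an integer/non-integer test (the helper function and sample numbers are my own illustration, not part of the lesson):

```python
# Classify values as integers or non-integers (illustrative helper).
def classify(value):
    """Return 'integer' if the value is a whole number, else 'non-integer'."""
    # 7, -3, 0 and 4.0 are whole numbers; 2.5 and -0.75 are not.
    if float(value).is_integer():
        return "integer"
    return "non-integer"

for n in [7, -3, 0, 4.0, 2.5, -0.75]:
    print(n, "->", classify(n))
```

Here 4.0 counts as an integer value because it names a whole number, even though it is written with a decimal point.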
Electrical Engineer / Electrician Interview question
General terms used in electrical engineering
Inrush current: This is the peak current that occurs when power is first applied to a unit. It is measured in peak amperes.
Power factor: This is the ratio of the actual power usage to the apparent power usage.
For example:
Actual Power = 100 watts of power when averaged over a period of time.
Apparent Power = RMS Current X RMS Voltage.
RMS Current = 1.28 A
RMS voltage = 120Vac
Apparent Power = 1.28 A x 120 Vac
= 153.6 VA
Power Factor = 100 watts/153.6 VA
= 0.651 or 65.1%.
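The same arithmetic as the worked example above, as a short Python sketch (the variable names are my own):

```python
# Reproduce the power factor example above.
actual_power_w = 100.0  # true power, averaged over time (watts)
rms_current_a = 1.28    # RMS current (amperes)
rms_voltage_v = 120.0   # RMS voltage (volts)

apparent_power_va = rms_current_a * rms_voltage_v  # 153.6 VA
power_factor = actual_power_w / apparent_power_va  # ~0.651

print(f"Apparent power: {apparent_power_va:.1f} VA")
print(f"Power factor: {power_factor:.3f} ({power_factor:.1%})")
```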
True RMS current
The acronym RMS stands for Root Mean Square. The definition of Root Mean Square is the square root of the mean square of the variable values taken throughout one cycle.
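A minimal Python sketch of that definition, computing the RMS of one sampled cycle (the waveform and sample count are my own illustration):

```python
import math

def rms(samples):
    """Square root of the mean of the squared sample values."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# One full cycle of a sine wave: RMS equals peak / sqrt(2).
peak = 170.0  # approximate peak of a 120 Vrms mains waveform
cycle = [peak * math.sin(2 * math.pi * k / 1000) for k in range(1000)]

print(rms(cycle))           # ~120.2
print(peak / math.sqrt(2))  # ~120.2
```

For a pure sine wave the RMS value is the peak divided by the square root of two, which is why a 120 Vac supply has a peak near 170 V.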
Peak current: This is a measurement of the normal running current, but instead of an average or RMS value, the peak value is given. This is the highest value that the current reaches during a given half cycle. A typical load that draws high peak currents is of the rectifier/capacitor-input type.
True power: This is the integral of the instantaneous voltage multiplied by the instantaneous current over a half cycle.
BTU/hr stands for British Thermal Units per hour. The British Thermal Unit is equivalent to the quantity of heat necessary to raise one pound of water one degree Fahrenheit (at 39.2°F). It provides a measurement of power converted to a form that is useful for sizing heating and cooling systems.
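Since one watt is approximately 3.412 BTU/hr, sizing conversions are a one-line calculation; a small sketch (the 500 W load is just an example):

```python
def watts_to_btu_hr(watts):
    """Convert dissipated power in watts to BTU per hour (1 W ~ 3.412 BTU/hr)."""
    return watts * 3.412142

# A unit dissipating 500 W of heat:
print(f"{watts_to_btu_hr(500):.0f} BTU/hr")  # ~1706 BTU/hr
```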
We’ve travelled to the Carboniferous period, which started 359 million years ago and lasted until 300 million years ago.
I tell you what – it is hot!
You definitely don’t need a jacket in this period. Most of the land is hot and swampy – perfect conditions for trees to grow and start covering the planet.
This was incredibly important for creating what became one of our main sources of energy (and barbeques)… coal!
When these trees died, they formed thick layers of dead plants that, over many, many millions of years, got squished and turned into coal.
In fact, most of the coal in the world today comes from these trees that grew during the Carboniferous period 300 million years ago!
This was the time for lots of big trees and big animals!
Because it was so warm, lots of trees and plants grew. Large trees covered with bark and huge ferns grew in big swamps, but there wasn’t any grass yet. The atmosphere was full of oxygen because of the amount of plants that grew. This allowed plants and animals to reach HUGE sizes. When the huge trees and ferns died, they fell into water where there were no bacteria to help them decompose, and these dead plants formed peat beds. Eventually, under the weight of layer upon layer, these peat beds turned to coal.
During this period, animals began to appear on the land as well as in the sea. There was more land for them to live on, too!
Tetrapods were four-legged vertebrates that began to move onto the land. More of these evolved during the Carboniferous Period too.
Some were early amphibians that began their lives in the water and later moved onto land. Some were early reptiles that developed leathery skin as they moved to the parts of the land that were very dry. These early reptiles also developed leathery coverings for their eggs so the insides didn’t dry out while the baby inside developed.
This is not a great time for you if you’re scared of creepy crawlies!
Insects were also HUGE because of the oxygen in the air.
One of these giant insects was called Meganeura, and it was an ancestor of the dragonfly. Its wings were up to 75 cm wide – that’s three-quarters of a meter!
5 Motions of the Earth You Didn’t Know Existed – Learning Mind. We have learned from our early school days that the Earth has two motions: revolution around the Sun, which takes 365 days, 5 hours and 48 minutes (the tropical year), and rotation around its own axis, which takes 23 hours, 56 minutes and 4 seconds (the sidereal day), or 24 hours relative to the Sun (the solar day).
However, the Earth has other motions that are not well known to the public. In this article, we intend to have a glimpse of some of these motions of the planet that we live on.
Nature Unbound I: The Glacial Cycle. By Javier. Insights into the debate on whether the Holocene will be long or short. Summary: Milankovitch Theory on the effects of Earth’s orbital variations on insolation remains the most popular explanation for the glacial cycle since the early 1970s.
The Earth-Moon-Sun System, Motions of Earth (slide deck): the two main motions of Earth are rotation and revolution; rotation is the spinning of a body.
Researchers find way to measure speed of spinning object using light's orbital angular momentum.
In their paper published in the journal Science, the team describes lab experiments they conducted that allowed them to observe a frequency shift proportional to the product of the rotation frequency of an object and the orbital angular momentum of the light. Most everyone is familiar with the Doppler Effect—it's what makes the sound of sirens change in pitch as they pass by. Scientists have been using this phenomenon in various ways for years to learn more about the world around us—one application was determining that most observable stars and galaxies are moving away from us, which led to the theory of the expansion of the universe. To test their theory, the researchers fired a laser at a spinning plate in their lab then used a light detector to measure the degree of OAM.
The Cosmology/Climate Connection - How Extraterrestrial Forces Influence The Weather. References:
1. Paul R. Weissman, The Solar System and Its Place in the Galaxy [in Encyclopedia of the Solar System]
2. Matt Beedle (1999), Milankovitch Cycles [website of a web project on glaciers and glacial geology at Montana State University, no longer online]
Radio Telescope for Geodetic VLBI. Introduction: Radio telescopes are utilised in geodesy for VLBI (Very Long Baseline Interferometry), which is the most accurate geodetic technique for determining both the terrestrial and celestial reference frames (TRF & CRF). This in turn helps one to determine Earth's orientation in space.
Precise measurements of rotation and orbital angular momentum in the binary systems of stars, as a test of formation and evolution models. Principal Investigator: Piotr Waldemar Sybilski, MSc, Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences. Grant: PRELUDIUM 2. Figure 1: The author working with one of the instruments used to study binary systems; in the background, the 1.9 m telescope at the South African Astronomical Observatory (SAAO, South Africa).
Hydrogen Fine Structure.
Scientists Have Just Found a New Form of Light. This article was originally posted on Inverse.
There’s big science news out of Ireland today that fundamentally alters the way we study and investigate the nature of light: physicists from Trinity College Dublin’s School of Physics have discovered a new form of light. It’s made of photons that travel differently than any other light scientists have previously observed. The breakthrough was made during an investigation into the angular momentum of light, which describes how a light beam can rotate about the axis along which it travels. While we think of a light beam as a straight line, imagine it moving in a corkscrew motion around a central axis. (Image: a light beam with orbital angular momentum interacting with a particle.)
Chapter 6: Nuclear Structure. Abby Bickley, University of Colorado, NCSS ‘99. Additional references: Choppin (CLR), Radiochemistry and Nuclear Chemistry, 2nd ed. (slide deck)
Conservation of Angular Momentum and Kepler’s Second Law.
Without conservation laws, all these celestial objects will not obey predictable motions as they do in this universe. I am going to talk about conservation of angular momentum in this post. Any objects orbiting or rotating have angular momentum. To change angular momentum of the object, we need to apply a “twisting force”, or torque. Conservation of momentum holds that the total angular momentum in a closed system is always conserved.
Schoolphysics. The Mechanical Universe: Energy and Eccentricity. Kepler's Laws. Ellipses and Elliptic Orbits.
Just as banks store away only the most valuable possessions in the most secure safes, cells prioritise which genes they guard most closely, researchers at the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-EBI) have found. The study, just published online in Nature, shows that bacteria have evolved a mechanism that protects important genes from random mutation, effectively reducing the risk of self-destruction. The findings answer a question that has been under debate for half a century and provide insights into how disease-causing mutations arise and pathogens evolve.
"We discovered that there must be a molecular mechanism that preferentially protects certain areas of the genome over others," says Nicholas Luscombe, who led the research at EMBL-EBI. "If we can identify the proteins involved and uncover how this works, we will be even closer to understanding how mutations that lead to diseases like cancer can be prevented."
Mutations are the reason each of us is unique. These changes to our genetic material are at the root of variation between individuals, and between cells within individuals. But they also have a darker side. If it affects an important gene -- for example, rendering a tumour-suppressing gene useless -- a mutation can have disastrous consequences. Nevertheless, protecting all genes from mutation would use up too many of the cell's resources, just like holding all deposits in maximum-security safes would be prohibitively expensive. Iñigo Martincorena, a PhD student in Luscombe's lab, has now found that cells evolved a 'risk management' strategy to address this issue.
Looking at 120,000 tiny genetic mutations called single nucleotide polymorphisms (SNPs) in 34 strains of the bacterium E. coli, the scientists were able to quantify how random the mutation rate was in different areas of the bacterial genomes. Their results showed that key genes mutate at a much lower rate than the rest of the genetic material, which decreases the risk of such genes suffering a detrimental mutation. "We were struck by how variable the mutation rate appears to be along the genome," says Martincorena. "Our observations suggest these bacteria have evolved a clever mechanism to control the rate of evolution in crucial areas of the genome."
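As a toy illustration of the kind of per-gene comparison being described, the sketch below computes SNP density for essential versus other genes; the gene names and counts are invented, and this is not the study's data or its statistical method:

```python
# Toy per-gene SNP density comparison (invented numbers).
genes = {
    # name: (snp_count, gene_length_bp, is_essential)
    "geneA": (12, 3000, True),
    "geneB": (85, 3200, False),
    "geneC": (9, 2500, True),
    "geneD": (70, 2800, False),
}

def density(snps, length_bp):
    return snps / length_bp  # SNPs per base pair

essential = [density(s, l) for s, l, ess in genes.values() if ess]
others = [density(s, l) for s, l, ess in genes.values() if not ess]

print("essential genes:", sum(essential) / len(essential))
print("other genes:", sum(others) / len(others))
```

In this toy data the essential genes show a several-fold lower SNP density, which is the qualitative pattern the study reports for key genes in E. coli.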
Using population genetics techniques, the researchers were able to disentangle the effects of mutation rate and natural selection on mutations, settling a long-standing debate in the field. Scientists have long thought that the chances of a mutation occurring were independent of its value to an organism. Once the mutation had occurred, it would undergo natural selection, spreading through the population or being eliminated depending on how beneficial or detrimental the genetic change proved to be.
"For many years in evolution there has been an assumption that mutations occur randomly, and that selection 'cleans them up'," explains Martincorena. "But what we see here suggests that genomes have developed mechanisms to avoid mutations in regions that are more valuable than others."
Observations from studies of cancer genomes suggest that similar mechanisms may be involved in the development of cancers, so Luscombe and colleagues would now like to investigate exactly how this risk-managing gene protection works at a molecular level, and what role it may play in tumour cells.
Amanda Podany here takes readers on a vivid tour through a thousand years of ancient Near Eastern history, from 2300 to 1300 BCE, paying particular attention to the lively interactions that took place between the great kings of the day. Allowing them to speak in their own words, Podany reveals how these leaders and their ambassadors devised a remarkably sophisticated system of diplomacy and trade. What the kings forged, as they saw it, was a relationship of friends - brothers - across hundreds of miles. Over centuries they worked out ways for their ambassadors to travel safely to one another's capitals, they created formal rules of interaction and ways to work out disagreements, they agreed to treaties and abided by them, and their efforts paid off with the exchange of luxury goods that each country wanted from the other. Tied to one another through peace treaties and powerful obligations, they were also often bound together as in-laws, as a result of marrying one another's daughters. These rulers had almost never met one another in person, but they felt a strong connection - a real brotherhood - which gradually made wars between them less common. Indeed, any one of the great powers of the time could have tried to take over the others through warfare, but diplomacy usually prevailed and provided a respite from bloodshed. Instead of fighting, the kings learned from one another, and cooperated in peace. A remarkable account of a pivotal moment in world history - the establishment of international diplomacy thousands of years before the United Nations - Brotherhood of Kings offers a vibrantly written history of the region often known as the "cradle of civilization."
Embryonic stem cells
Embryonic stem cells are taken from a young embryo, generally from the inner cell mass of a blastocyst. They may also be derived from early blastomeres taken from a "morula", an embryo of 4 to 32 cells. An embryo reaches the blastocyst stage about 4–5 days after fertilization, at which point it contains about 50–150 cells. However, implantation (attaching to the uterine wall) does not occur until at least 9 days after fertilization. Neurulation (primordial development of the central nervous system) occurs about a week later. Embryonic stem cells are harmful to the recipient because they tend to grow tumors, and safer alternatives that do not use embryos are available, but many abortion supporters still insist on using embryonic stem cells in order to justify the concept of abortion.
Embryonic stem cells can differentiate into the three "germ layers": ectoderm, endoderm and mesoderm. The 220 types of cells in humans are all based on these germ layers. Some scientists have effectively shown that embryonic stem cells have greater differentiation potential, and divide in greater numbers, than adult stem cells.
In 2001 President Bush allowed federal funding for research performed only on the 60 human embryonic stem cell lines that were in existence at the time. Cell lines drift genetically and morphologically over time, and due largely to the ban on producing more embryonic stem cell lines with NIH funds, researchers focused on producing embryonic-like stem cells from adult (differentiated) cells; these cells are known as induced pluripotent stem cells (iPS cells). These cells have similar properties to embryonic stem cells but do not require the destruction of, or any use of, human embryos: humans donate their own cells (e.g., skin cells), and through slight genetic manipulation these cells can be reverted to an essentially undifferentiated state and gain the ability to become nearly any cell in the body (i.e., pluripotency). This major advance in the stem cell and medical fields came much sooner than most researchers expected; this can be credited largely to the Bush ban regarding human embryonic stem cells.
Despite the strong genetic similarities, the production of chimpanzee stem cell lines has been problematic, though stem cells have been produced for other non-human primates.
- Those who value human life from the point of conception oppose embryonic stem cell research because the extraction of stem cells from this type of embryo requires its destruction. In other words, it requires that a human life be killed.
Many liberals, on the other hand, claim that all the embryos used for stem cell research are already condemned to death because most of them are the byproduct of in vitro fertilization, a procedure that requires the production of a number of embryos far in excess of the number that are successfully implanted. Few parents choose to cryogenically store these excess embryos, and with no further use for them, they are often destroyed. Thus, liberals claim that embryonic stem cell research does not result in the destruction of any additional embryos. Conservatives argue that unused embryos can be adopted within a 10-year window of viability and thus saved from destruction; however, embryo adoption is very rare and far too few prospective adoptive parents exist for all unused embryos to be adopted.
Excerpt from Field Guide to Spectroscopy
A thermal detector absorbs radiation and changes temperature. Because the power in absorbed radiation is typically rather small (<10⁻⁷ W), the detector itself should be small so that it has a low heat capacity.
A thermocouple is the joining of two dissimilar-metal or metal alloy wires or films. When this occurs, a potential difference is formed between the other ends of the metals. Since potential differences are temperature-dependent (called the Seebeck effect), temperature values or changes in temperatures can be determined by calibration.
- Type J: Fe vs. Cu-Ni
- Type K: Ni-Cr vs. Ni-Al
- Type E: Ni-Cr vs. Cu-Ni
- Type T: Cu vs. Cu-Ni
- Type S, R: Pt-Rh vs. Pt
- Type N: Ni-Cr-Si vs. Ni-Si-Mg
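As a rough illustration of the Seebeck relation, a measured thermocouple voltage can be converted to a temperature difference with a linear approximation; the ~41 µV/°C figure for a type K junction is an approximate room-temperature sensitivity, and accurate work uses the standard polynomial reference tables instead:

```python
# Linear Seebeck approximation: V ~ S * dT.
SEEBECK_TYPE_K = 41e-6  # volts per degree C, approximate near room temperature

def delta_t_from_voltage(v_measured, seebeck=SEEBECK_TYPE_K):
    """Estimate the temperature difference between the two junctions."""
    return v_measured / seebeck

print(delta_t_from_voltage(2.05e-3))  # 2.05 mV -> ~50 degree C difference
```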
A bolometer is a semiconductor or thin metal strip whose resistance changes with temperature (decreasing with rising temperature for a semiconductor, increasing for a metal). They are small, and typically painted black to better absorb radiation.
A Golay detector is a small pneumatic chamber filled with gas and covered with a thin membrane. When radiation strikes the detector, the gas warms, increasing the internal pressure and deforming the membrane. Deflection of the membrane can be measured mechanically or optically.
A pyroelectric detector uses a crystal of a pyroelectric material, which has a strong temperature-dependent electric polarization. The change in electric polarization causes a measurable current, which changes fast enough to respond to the output of an interferometer. The most common material used is deuterated triglycine sulfate (DTGS). Lithium tantalate (LiTaO3) and lead zirconate titanate (PZT) are also used.
The word envelope is derived from the French word enveloppe (from envelopper, which means to envelop).
There are a number of difficulties in spelling this tricky word. Native French speakers often struggle with envelope in English because it has one p rather than two. But even native English speakers can have trouble: namely, understanding when to use envelope and when to use envelop.
Envelope (with an e, pronounced Ehn-vuh-LOPE or Ahn-vuh-LOPE) is a noun meaning a wrapper or enclosure. When applied to aircraft or other technology, it means a set of accepted performance limits. This is where we get the phrase “pushing the envelope.”
The explorers were excited when the new spacecraft was completed. They hoped to push the envelope of space exploration during their upcoming voyage.
Envelop (without the e, pronounced ehn-VEH-lup) is a verb meaning to completely enclose or surround something. Like many other verbs, -ed is added to the end when it is used in the past tense (enveloped).
The black velvet night enveloped the explorers’ spacecraft as they sped away from the Earth to a faraway galaxy.
For several years, Earth heard nothing from the brave pioneers. The head of the Space Exploration Agency felt as if he were enveloped in despair. His daughter had insisted on joining the outbound team, and now she was lost to him. He wondered why he even bothered to come in to the office any more.
Then one day he walked in and discovered a strangely glowing envelope on his desk. He carefully opened it up and read the words, “We made it, Dad!”
There was a blaze of warm light, and he felt his daughter’s arms envelop him in an enormous hug.
Bonus Word: Endeavour
Endeavour is another tricky word to spell, because the ending is spelled -our in British English and -or in American English. Even NASA had trouble getting this one sorted out.
Which spelling do you prefer, endeavor or endeavour? And where will you go exploring today?
Picture of space shuttle Endeavour from NASA
Stay tuned for tomorrow’s post, where I will flatten the formidable letter F…
Indian temple architecture can be referred to as a classical form of Indian art and cultural richness, which is entirely reflected in the chiselled wonders of various temples. The architecture found in ancient Indian temples exhibits the country's ancient, yet prosperous and splendid culture. These temples, some dating back more than 1700 years, flaunt meticulous carving and sculpture, bearing testimony to the rare craftsmanship and creativity of the artisans, sculptors and artists of the India of yesteryears. The temple architecture also furnishes ample evidence of the vision of emperors and rulers of bygone periods, who have successfully left behind a heritage that modern India is proud to be a part of.
The sthapathis and shilpis developed India's temple architecture. Hindu temple architecture in ancient India is believed to have evolved over more than two thousand years. This architectural development took place within strict guidelines derived solely from religious considerations; the architect was bound to the ancient principal dimensions, frameworks and strict rules, which have remained essentially untouched over time. Following this set pattern, the architectural elements and ornamental details of the Hindu temple began their long journey in early wood, timber and thatched constructions. The pattern then persisted for centuries in one form or another in stone structures, even after the original purpose and context had been lost. This persistence can be seen in the horseshoe-shaped window, which can be traced back to the chaitya-arch doorway first employed at the Lomash Rishi cave in the Barabar Hills in the 3rd century B.C. It was later transformed into a dormer window, known as a gavaksha. In due course, the gavaksha was used strictly as a decorative design of lattice-like forms, as seen on the towers of medieval Hindu temples.
Throughout the greater part of India, the sanctuary as a whole is known as the vimana; it is a common structure in the major temples of North India built during the Rajput period. The most complete illustrations of the fully formed temple structure are the tenth-century examples at Khajuraho, Central India. In temple architecture the religious motive was predominant. Temple construction followed certain standards as far as structure and building procedure were concerned. The masons showed judicious observance of the laws of gravity, an appreciation of the grandeur of mass and the rich value of shadows. Elegant proportions, graceful contours and rich surface treatment are features especially of the North Indian temples. The halls are richly decorated with sculptures dedicated to Mahadeva, Lord Vishnu and Jagdamba and to the Jain deities. Many Jain temples were also built during this period; Mount Abu has many Jain temples.
Royal patronage was another exceedingly significant factor in the aesthetic evolution of Indian temple architecture, and regional styles are often distinguished by the dynasty that gave rise to them. The Pallavas, Cholas, Hoysalas, Guptas, Chalukyas and Chandelas were royal patrons whose contributions remain the pride of ancient Indian architecture to this date.
Indian architecture witnessed a glorious era during the rule of the Chalukyas of Badami. The Badami Chalukyas established the foundations of cave temple architecture on the banks of the Malaprabha River. The styles include Aihole, Pattadakal and Badami. The sites were built out of sandstone cut into enormous blocks.
Climate change poses the greatest threat to medium-sized predators such as foxes, as it forces them to spend more time hunting for food, British researchers have found.
Medium-sized carnivores, which generally weigh between one and ten kilograms, are more vulnerable because they spend the most time foraging.
As the climate affects their prey they must spend more time hunting to survive, scientists said.
They also found that failing to diversify their prey could put a species at greater risk from climate change.
Ecologists had previously believed foraging time decreased as animal size increased, but the team discovered that wasn’t the case.
Co-author Dr Chris Carbone, from the Zoological Society of London, said: “Medium-sized predators are forced to search for food for longer periods of time on a daily basis because they tend to feed on prey that are small compared to their own size.
“Prey that are much smaller than a predator are hard to find and catch, and therefore do not easily satisfy the predator’s energy needs and provide insufficient ‘bang for the buck’.”
The research, published in the journal Nature Ecology and Evolution, looked at previous studies of 73 species of land-based carnivores.
Activity data was gathered using tracking methods including radio collars and GPS, then a mathematical model was used to predict the risk of climate change on each.
Species ranged from small predators like weasels to some of the largest, such as tigers, and the scientists found that the medium-sized species spent the greatest part of their day foraging.
Other examples of mid-size predators included the Malay civet and Iriomote cat.
Study co-author Dr Samraat Pawar, from Imperial College London, said: “We propose a simple mathematical model that predicts how foraging time depends on body size.
“This can help predict potential risks to predators facing environmental change.
“Habitat changes can mean that predators have to move more to find the same amount of food, causing them greater stress.
“This impacts the health of the individual, and therefore the health of the population.
“Our approach could be used to better understand the diets of other groups of species as well improving our knowledge of threats of climate change and habitat loss in a wider range of species.”
Co-author Matteo Rizzuto, formerly a Master’s student in the Department of Life Sciences at Imperial, said each species’ vulnerability will also depend on the kind of prey they feed on.
He added: “If they are able to adapt their diet and diversify their prey, they may fare better.”
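The paper's actual model is not reproduced here, but the hump-shaped relationship the researchers describe (longest foraging times at intermediate body sizes) can be sketched with an invented toy curve; every coefficient below is illustrative only:

```python
import math

# Toy hump-shaped foraging-time curve over predator body mass (kg).
# Peak location, width and height are invented, not fitted values.
def foraging_hours(mass_kg, peak_mass=5.0, width=1.2, max_hours=12.0):
    """Longest daily foraging near 'medium' masses, shorter at the extremes."""
    x = math.log10(mass_kg / peak_mass)
    return max_hours * math.exp(-((x / width) ** 2))

for m in [0.1, 1, 5, 50, 200]:  # roughly weasel to tiger scale
    print(f"{m:>6} kg -> {foraging_hours(m):4.1f} h/day")
```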
Loading Module: Cloning |====|100%
Cloning was first attempted in the 20th century, and quickly became a "hot" topic. People could not agree on whether cloning was beneficial or detrimental to humanity. The argument extended into the 21st and 22nd centuries, when strict bans against human cloning were put into effect.
There were exceptions, such as the 2088 Organ Reproduction Act, which allowed the reproduction of organs for patients that required a replacement. This meant that organ donation became a thing of the past, as all patients simply received cloned organs instead.
In the 22nd century, genetic modification and cloning of plants and animals became extremely common. This was due to the discovery of "genetic scrambling", which overcame the problem of cloned animals' and plants' genetic material degrading with each successive cloning by creating a new set of genetic material each time. The clones were therefore not exact copies, but rather new versions of the original. Farms also started housing research laboratories, as food across the globe became scarce. By the end of the 22nd century, every human was eating cloned food.
The human cloning restriction was partially lifted in 2340, when the military started experimenting with "cloned soldiers", using rapid age acceleration. These experiments were successful, and even today the military has a Clone division. These clones have few freedoms, and in order to stay exempt from the cloning laws and restrictions, they are created with almost child-like IQs. At birth they are implanted with knowledge, such as weaponry skills, and they are taught to be loyal to the death to the UTW. These clones are "retired" at the age of 10, because after that age their mental conditioning snaps and most clones develop free will.
Today, cloning is still mostly outlawed, except for the UTW and the military. Clones are not considered to be "true humans" and therefore do not have the same rights as humans do. The problem is that clones are usually so hard to distinguish from the real thing that without a genetic test it is impossible to know who is a clone; a genetic profile, however, will quickly reveal one, as genetic markers are placed everywhere in a clone's genetic code.
Two main streams of Social Studies exist:
10-1, 20-1, and 30-1 = Academic in content, especially in reading and writing.
10-2, 20-2, and 30-2 = The academic content is not studied as extensively as in the above "-1" stream.
The primary goal of Social Studies is to produce active, responsible citizens. Students and teachers do this by asking questions, talking to one another and accessing technology in addressing the issues that arise in class. By studying the past, the distribution of wealth and the face of this place called earth, students will have an understanding of the world we live in today.
Social Studies 10-1: Perspectives on Globalization 5 credits
Students will explore globalization, the process by which the world's citizens are becoming increasingly connected and interdependent. Students will explore the origins of globalization, the implications of economic globalization and the impact of globalization internationally on lands, cultures, human rights and quality of life. A multiple perspectives approach will allow students to examine the effects of globalization on peoples in Canada and other locations, including the impact on Aboriginal and Francophone communities. Students will formulate individual responses to emergent issues related to globalization. Globalization is a dynamic process affecting environments, economies, political systems and cultures throughout the world. The extent to which the effects are beneficial or detrimental is a subject for research and informed discussion. Students have an opportunity to explore the relationships among globalization, citizenship and identity and to enhance skills for citizenship in a globalizing world.
To what extent should we embrace globalization?
Social Studies 10-2: Living in a Globalizing World 5 credits
Students will examine globalization, the process by which the world is becoming increasingly connected and interdependent. They will explore historical aspects of globalization and the impact that globalization has on their lives and the lives of others. Through a multiple perspectives approach, students will examine the effects of globalization on peoples in Canada and throughout the world, including the impact on Aboriginal and Francophone communities. Students will develop skills to respond to issues emerging in an increasingly globalized world. Globalization is an ongoing process that is creating major economic, environmental, political, social and cultural change around the world. People disagree as to whether globalization benefits or harms humanity. It is important that students have the opportunity to explore the relationships among globalization, citizenship and identity to better prepare for citizenship in a globalizing world.
Should we embrace globalization?
Social Studies 20-1: Perspectives on Nationalism 5 credits
Students will explore the complexities of nationalism in Canadian and international contexts. They will study the origins of nationalism and the influence of nationalism on regional, international and global relations. The infusion of multiple perspectives will allow students to develop understandings of nationalism and how nationalism contributes to the citizenship and identities of peoples in Canada. While nationalism has historically examined the relationship of the citizen to the state, contemporary understandings of nationalism include evolving individual, collective, national and state realities. Exploring the complexities of nationalism will contribute to an understanding and appreciation of the interrelationships among nation, nationalism, internationalism, globalization, and citizenship and identity. Developing perspectives of others will encourage students to develop personal and civic responses to emergent issues related to nationalism.
To what extent should we embrace nationalism?
Social Studies 20-2: Understandings of Nationalism 5 credits
Students will examine historical and contemporary understandings of nationalism in Canada and the world. They will explore the origins of nationalism as well as the impacts of nationalism on individuals and communities in Canada and other locations. Examples of nationalism, ultranationalism, supranationalism, and internationalism will be examined from multiple perspectives. Students will develop personal and civic responses to emergent issues related to nationalism. As perspectives on personal identity continue to evolve, so do understandings of nationalism and what it means to be a member of a collective, community, state and nation. This evolution is significant in the Canadian context, as nationalism contributes to an appreciation and awareness of the interrelationships among nationalism, internationalism, citizenship and identity.
Should we embrace nationalism?
Social Studies 30-1: Perspectives on Ideology 5 credits
Students will explore the origins and complexities of ideologies and examine multiple perspectives regarding the principles of classical and modern liberalism. An analysis of various political and economic systems will allow students to assess the viability of the principles of liberalism. Developing understanding of the roles and responsibilities associated with citizenship will encourage students to respond to emergent global issues.
To what extent should we embrace an ideology?
Social Studies 30-2: Understandings of Ideology 5 credits
Students will examine the origins, values and components of competing ideologies. They will explore multiple perspectives regarding relationships among individualism, liberalism, common good and collectivism. An examination of various political and economic systems will allow students to determine the viability of the values of liberalism. Developing understandings of the roles and responsibilities associated with citizenship will encourage students to respond to emergent global issues.
To what extent should we embrace an ideology?
INTRODUCTION TO CYTOKINES: The term 'cytokine' originates from a combination of two Greek words, 'cyto' meaning 'cell' and 'kinos' meaning 'movement'. These are signalling molecules that help in cell-to-cell communication in immune responses and stimulate the movement of cells toward sites of infection, trauma and inflammation. Cytokines are composed mostly of peptides, proteins and glycoproteins. They are produced by a wide range of immune cells such as lymphocytes, macrophages and mast cells, as well as fibroblasts, endothelial cells and stromal cells. Types of cytokines: Following are different types of cytokines: 1. Chemokines: These are the type of cytokines that bring cells to the site of infection utilizing the process of chemotaxis.
Create a 1-page study guide that will show an understanding of transport across plasma membranes, cell respiration, and protein synthesis. The plasma membrane surrounding cells is where the exchange of substances inside and outside of cells takes place. Some substances need to move from the extracellular fluid outside cells to the inside of the cell, and some substances need to move from the inside of the cell to the extracellular fluid. Some of the proteins embedded in the plasma membrane help to form openings (channels) in the membrane. Through these channels, some substances such as hormones or ions are allowed to pass.
Activation of the complement cascade triggers opsonisation, chemotaxis, inflammation and increased capillary permeability, and cytolysis. Three different pathways can be used to activate the complement system: the classical pathway, the alternative pathway, and the mannose-binding lectin (MBL) pathway. The alternative and lectin pathways are initiated by microbes in the absence of antibody. The classical pathway, however, is initiated by certain isotypes of antibody attached to antigens. Activation of each of the pathways causes the activation of C3, the most abundant complement protein in the plasma, which is cleaved into a larger fragment, C3b, and a smaller fragment, C3a.
Most actin molecules work together to give support and structure to the plasma membrane and are therefore found near the cell membrane. They can generate locomotion in cells such as white blood cells and the amoeba, providing phagocytosis, and they interact with myosin ("thick") filaments in skeletal muscle fibers to provide the force of muscular contraction. ● Intermediate Filaments: These cytoplasmic fibers average 10 nm in diameter, and are thus "intermediate" in size between actin filaments (8 nm) and microtubules (25 nm). Intermediate filaments play similar roles in the cell, providing a supporting framework within the cell.
Signal transduction happens when a membrane protein has a binding site with a specific shape that fits the shape of a chemical messenger, such as hormones and other extracellular substances that trigger changes in cellular activity. Another function is cell-to-cell recognition, which occurs when some proteins serve as identification tags that are specifically recognized by other cells. The cell wall is a rigid structure made mainly of cellulose, a tough polysaccharide that helps plants to maintain their shape.
Structure – The nucleus is encapsulated and protected by the nuclear envelope, which is a double lipid bilayer. Within the envelope is the nucleoplasm, which holds chromatin, a complex of proteins and DNA, and in the center of the nucleus is the nucleolus. The nucleus also has pores in the outer membrane of the nuclear envelope, which regulate the entry and exit of certain macromolecules (Campbell, 2005, pg. 102). Lysosome Function – Lysosomes are membranous sacs filled with enzymes that are used to digest different kinds of macromolecules within a cell (Campbell, 2005, pg. 107). Lysosomes are essentially a digestive system for the cell, both breaking down materials taken in from outside the cell and breaking down obsolete components of the cell itself (Cooper).
Introduction: Many chemical reactions take place in each individual human cell, all performing the necessary functions for such a large, complex, multicellular organism. How do these reactions occur? Chemical reactions involve the breaking and reforming of chemical bonds between molecules (substrates), which are transformed into different molecules (products). Enzymes are biological catalysts. They help to increase the rate of chemical reactions.
These can then be transported in the appropriate form to the cells in the body through the circulatory system. Energy is required for the growth and repair of our cells and tissues, because it drives the biochemical reactions that build large molecules from simpler ones. This energy is needed in order to build proteins from amino acids. Active transport of substances into or out of our cells also happens through this energy; an example would be the transport of amino acids from the small intestine into the bloodstream. Active transport often takes place against a diffusion gradient, which allows the body to control its internal environment more efficiently. When we move, our body uses energy, and this occurs on several levels: • inside our cells – chromosomes • whole cells – sperm swimming • tissues – muscles contracting • whole organs – heart beating • part or whole organisms – walking. Since the blood found in the human body is warm, energy is used in order to maintain its temperature; we use 70% of the energy from respiration to do so, which makes sure the temperature stays at 37 °C.
This occurs when a segment of the plasma membrane surrounds a particle or large molecule, encloses it, and brings it into the cell. Two very important types of endocytosis are phagocytosis and pinocytosis. During phagocytosis, cellular projections called pseudopods engulf particles and bring them into the cell. Phagocytosis is used by white blood cells to destroy bacteria and foreign substances (see Figure 16.8, page 461, and further discussion in Chapter 16). In pinocytosis, the plasma membrane folds inward, bringing extracellular fluid into the cell, along with whatever substances are dissolved in the fluid.
Enzymes are proteins folded into complex shapes that allow smaller specific molecules to fit into them like a lock and key. The place where the substrate molecules fit is called the active site; chemical reactions occur at the active site. Catalase is found in all cells and protects them from a dangerous waste chemical called hydrogen peroxide. It breaks the hydrogen peroxide down to water and oxygen. The substrate (hydrogen peroxide) and the catalase molecules are continuously on the move. Every so often they will collide so that the substrate molecule fits into the enzyme's active site.
What are Net Carbs?
In recent years, carb counting has become a major point of dietary emphasis. With many low-carb diets such as keto and Atkins becoming more commonplace, it’s crucial to account for carbohydrates properly.
The problem is, there’s an ongoing debate between whether carbs or “net carbs” should be counted as part of one’s macronutrient profile. While some groups argue total carb count is a more precise measurement, others disagree with this sentiment.
Not all carbohydrates have the same effect from a dietary perspective. While some are more digestible, others tend to pass through the body without being absorbed.
It’s important to understand the differences between carbs and net carbs so that you can determine which form of measurement is most conducive to your lifestyle and goals.
Let’s take a deeper look at various types of carbs and what roles they play within your body.
Let’s Define a Carbohydrate
Before we dive into net carbs, it helps to know what a carb actually is in the first place.
“Carbohydrate” is an umbrella term for molecules containing carbon, hydrogen, and oxygen atoms. There are two main types of carbs found in the foods we eat: simple carbs and complex carbs.
Simple carbs are found mostly in sugary foods and contain only one or two sugar units, which affects how quickly the food is digested and absorbed. Some examples are fruits and foods containing table sugar, like soda or cookies.
Complex carbs are slower-digesting in nature and contain several sugar units linked together. They are often found in whole grains, starchy vegetables, white and sweet potatoes, carrots, and oats.
Both simple and complex carbs can be used as an energy source or stored as fat.
If a person consumes more carbs than needed, the body will convert excess carbs to fat.
There are other types of carbs which are not readily digestible by the body.
Fiber is different from the other two forms of carbohydrates. While it is similar in molecular profile, it does not provide a direct form of energy—it passes through the body without being digested and absorbed into the bloodstream for energy. The main role of fiber is to feed friendly bacteria in the digestive system.
Sugar alcohols also fall under the carbohydrate umbrella. They are typically used as a form of sweetener and contain about half the calories of traditional carbohydrates. They are added to food as a reduced-calorie sweetener and as a bulking agent.
Although each of these is considered a carb, the body handles them differently. It’s these differences that suggest not all carbs are created equal, and we shouldn’t look at them as all playing the same role within our body.
Net Carbs Explained
Net carbs refers to carbs that are absorbed and processed by the body.
Simple and complex carbs are found in foods we eat. They are broken down in the small intestine and later become used as a source of energy in the body.
Those other types of carbs, such as fibers and sugar alcohols, can’t be broken down as easily.
Because our bodies don’t actually absorb these types of carbs (to use them for energy), many people subtract fibers and sugar alcohols from the overall carbohydrate amount.
This is often where debate tends to arise.
While some count every single carb in their diet, others subtract fiber and sugar alcohols because the body does not retain these macronutrients in the same manner.
Why is Fiber Different?
Unlike other forms of carbohydrates, fiber is not directly used as a natural fuel source for the body. It passes directly into the colon and can’t be broken down by enzymes in the digestive tract. Because of this, less than half the total carbohydrates from dietary fiber are metabolized to glucose.1
Fiber is best known for its ability to relieve constipation, especially soluble fiber (hello fruit, oatmeal, avocados, and broccoli!), but it can also provide several other health benefits.
Fiber consumption may be linked to lowering the risk of developing serious diseases.
The FDA Daily Value for fiber (the daily recommended amount) is 25 grams per day based on a 2000 calorie diet.
Even within fiber, there are two main types—insoluble and soluble fiber.
Insoluble fiber does not dissolve in water and can help speed the passage of bowel movements, thereby preventing constipation. It contains no calories, does not spike blood glucose or insulin levels, and isn’t broken down by the gut.4
Insoluble fiber helps keep bowel movements regular and helps maintain a healthy digestive system. It’s typically found in the stalks, skins, and seeds of foods such as whole grains, nuts, and veggies.
Insoluble fiber is an important part of a healthy diet that can help support several bodily functions.
Soluble fiber dissolves in water and is digested by bacteria in the large intestine.
One of the benefits of soluble fiber is its ability to help you feel full (and potentially, this can help you lose weight). A study on soluble fiber found that consuming 14g per day was associated with a 10% decrease in energy consumption (less food eaten) and weight loss of 1.9kg over a four-month period.5
As soluble fiber passes into your colon, it is fermented into short-chain fatty acids, which can help improve gut health and reduce inflammation.6 A meta-analysis of people with high fiber consumption also found that soluble fiber can help reduce the risk of developing type 2 diabetes.7
If you’re on the keto diet, it can sometimes be difficult to find low-carb sources of fiber. Good thing H.V.M.N. has you covered.
Both our MCT Oil Powder and Keto Collagen+ contain a base of acacia fiber, which is rich in soluble fiber. The best part? Both products contain zero net carbs, making them a no-brainer to add to your daily nutrition strategy for a great source of fiber and keto energy.
What are Sugar Alcohols?
Sugar alcohols are processed in a manner similar to fiber—they are not directly absorbed by the body. They’re found naturally in foods and can be used as low calorie sweeteners and bulking agents. Typically, they are used as sugar substitutes that contain about half the amount of calories as regular sugar.
As the name suggests, they’re a hybrid of sugar molecules and alcohol molecules.
Their chemical structure is similar to sugar’s, so they can activate the sweet taste receptors on your tongue. That’s part of their allure: sweet taste, far fewer calories.
You will typically find sugar alcohols in foods such as chewing gums, ice creams, frostings, cakes, cookies, candies, as well as some foods that claim to be low in carbs or sugar.
The most common sugar alcohols used today include xylitol, erythritol, sorbitol, and maltitol.
Erythritol has the least amount of net carbs: about 90% of it is excreted in urine and only 10% enters the colon.8
There are limited studies available on sugar alcohols, but no known studies have shown raised insulin or blood sugar levels as a result.9 Studies have shown, however, that some individuals do not process sugar alcohols well and report excessive gas and sometimes diarrhea.10 This is because sugar alcohols are fermented by the gut microbiome (fermentation produces gas as a by-product) and because they affect the osmolarity within your intestine (they cause excess water to end up in your stool/colon). This response often depends on the amount consumed, so if you’re thinking of adding sugar alcohols to your diet or increasing the amount, do so cautiously.
How to Calculate Net Carbs
If you choose to use net carbs as a basis of your dietary calculations for macronutrients, it will help to make sure you are accurately accounting for them. Net carbs are calculated differently for both fiber and sugar alcohols. Be sure to read nutrition labels closely as “net carbs” are not listed separately.
Net Carbs from Fiber
Calculating net carbs using both carbohydrate and fiber amounts is super simple.
If you are eating whole foods containing fiber, simply subtract the fiber from total carbs to calculate the net carbs.
For example, an apple contains 25g of carbohydrates and 5g of fiber, giving 20g of net carbs. This is a little more difficult with whole foods because they don’t have nutrition labels, but a simple online search should give you a pretty accurate estimate.
If you’re consuming foods with a nutrition label, both carbs and fiber should be listed and thus, net carbs easily calculated.
Net Carbs from Sugar Alcohols
In most cases, half the carbs from sugar alcohols can be subtracted from total carbs. For example, if a food contains 8g of sugar alcohols, you can subtract 4g from total carbs to determine net carbs.
One exception to the rule is erythritol: its carbs can be subtracted from total carbs completely.
Most of the time, you’ll be subtracting fiber from the carbohydrate amount to determine net carbs. Sugar alcohols are less common, but check nutrition labels to see if what you’re eating contains them, and make them part of your calculation when determining net carbs.
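To make the arithmetic concrete, here is a minimal Python sketch of the subtraction rules described above; the function and its inputs are illustrative, not taken from any nutrition database or API.

```python
def net_carbs(total_carbs_g, fiber_g=0.0, sugar_alcohols_g=0.0, erythritol_g=0.0):
    """Estimate net carbs in grams: subtract all fiber, all erythritol,
    and half of the remaining sugar alcohols from total carbs."""
    other_sugar_alcohols = sugar_alcohols_g - erythritol_g
    net = total_carbs_g - fiber_g - erythritol_g - 0.5 * other_sugar_alcohols
    return max(net, 0.0)  # never report negative net carbs

# The apple example: 25 g total carbs and 5 g fiber -> 20 g net carbs
print(net_carbs(25, fiber_g=5))                      # 20.0
# A label with 8 g of (non-erythritol) sugar alcohols: subtract half of them
print(net_carbs(20, fiber_g=3, sugar_alcohols_g=8))  # 13.0
```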
Should You Use Net Carbs?
People debate about whether counting net carbs provides a more accurate representation compared to total carbs.
Using net carbs will allow for more dietary flexibility because you’re able to eat fiber-rich foods without consuming too many carbohydrates. There are also numerous health benefits associated with fiber consumption; a study performed on individuals who regularly consumed fiber showed improved blood sugar levels and lower cholesterol.8
On the other hand, people on a keto diet may argue against net carbs because the carbohydrate amount you’re consuming may take you out of ketosis. Not all people process fiber the same way—therefore, it’s important to understand what works best for your body.
People trying to avoid carbohydrate intake may tend to eat more sugar-free products, which can lead to other health problems such as weight gain, metabolic disorders, and type-2 diabetes.11
When it comes down to it, counting net carbs is an imperfect science. Counting total carbs can provide a simpler framework for sticking to your diet. However, if you eat lots of fibrous foods such as vegetables, using net carbs may be the better choice for your lifestyle. Or, if you’re on keto and you find yourself constipated, consuming more fiber might be advantageous; you may have avoided those carbs to stick to your macros and, in the process, avoided fiber as well.
No matter if you choose to use net carbs or not, always make the dietary choices that best fit your individual lifestyle and goals. The best diet will always be the one you can stick to long-term.
Originally published on HVMN by Ryan Rodal
1. Wheeler ML, Pi-Sunyer FX. Carbohydrate issues: type and amount. J Am Diet Assoc. 2008;108(4 Suppl 1):S34-9.
2. Kunzmann AT, Coleman HG, Huang WY, Kitahara CM, Cantwell MM, Berndt SI. Dietary fiber intake and risk of colorectal cancer and incident and recurrent adenoma in the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial. Am J Clin Nutr. 2015;102(4):881-90.
3. Threapleton DE, Greenwood DC, Evans CE, et al. Dietary fibre intake and risk of cardiovascular disease: systematic review and meta-analysis. BMJ. 2013;347:f6879.
4. Lattimer JM, Haub MD. Effects of dietary fiber and its components on metabolic health. Nutrients. 2010;2(12):1266-89.
5. Howarth NC, Saltzman E, Roberts SB. Dietary fiber and weight regulation. Nutr Rev. 2001;59(5):129-39.
6. Holscher HD. Dietary fiber and prebiotics and the gastrointestinal microbiota. Gut Microbes. 2017;8(2):172-184.
7. McRae MP. Dietary Fiber Intake and Type 2 Diabetes Mellitus: An Umbrella Review of Meta-analyses. J Chiropr Med. 2018;17(1):44-53.
8. Arrigoni E, Brouns F, Amadò R. Human gut microbiota does not ferment erythritol. Br J Nutr. 2005;94(5):643-6.
9. Hyams JS. Sorbitol intolerance: an unappreciated cause of functional gastrointestinal complaints. Gastroenterology. 1983;84(1):30-3.
10. Mäkinen KK. Gastrointestinal Disturbances Associated with the Consumption of Sugar Alcohols with Special Consideration of Xylitol: Scientific Review and Instructions for Dentists and Other Health-Care Professionals. Int J Dent. 2016;2016:5967907.
11. Harpaz D, Yeo LP, Cecchini F, et al. Measuring Artificial Sweeteners Toxicity Using a Bioluminescent Bacterial Panel. Molecules. 2018;23(10).
What are muscle cramps?
A muscle cramp is an involuntarily and forcibly contracted muscle that does not relax. When we use the muscles that can be controlled voluntarily, such as those of our arms and legs, they alternately contract and relax as we move our limbs. Muscles that support our head, neck, and trunk contract similarly in a synchronized fashion to maintain our posture. A muscle (or even a few fibers of a muscle) that involuntarily (without consciously willing it) contracts is in a "spasm." If the spasm is forceful and sustained, it becomes a cramp. Muscle cramps often cause a visible or palpable hardening of the involved muscle.
Muscle cramps can last from a few seconds to a quarter of an hour or occasionally longer. It is not uncommon for a cramp to recur multiple times until it finally resolves. The cramp may involve a part of a muscle, the entire muscle, or several muscles that usually act together, such as those that flex adjacent fingers. Some cramps involve the simultaneous contraction of muscles that ordinarily move body parts in opposite directions.
Muscle cramps are extremely common. Almost everyone (one estimate is about 95%) experiences a cramp at some time in their life. Muscle cramps are common in adults and become increasingly frequent with aging. However, children also experience cramps in their muscles.
Any of the muscles that are under our voluntary control (skeletal muscles) can cramp. Cramps of the extremities, especially the legs and feet (including nocturnal leg cramps), and most notably the calf (the classic "charley horse"), are very common. Involuntary muscles of the various organs (uterus, blood vessel wall, bowels, bile and urine passages, bronchial tree, etc.) are also subject to cramps, but this article focuses on cramps of skeletal muscle.
What are the types and causes of muscle cramps?
Skeletal muscle cramps can be categorized into four major types. These include
- "True" cramps
- Dystonic cramps
Cramps are categorized according to their different causes and the muscle groups they affect.
True cramps involve part or all of a single muscle or a group of muscles that generally act together, such as the muscles that flex several adjacent fingers or the leg muscles.
- Most authorities agree that true cramps are caused by hyperexcitability of the nerves that stimulate the muscles.
- They are overwhelmingly the most common type of skeletal muscle cramps.
- True cramps can occur in a variety of circumstances, including the following.
- Injury: Persistent muscle spasms may occur as a protective mechanism following an injury, such as a broken bone. In this instance, the spasm tends to minimize movement and stabilize the area of injury. Injury of the muscle alone may cause the muscle to spasm.
- Vigorous activity: True cramps are commonly associated with the vigorous use of muscles and muscle fatigue (in sports or with unaccustomed activities). Such cramps may come during the activity or later, sometimes many hours later. Likewise, muscle fatigue from sitting or lying for an extended period in an awkward position or any repetitive use can cause cramps. Older adults are at risk for cramps when performing vigorous or strenuous physical activities.
- Rest cramps: Cramps at rest are very common, especially in older adults, but may be experienced at any age, including childhood. Rest muscle cramps often occur at night. While not life-threatening, night cramps (commonly known as nocturnal cramps) can be painful and disruptive of sleep, and they can recur frequently (that is, many times a night and/or many nights each week). The actual cause of night cramps is unknown. Sometimes, such cramps are initiated by making a movement that shortens the muscle. An example is pointing the toe down while lying in bed, which shortens the calf muscle of the leg, a common site of muscle cramps.
- Dehydration: Sports and other vigorous activities, including activities of endurance athletes, can cause excessive fluid loss from perspiration. This kind of dehydration increases the likelihood of true cramps. These cramps are more likely to occur in warm weather and can be an early sign of heatstroke. Chronic volume depletion of body fluids from diuretics (medicine that promotes urination) and poor fluid intake both lead to dehydration and may act similarly to predispose to cramps, especially in older people. Sodium depletion has also been associated with cramps. Loss of sodium, the most abundant chemical constituent of body fluids outside the cell, is usually a function of dehydration.
- Body fluid shifts: True cramps also may be experienced in other conditions that feature an unusual distribution of body fluids. An example is cirrhosis of the liver, which leads to the accumulation of fluid in the abdominal cavity (ascites). Similarly, cramps are a relatively frequent complication of the rapid body fluid changes that occur during dialysis for kidney failure.
- Low blood calcium or magnesium: Low blood levels of either calcium or magnesium directly increase the excitability of both the nerve endings and the muscles they stimulate. This may be a predisposing factor for the spontaneous true cramps experienced by many older adults, as well as for those muscle cramps that are commonly noted during pregnancy. Low levels of calcium and magnesium are common in pregnant women unless these minerals are supplemented in the diet. Cramps are seen in any circumstance that decreases the availability of calcium or magnesium in body fluids, such as taking diuretics, hyperventilation (over-breathing), excessive vomiting, inadequate calcium and/or magnesium in the diet, inadequate calcium absorption due to vitamin D deficiency, poor function of the parathyroid glands (tiny glands in the neck that regulate calcium balance), and other conditions.
- Low potassium: Low potassium blood levels occasionally cause muscle cramps, although it is more common for low potassium to be associated with muscle weakness.
In tetany, all of the nerve cells in the body are activated, which then stimulate the muscles. This reaction causes spasms or cramps throughout the body. The name tetany is derived from the effect of the tetanus toxin on the nerves. However, the name is now commonly applied to muscle cramping from other conditions, such as low blood levels of calcium and magnesium. Low calcium and low magnesium, which increase the activity of nerve tissue nonspecifically, also can produce tetanic cramps. Often, such cramps are accompanied by evidence of hyperactivity of other nerve functions in addition to muscle stimulation. For instance, low blood calcium not only causes spasms in the muscles of the hands and wrists, but it can also cause a sensation of numbness and tingling around the mouth and other areas.
Sometimes, tetanic cramps are indistinguishable from true cramps. The accompanying changes of sensation or other nerve functions that occur with tetany may not be apparent because the cramping pain masks or distracts from them.
The final category is dystonic cramps, in which muscles that are not needed for the intended movement are stimulated to contract. Muscles that are affected by this type of cramping include those that ordinarily work in the opposite direction of the intended movement, and/or others that exaggerate the movement. Some dystonic cramps usually affect small groups of muscles (eyelids, jaws, neck, larynx, etc.). The hands and arms may be affected during the performance of repetitive activities such as those associated with handwriting (writer's cramp), typing, playing certain musical instruments, and many others. Each of these repetitive activities may also produce true cramps from muscle fatigue. Dystonic cramps are not as common as true cramps.
A contracture is a condition that may mimic a muscle cramp.
A contracture is a scarring of the soft tissues that muscle movements normally affect. When a contracture is present, the tissue that is involved cannot move completely, whether the corresponding muscle is activated or relaxed. This is because the scarred tissue cannot move in response to muscle movements. This leads to a fixed body part with a loss of full range of motion. The most common type of contracture occurs in the palm and affects the tendons which normally cause the fingers to close with gripping. Most commonly, this form of contracture affects the ring finger. This contracture is known as Dupuytren's contracture of the hand.
Do all muscle cramps fit into the above categories?
No. Not all cramps are readily categorized in the preceding manner, since these categories best apply to cramps that make up an individual's major muscle problem. Many cramps are a relatively minor part of nerve and muscle diseases; other muscle symptoms are usually more prominent in these diseases.
Some examples include:
- Amyotrophic lateral sclerosis (Lou Gehrig's disease) with weakness and muscle wasting
- Radiculopathy (spinal nerve irritation or compression from various causes) with pain, distortion or loss of sensation, and/or weakness
- Diseases of the peripheral nerves, such as diabetic neuropathy, with distorted and diminished sensation and weakness
- Several primarily dystonic muscle diseases
Can medications cause muscle cramps?
Numerous medicines can cause cramps. Potent diuretic medications, such as furosemide (Lasix), or the vigorous removal of body fluids, even with less potent diuretics, can induce cramps by depleting body fluid and sodium. Simultaneously, diuretics often cause the loss of potassium, calcium, and magnesium, which can also cause cramps.
Medications such as donepezil (Aricept, used for Alzheimer's disease) and neostigmine (Prostigmine and others, used for myasthenia gravis), as well as raloxifene (Evista, used to prevent osteoporosis in postmenopausal women), can cause cramps.
- Tolcapone (Tasmar, used for Parkinson's disease) reportedly causes muscle cramps in at least 10% of patients.
- True cramps have been reported with nifedipine (Procardia and others, used for angina, high blood pressure, and other conditions) and the asthma drugs terbutaline (Brethine) and albuterol (Proventil, Ventolin, and others).
- Some medicines used to lower cholesterol, such as lovastatin (Mevacor), can also lead to cramps.
Cramps are sometimes noted in addicted individuals during withdrawal from medications and substances that have sedative effects, including alcohol, barbiturates, and other sedatives, anti-anxiety agents such as benzodiazepines (for example, diazepam [Valium] and alprazolam [Xanax]), narcotics, and other drugs.
Can vitamin deficiencies cause muscle cramps?
Several vitamin deficiency states may directly or indirectly lead to muscle cramps; the precise role of these deficiencies in causing cramps is unknown.
Can poor circulation cause muscle cramps?
Poor circulation to the leg muscles, which results in inadequate oxygen to the muscle tissue, can cause severe pain in the leg muscle (sometimes known as claudication pain or intermittent claudication) that occurs with walking or exercise. This commonly occurs in the calf muscles. While the pain feels virtually identical to that of a severely cramped muscle, the pain does not seem to be a result of the actual muscle cramping. This pain may be due to the accumulation of lactic acid and other chemicals in the muscle tissues. It's important to see your doctor if you have pain like this.
What are the symptoms of common muscle cramps?
Characteristically, a cramp is painful, often severely so. Usually, the sufferer must stop whatever activity is underway and seek relief from the cramp; the person is unable to use the affected muscle while it is cramping.
Severe cramps may be associated with soreness and swelling, which can occasionally persist up to several days after the cramp has subsided. At the time of cramping, the knotted muscle will bulge, feel very firm, and may be tender.
What types of doctors treat muscle cramps?
Because there are so many different causes and types of muscle cramps, many different medical specialists may be involved in their treatment. Most commonly, patients would consult their primary care provider, including specialists in internal medicine or family medicine.
If the cramps are the result of a sudden injury or illness, emergency medicine specialists would treat the patient. Cramps due to specific medical conditions may be treated by different specialists, including:
- Sports-medicine specialists
How are muscle cramps diagnosed?
There are no special tests for cramps. Nevertheless, the diagnosis of muscle cramps is relatively easy. Most people know what cramps are and when they have one. If a doctor, or any other bystander, is present during a cramp, they can feel the tense, firm bulge of the cramped muscle.
What are treatments and home remedies for skeletal muscle cramps?
Most cramps can be stopped if the muscle can be stretched. For many cramps of the feet and legs, this stretching can often be accomplished by standing up and walking around.
- For a calf muscle cramp, the person can stand about 2 to 2.5 feet from a wall (possibly farther for a tall person) and lean into the wall to place the forearms against the wall with the knees and back straight and the heels in contact with the floor. (It is best to learn this maneuver at a time when you don't have the cramp.)
- Another technique involves flexing the ankle by pulling the toes up toward the head while still lying in bed with the leg as straight as possible.
- For writer's cramp (cramping in the hand), pressing the hand on a wall with the fingers facing down will stretch the cramping finger flexor muscles.
Gently massaging the muscle will often help it to relax, as will applying warmth from a heating pad or hot soak. If the cramp is associated with fluid loss, as is often the case with vigorous physical activity, fluid and electrolyte (especially sodium and potassium) replacement is essential. Medicines generally are not needed to treat an ordinary cramp that is active, since most cramps subside spontaneously before enough medicine would be absorbed to even have an effect.
Medical treatment for muscle cramps
Muscle relaxant medications may be used over the short term in certain situations to relax muscle cramps due to an injury or other temporary event. These medications include cyclobenzaprine (Flexeril), orphenadrine (Norflex), and baclofen (Lioresal).
In recent years, injections of therapeutic doses of botulinum toxin (Botox) have been used successfully for some dystonic muscle disorders that are localized to a limited group of muscles. A good response may last several months or more, and the injection may then be repeated.
The treatment of cramps that are associated with specific medical conditions generally focuses on treating the underlying condition. Sometimes, additional medications specifically for cramps are prescribed for certain conditions.
Of course, if cramps are severe, frequent, persistent, respond poorly to simple treatments, or are not associated with an obvious cause, then the patient and the doctor need to consider the possibility that more intensive treatment is indicated or that the cramps are a manifestation of another disease. As described above, the possibilities are extremely varied and include problems with circulation, nerves, metabolism, hormones, medications, and nutrition. It is uncommon for muscle cramps to occur as the result of a medical condition without other obvious signs that the medical condition is present.
Cramps may sometimes be unavoidable, but where possible, it is best to prevent them.
What is the prognosis of recurrent muscle cramps?
Although cramps can be a great nuisance, they are a benign condition. Their importance is limited to the discomfort and inconvenience they cause, or to the diseases associated with them. Careful attention to the preceding recommendations will greatly diminish the problem of cramps for most individuals. Those with persistent or severe muscle cramps should seek medical attention.
Is it possible to prevent muscle cramps during the activity?
During the activity: Authorities recommend stretching before and after exercise or sports, along with an adequate warm-up and cooldown, to prevent cramps that are caused by vigorous physical activity.
- Good hydration before, during, and after the activity is important, especially if the duration exceeds one hour, and
- replacement of lost electrolytes (especially sodium and potassium, which are major components of perspiration) can also be helpful.
- Excessive fatigue, especially in warm weather, should be avoided.
How much should I drink to prevent muscle cramps?
How much should I drink? Hydration guidelines should be individualized for each person. The goal is to prevent excessive weight loss (more than 2% of body weight). You should weigh yourself before and after exercise to see how much fluid you lose through sweat. One liter of water weighs about 2.2 pounds. Depending on the amount of exercise, temperature and humidity, body weight, and other factors, you can lose anywhere from approximately 0.4 to 1.8 liters per hour.
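As a rough illustration of this weigh-in/weigh-out method, here is a minimal Python sketch; the function names are made up for illustration, and the 2.2-pounds-per-liter conversion is the approximation mentioned above.

```python
LB_PER_LITER = 2.2  # one liter of water weighs roughly 2.2 pounds

def fluid_loss_liters(pre_lb, post_lb, drunk_during_l=0.0):
    """Total fluid lost in a session: body-weight change converted to
    liters, plus whatever was drunk during exercise (it replaced losses)."""
    return (pre_lb - post_lb) / LB_PER_LITER + drunk_during_l

def sweat_rate_l_per_hour(pre_lb, post_lb, hours, drunk_during_l=0.0):
    return fluid_loss_liters(pre_lb, post_lb, drunk_during_l) / hours

# A 180 lb runner finishing a 2-hour run at 177 lb, having drunk 0.5 L:
print(round(sweat_rate_l_per_hour(180, 177, 2, 0.5), 2))  # 0.93 L/hour
```

A number like this can then guide how much to drink per hour under similar conditions.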
Pre-exercise hydration (if needed):
- 0.5 liters per hour for a 180-pound person several hours (three to four hours) prior to exercise.
- Consuming beverages with sodium and/or small amounts of salted snacks or sodium-containing foods at meals will help to stimulate thirst and retain the consumed fluids.
- Suggested starting points for marathon runners are 0.4 to 0.8 liters per hour, but again, this should be individualized based on body weight loss.
- There should be no more than 10% carbohydrate in the beverage, and 7% has generally been considered close to optimal. Carbohydrate consumption is generally recommended only after one hour of exertion.
- Electrolyte repletion (sodium and potassium) can help sustain electrolyte balance during exercise, particularly when
- there is inadequate access to meals or meals are not eaten,
- physical activity exceeds four hours in duration, or
- during the initial days of hot weather.
Under these conditions, adding modest amounts of salt (0.3 g/L to 0.7 g/L) can offset the salt loss in sweat and minimize medical events associated with electrolyte imbalances (for example, muscle cramps, hyponatremia).
Post-exercise rehydration:
- Drink approximately 0.5 liters of water for every pound of body weight lost.
- Consuming beverages and snacks with sodium will help expedite rapid and complete recovery by stimulating thirst and fluid retention.
Is it possible to prevent muscle cramps during pregnancy?
Supplemental calcium and magnesium have each been shown to help prevent cramps associated with pregnancy. An adequate intake of both of these minerals during pregnancy is important for this and other reasons, but supervision by a qualified healthcare professional is essential.
Is it possible to prevent dystonic cramps?
Cramps that are induced by repetitive non-vigorous activities can sometimes be prevented or minimized by careful attention to ergonomic factors such as wrist supports, avoiding high heels, adjusting chair position, activity breaks, and using comfortable positions and equipment while performing the activity. Learning to avoid excessive tension while executing problem activities can help. However, cramps can remain very troublesome for activities that are difficult to modify, such as playing a musical instrument.
Is it possible to prevent rest cramps?
Nocturnal cramps and other rest cramps can often be prevented by regular stretching exercises, particularly if done before going to bed. Even the simple calf-stretching maneuver (described in the first paragraph of the section on treatment), if held for 10 to 15 seconds and repeated two or three times just before going to bed, can be a great help in preventing nocturnal leg cramps. The maneuver can be repeated each time you get up to go to the bathroom during the night and also once or twice during the day. If nocturnal leg cramps are severe and recurrent, a footboard can be used to simulate walking even while recumbent and may prevent awkward positioning of the feet during sleep. Ask your doctor about this remedy.
Another important aspect of the prevention of night cramps is adequate calcium and magnesium. Blood levels may not be sensitive enough to accurately reflect what is happening at the tissue surfaces where the hyperexcitability of the nerve occurs. Calcium intake of at least 1 gram daily is reasonable, and 1.5 grams may be appropriate, particularly for women with or at risk for osteoporosis. An extra dose of calcium at bedtime may help prevent cramps.
Supplemental magnesium may be very beneficial for some, particularly if the person has a magnesium deficiency. However, added magnesium can be very hazardous for people who have difficulty eliminating magnesium, as happens with kidney insufficiency. The vigorous use of diuretics usually increases magnesium loss, and high levels of calcium intake (and therefore calcium excretion) tend to increase magnesium excretion. Magnesium is present in many foods (greens, grains, meat and fish, bananas, apricots, nuts, and soybeans) and some laxatives and antacids, but a supplemental dose of 50-100 milligrams of magnesium daily may be appropriate. Splitting the dose and taking a portion several times during the day minimizes the tendency to diarrhea that magnesium can cause.
Vitamin E has also been said to help minimize cramp occurrence. Scientific studies documenting this effect are lacking, but anecdotal reports are common. Since vitamin E is thought to have other beneficial health effects and is not toxic in usual doses, taking 400 units of vitamin E daily is reasonable, recognizing that documentation of its effect on cramps is lacking.
How can older adults prevent muscle cramps?
Older adults should have periodic magnesium blood levels taken if they use supplemental magnesium. Even a mild and otherwise not apparent degree of kidney dysfunction, which is often seen in this age group, may lead to toxic levels of magnesium with modest doses.
Recent studies have indicated that vitamin D (a vitamin required for the normal absorption of calcium from food) deficiency is common in some elderly individuals. Consequently, vitamin D replacement is important for these people, taking appropriate care to avoid excessive vitamin D levels, as these are toxic. An intake of at least 400 units daily has been recommended in the past; more recently, experts have questioned whether this dose of vitamin D is sufficient, especially for people with little or no sun exposure (sunlight promotes the formation of vitamin D in the body). However, excessive doses of vitamin D are known to be toxic. The upper limit of dosing for vitamin D supplementation has been recommended as 2,000 IU daily. Your healthcare professional can help you decide how much vitamin D you should take, taking your situation and medical history into account.
While the more potent diuretics are associated with an increased loss of calcium and magnesium, hydrochlorothiazide (HydroDIURIL and others) and related diuretics are associated with calcium and magnesium retention. Diuretics are commonly used for the treatment of hypertension and heart failure. If cramps (or osteoporosis) are also a problem, the patient and doctor may consider using hydrochlorothiazide or another thiazide type of diuretic if otherwise feasible and appropriate.
Diuretics also cause sodium depletion and most also cause potassium depletion. Many patients who use diuretics are also on sodium-restricted diets. Careful attention to the effects of diuretics on sodium and potassium, and replacement of these elements as needed, is always appropriate, even more so if cramps are a problem.
Older adults often do not hydrate themselves adequately, partly because the sense of thirst diminishes with age. This situation is exaggerated in those who are treated with diuretics. For some, simply increasing fluid intake to the generally recommended six to eight glasses a day will improve the cramps. However, drinks with caffeine should not be counted since they act on the kidneys to increase fluid loss. Individuals who are on restricted fluid intake should consult their doctor on this issue and must not ignore their recommended fluid intake limits.
As for night cramps, the exact cause is often difficult to determine. The best prevention involves stretching regularly, adequate fluid intake, appropriate calcium and vitamin D intake, and supplemental vitamin E.
Are there medications to prevent muscle cramps?
In recent times, the only medication that has been widely used to prevent, and sometimes also to treat, cramps is quinine. Quinine has been used for years in the treatment of malaria. Quinine acts by decreasing the excitability of the muscles. It has also been shown to be effective in many, but not all, scientific studies.
However, quinine also causes birth defects and miscarriages as well as serious side effects. It has also occasionally caused hypersensitivity reactions and a deficiency of platelets, which are the blood components responsible for clotting. Either of these reactions can be fatal.
Consequently, quinine tablets are not available in the United States. Quinine is available in grocery stores in tonic water. The U.S. FDA does not recommend or endorse the use of quinine to treat or prevent muscle cramps. Nevertheless, quinine is sometimes recommended as quinine water (tonic water) before bedtime to prevent night muscle cramps. Always consult your healthcare professional before taking quinine for cramps.
"Muscle Spasms, Cramps, and Charley Horse." WebMD.com. Mar. 31, 2017. <https://www.webmd.com/pain-management/muscle-spasms-cramps-charley-horse>.
United States. Food and Drug Administration. "FDA Drug Safety Communication: New Risk Management Plan and Patient Information Guide for Qualaquin (Quinine Sulfate)." July 8, 2010.
Question
In an induction motor, under zero load conditions, the slip between the rotor and the stator magnetic field is approximately _______.
Answer (Detailed Solution Below)
Detailed Solution
Principle of working of an Induction motor:
An induction motor always rotates with a speed slightly less than the synchronous speed.
The synchronous speed is the speed of rotation of the magnetic field in a rotary machine, and it depends upon the frequency and number of poles of the machine.
The rotating magnetic field produced in the stator induces flux, and hence current, in the rotor, causing the rotor to rotate. Because the rotor flux lags behind the stator flux, the rotor never reaches the speed of the rotating magnetic field (i.e., the synchronous speed).
The slip of an induction motor is given by:

s = (Ns - Nr) / Ns × 100%

where Ns is the synchronous speed and Nr is the rotor speed.
Under zero load conditions, the speed of the motor is almost equal to synchronous speed.
Hence, the slip between the rotor and the stator magnetic field is almost 1%.
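For a quick numerical check, here is a minimal Python sketch of the slip calculation; the 50 Hz, 4-pole machine and the 1485 rpm no-load rotor speed are illustrative values, not taken from the original question.

```python
def synchronous_speed_rpm(frequency_hz, poles):
    """Ns = 120 * f / P: speed of the stator's rotating magnetic field."""
    return 120.0 * frequency_hz / poles

def slip_percent(sync_rpm, rotor_rpm):
    """s = (Ns - Nr) / Ns * 100."""
    return (sync_rpm - rotor_rpm) / sync_rpm * 100.0

# A 4-pole, 50 Hz machine has Ns = 1500 rpm. At no load the rotor runs
# very close to that, e.g. 1485 rpm, giving a slip of about 1%.
ns = synchronous_speed_rpm(50, 4)
print(ns, slip_percent(ns, 1485))  # 1500.0 1.0
```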
25 Reading Strategies That Work In Every Content Area
Reading is reading. By understanding that letters make sounds, we can blend those sounds together to make whole words that symbolize meaning we can all exchange with one another. Without getting too Platonic about it all, reading doesn't change simply because you're reading a text from another content area. Only sometimes it does: science content, for example, can be full of jargon, research citations, and odd text features.

Free Reading Worksheets
Ereading Worksheets offers free reading worksheets, including fictional passages. These worksheets are skill focused and aligned to Common Core State Standards, and you are free to save, edit, and print them for personal or classroom use. Many of the assignments can now be completed online.

Community Club
A leveled reading series. For example, "Firefighter" (Level A) asks: what happens when the fire alarm rings?

In a Heartbeat
This lesson plan is designed around a short film titled In a Heartbeat and the theme of love. Students learn and practice expressions using the word "heart", watch a short film trailer, predict and write a story, watch and discuss a short film, and watch and discuss a video in which elderly people give their reactions to the short film. Language level: Intermediate (B1) – Upper Intermediate (B2). Learner type: teens and adults. Time: 90 minutes.

Giving Your Opinion
A dialogue-based speaking activity ("Jack: Oh! Hi Gemma. How's it going? Gemma: Oh. Hi Jack.").

How to Improve Your English Pronunciation to Talk Like a Native
"What?" "Can you say that again?" How many times do you hear this when you're speaking? Even if your vocabulary and English grammar are perfect, it can still be difficult for people to understand you because of your pronunciation. Learning to pronounce English words correctly can be one of the hardest parts of learning English.

STAAR Reading Test Passages
Free printable STAAR reading passages (PDF) for grades 2 through 8, covering critical-thinking skills such as inference, main idea, author's purpose, sequencing, summary, and character traits in fiction and non-fiction. The passages are aligned to both the Texas STAAR assessments and the Common Core ELA standards, which share 70-90% of the same performance goals and reading objectives and use the same Tier 1, 2, and 3 academic reading vocabulary.

Reading Comprehension Worksheets and Printables
abcteach features over 1,000 multi-page reading comprehension activities, including biographies, history lessons, and introductions to important concepts in social studies, science, holidays, and more. Fictional stories are also available, providing students with fun and imaginative scenarios to explore; these serve as great backdrops for questions about problem solving, emotions, moral and ethical dilemmas, and vocabulary interpretation. Use the subcategories to find reading activities written for your students' comprehension level.

English 6 – Patricia Diaz
This is basically the level of the national exams in English 6 and what they look like.

English C1 – Oral Skills
Useful expressions for talking about teaching and the American education system, with extra material on the Programme for International Student Assessment (PISA) and Finland.
We often forget how important the overall structure of a sentence is to its flow, meaning, and tone. We also take common grammatical practices for granted when we use parallel structure, because we typically apply it with ease and without much intentional thought. However, when parallel structure goes wrong in writing, it goes really wrong, and we typically never even notice without the help of a reliable editor or proofreader.
What Is Parallel Structure?
Parallel structure in writing is also called “parallelism.” Here’s a definition of “parallel structure” provided by Purdue Online Writing Lab:
Parallel structure means using the same pattern of words to show that two or more ideas have the same level of importance. This can happen at the word, phrase, or clause level. The usual way to join parallel structures is with the use of coordinating conjunctions such as “and” and “or.”
Overall, parallel structure guarantees uniformity and consistency throughout a piece of writing, to ensure its clarity and accuracy. And by making each compared item or idea in a phrase or clause follow the same grammatical pattern, you create a parallel construction. |
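For instance, "She likes hiking, to swim, and biking" breaks the pattern, whereas "She likes hiking, swimming, and biking" keeps all three items in the same grammatical form.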
- Scientists have described the newest crocodilian species known to science, the Hall’s New Guinea crocodile, previously considered a population of the already known New Guinea crocodile.
- The discovery was nearly 40 years in the making, sparked by the late herpetologist Philip Hall, who, in the 1980s, began questioning the differences between the southern and northern populations of crocodiles on the island of New Guinea.
- To describe the new species, named in honor of Hall, scientists studied and compared New Guinea crocodile skulls held at museums across the U.S.
- They also found some members of this new species hiding in plain sight: at an alligator farm in Florida that’s famous for having specimens of all known crocodilians.
The St. Augustine Alligator Farm Zoological Park in Florida prides itself on having every species of alligator and crocodile in the world. By 2019, scientists had recognized 26 crocodilian species worldwide, with the alligator farm housing live specimens of all 26. Yet, little did they know that a 27th species lurked in plain sight.
In September last year, scientists announced the discovery of that 27th crocodilian, a species native to New Guinea, the world’s second-largest island, and named it Hall’s New Guinea crocodile (Crocodylus halli). For nearly a century, the species was thought to be a population of the New Guinea crocodile (Crocodylus novaeguineae). Published in Copeia, the discovery was nearly 40 years in the making.
Philip Hall, a late scientist at the University of Florida, first speculated in the 1980s that the New Guinea crocodile might actually be two species. In particular, Hall noted that the New Guinea crocodiles on the northern half of the island mated and nested differently from the crocodiles on the southern half. New Guinea is an island sprawling with unique landscapes and unmatched biodiversity, such as birds-of-paradise and tree kangaroos. The country of Indonesia claims the western half of the island, while Papua New Guinea claims the east. The New Guinea highlands divide the island laterally, creating distinct ecosystems in the northern and southern halves of the island.
When Hall passed away, there was still no official conclusion on the existence of a second species.
In 2014, Chris Murray, an assistant professor at Southeastern Louisiana University, and Caleb McMahan, a scientist with the Field Museum in Chicago, teamed up to tackle Hall’s unfinished work.
While Hall’s research focused on the differences between the northern and southern crocodiles’ behavior, Murray and McMahan took a different approach. They studied 51 New Guinea crocodile skulls kept at museums around the U.S. The researchers intensively analyzed the structural differences between crocodiles from the northern and southern halves of the island.
“We used a tool called geometric morphometrics to look at structural differences of the skulls between the two species,” Murray said. “The visible differences are that Novaeguinea [the New Guinea Crocodile] has a more narrow and longer skull, while Halli [Hall’s New Guinea crocodile] has a shorter and wider skull.”
In describing the new species, Crocodylus halli, Murray and McMahan felt it was best to name the crocodile after the scientist whose work they sought to finish.
After publishing their groundbreaking discovery, Murray and McMahan wanted to see the reptile in real life. However, the species' native island of New Guinea was a little out of the way for them, located more than 13,000 kilometers (8,000 miles) from the U.S. and around 160 km (100 mi) north of the Australian mainland.
The scientists decided on a closer, cheaper, and more convenient location that might just have the species without even knowing it.
That location was none other than the St. Augustine Alligator Farm Zoological Park in Florida. “The Alligator Farm is known for having every crocodile species in the world,” Murray said. “So, if anyone was going to have them, it was going to be them.”
His hunch was right. When he and McMahan made the trip to St. Augustine, they were pleased to see the newly described crocodile species in the flesh.
“It was really exciting for Caleb and I to see that some of the crocodiles were obviously Halli, as we could clearly see the differences in the skulls that they had,” Murray said.
Of course, it’s much easier for someone to identify the minute differences between crocodile skulls if they’ve been studying them for years. John Brueggen, the manager of the St. Augustine Alligator Farm, noted how Hall’s New Guinea crocodile “is not drastically different looking than the animals on the other side of the mountain range in New Guinea, which is why it has taken so long to distinguish it from the other.”
The process to discover Hall’s New Guinea crocodile may have taken nearly four decades, but the work is not over yet. This discovery raises important questions about the actual conservation statuses of the New Guinea crocodile and Hall’s New Guinea crocodile, especially since the IUCN Red List previously listed the New Guinea crocodile as being of least concern.
“We now have two species instead of one,” McMahan said. “If habitats or stressors are different in northern and southern portions of the island, these species could be impacted in different ways.”
In general, wildlife in New Guinea suffer from a variety of human-caused threats, including the logging and mining industries, the destruction of habitats for agricultural plantations (such as for palm oil), and the introduction of non-native wildlife.
Since the publication of Murray and McMahan’s paper last year, the IUCN Crocodile Specialist Group has been reassessing the two New Guinea-based crocodile species and their unique habitats more closely.
“Understanding differences can tell us more than just how to tell the two species apart,” McMahan said. “It gives us important information that can lead to new questions and hypotheses about the evolution and ecology of these crocodiles.”
Murray, C. M., Russo, P., Zorrilla, A., & McMahan, C. D. (2019). Divergent morphology among populations of the New Guinea crocodile, Crocodylus novaeguineae (Schmidt, 1928): Diagnosis of an independent lineage and description of a new species. Copeia, 107(3), 517-523. doi:10.1643/CG-19-240 |
Rising from the ashes of a broken, war-torn world, the United Nations represented a reach towards a new direction in international politics: the preservation of peace, security and human rights.
New Year’s Day in 1942 was not your typical start to a new year. Twenty-six states, among them the USA, Britain, China, and the USSR, pledged their commitment to fight the Axis powers (Germany, Italy, and Japan). They did so by signing the Declaration by United Nations. The first clause of the Declaration states that it enshrines the principles and purposes set out in the Atlantic Charter (1941), a joint declaration by the USA and Great Britain “of certain common principles in the national policies of their respective countries on which they based their hopes for a better future for the world.”
The United Nations Charter, drawn up by 51 states in San Francisco, came into force on 24 October 1945, and the United Nations officially came into existence. The Charter expands on the principles and purposes of the Atlantic Charter. In 1945, the world was in disarray following the end of the Second World War; indeed, the preamble of the Charter explicitly notes the need to “save succeeding generations from the scourge of war, which twice in our lifetime has brought untold sorrow to mankind”. The basis of preserving peace and security was similar to that of the UN’s forerunner, the League of Nations, founded in 1919 following the end of the First World War. The League of Nations dissolved after it failed to prevent the Second World War; in contrast, the UN has had significantly greater success and longevity.
[Interactive quiz slides on weather and wind; only answer fragments survive. Recoverable points: the land heats more quickly than the atmosphere or the sea; warm air rises and cold air sinks; sea breezes occur in summer; winds are driven by differences in the heating of Earth in different places; winds blow from areas of high pressure to areas of low pressure; and global winds blow steadily in predictable directions over long distances.]
The concept behind the urban farm
Population growth, concentration in urban environments, the effects of climate change on crops, food availability… All these pressures are bringing agricultural stakeholders together to find viable solutions to the planet’s food needs. Food production itself does not seem to be the primary problem; rather, it is access to so-called “local” food that causes headaches for both urban and remote populations.
How can we feed these populations with fresh produce without spending large sums on production and, especially, on transport? And transport inevitably means environmental impacts. The degradation of ecosystems will put more pressure on farmers, hence the need to find alternative solutions.
An increasingly widespread phenomenon, recently seen in Asia, North America and Europe, is the vertical farm in urban areas. These ultramodern facilities, often located in the middle of the city, are now considered the precursors of an agricultural revolution. Most real estate projects in urban areas now include a green space, while large shopping centers are developing their own vegetable gardens. The actors involved have understood: the modern city can no longer be imagined without urban agriculture, sustainable food, and mitigation of and adaptation to climate change.
The development of urban agriculture is accompanied by that of above-ground production techniques: hydroponics, aeroponics and aquaponics. Because these techniques allow a controlled environment for an above-ground production system, vacant buildings can be converted into facilities holding hundreds, if not thousands, of growing units, whether for cuttings or for mature plants. Requiring 90% less water than a field crop, a fraction of the fertilizer, and no pesticides, the urban farm is an alternative that provides a ready-to-eat product easily accessible to the population. |
Tested tips for learning to remember new words and phrases in any new language.
You must have wondered how some people learn foreign languages so quickly. It is very nice and useful to learn a new language as it opens many new possibilities. However, many people have difficulties in learning new words, especially in a foreign language.
Remembering these words is even more difficult. Acquired vocabulary is practically useless unless the words learned can be recalled and used. Here are some suggestions for learning new foreign language words and retaining them. These suggestions work for any language.
You have to learn to recall words, phrases and structure as well as ascribe meaning before you can become skilful at reproducing language like other users. To do this you need to provide a label, function, association, similarity, difference and multiple meaning for vocabulary words.
Research into how people learn languages has repeatedly shown the following to be true:
- You’ll remember something you’ve discovered on your own!
- Now, how are you going to make that happen?
- You can use four basic techniques to increase your vocabulary retention.
Discover Significance of New Words
As you see or hear a new word or phrase, imagine it as a challenge, a mystery waiting to be solved. So before running to the dictionary to check the meaning of the exact phrase in your mother tongue:
- Try to discover its meaning from the context – e.g. Underhand. The words Under and Hand suggest something to you. In this way you can work out the meaning of the whole term. But remember to check.
- Try to guess its meaning from the structure of the word – it may contain familiar elements – e.g. take the word transnational. If you find out the meaning of the parts national and trans, they could give you some idea of what the word could mean.
- Try to discover the origin of the word – knowing how the word came into usage can be very helpful for remembering it. A little warning here: as soon as you have a theory as to what some particular word, phrase or expression might mean, check with a dictionary or a teacher to make sure you get the right meaning. Phrasal verbs in English are very treacherous and could lead you down a totally wrong track, so always double-check them.
Make Associations: Relate New Information to Material Already Learned
When you come across a new word while reading, listening or working, don’t rush straight for the dictionary. If you look up a word in the dictionary and even understand it correctly, the information goes to the short-term memory area in your brain. This area is exactly what the name says: short term. If you try to recall that word after a few days, there is a high probability that you cannot remember it. The aim in learning vocabulary is to connect each new word or phrase to your long-term memory. This is best achieved through tiny hooks called associations.
Take the following example. Many people have problems remembering the difference between “borrow” and “lend”. Make a simple association – if you lend money to someone, it is the “end” of your money, as you’ll never see it again. Thus “lend” leads to “end”. If you have created a strong and unique association special for you, there is a very good chance that you will never forget this new word or phrase.
Make Word Lists
Many people have used this technique successfully to increase vocabulary retention and learn new words and phrases.
Compile lists of new words as follows:
- Divide a page into two columns and write the foreign language word on one side with a corresponding word in your mother tongue on the other column.
- Start a new sheet for each topic area e.g. One sheet for vocabulary related to sports, another for economy etc.
- Use colour (highlighter pens) in your lists: e.g. all verbs in red, nouns in blue, adjectives in green, etc., so that when you think of the word later you will remember its colour, and this will help you use it correctly. Don’t make a mile-long list, but a fairly short one. Then go to the next activity.
Word List Activity
Cover up one column on your list and work your way down testing yourself, first from the foreign language to the definition or the equivalent in your mother tongue and then reverse the process. You can have fun by working with a friend to test each other.
Take words from your list and write 3 different sentences in the target language using each word to illustrate its meaning. Make them humorous or even outright silly if possible. Then read them aloud to get the feel.
Remember a very important rule:
You’ll remember words better from the context you use them in!
If you have created a memory association for a particular word, your chances of remembering it through that memory are much better.
After using these techniques, you will start noticing improvement in your vocabulary retention in a matter of weeks. I learnt one of the most challenging languages, Finnish, and a very delightful language, Italian, in a few months using the above methods. |
We’re pretty lucky to call Earth our home, what with its vast oceans, lush forests, and majestic mountains. It’s important to teach our children the importance of preserving our earth not only for us but for the next generation. In my opinion, it’s never too late to instill these values in our little ones. Earth Day is a great opportunity for parents, teachers, and children to learn about important challenges facing the planet today.
There are countless activities for kids that are fun and easy to do at home. Most require just a few household items, and none of the ones I’ve listed requires a fancy lesson plan or a complicated science experiment. These are a few fun activities and crafts that I’ve come across that are both a great way to learn about our earth and a way to teach our kids everyday skills.
Table of Contents
- 5 Earth Day Activities for Preschoolers
- 5 Earth Day Activities for Kindergarten
- 5 Earth Day Activities for School-Aged Kids
- Most Important of All, Make It Fun!
5 Earth Day Activities for Preschoolers
Preschool isn’t too early to teach kids about Earth Day. Rather, it’s important to start early with introducing your little ones to the idea of keeping our earth clean. At this age, children are still developing their fine motor skills and exploring the world through sensation. Here are a few ideas your little ones will love for their Earth Day activities, from sensory bins to sorting practice.
#1. Grow Your Own Homemade Seed Paper
This activity is not only a fun and interesting process for children, but it also teaches them about recycling. You’ll be able to celebrate Earth Day by teaching your kids about the importance of recycling and how to grow seeds, while also giving them a chance to improve some fine motor skills when they rip up or cut the scraps of newspaper for their “seed paper.”
Items you’ll need for this project include seeds, newspaper, water, and a blender. Homemade seed paper gets kids interested in growing plants or flowers from just a few tiny seeds, while using recycled newspaper that would’ve otherwise been thrown out. It’s great for a fun-filled kid’s Earth Day, but also a wonderful idea to do with your students or kids on any sunny spring day.
#2. Teach Them About Ocean Pollution With Sensory Bins
Pollution can seem like such a daunting topic to tackle, even for adults. So how do we teach our kids about protecting the earth from pollution at such a young age? When it comes to teaching our children about the devastating effects of oil, plastic, and garbage pollution in our oceans, sensory bins are a great way to get our kids hands-on and learning. As a mom, this is one I’ve been able to enjoy with my own daughter – she’s still too young for many types of sensory bins, but this water-themed sensory bin has been a hit!
You’ll need a few items, including a large plastic bin or water sensory table, animal and boat toys, cocoa powder, vegetable oil, coffee grounds, random scraps of make-believe trash, and a small scrub brush — all things that, luckily, can be found right in our homes. Essentially, you start with a large bin of clean water and slowly add the oil or coffee grounds, thereby “polluting” the water. This Earth Day activity shows children the effects of pollution on the ocean and its animals, and it is a wonderful way to teach our children the importance of keeping our earth and ocean clean.
#3. Count All The Trees, Birds, and More
This next idea is an easier way to celebrate Earth Day with preschoolers and needs no extra materials to take part in. It just involves going to your backyard, neighborhood park, or wherever is close by and allowing your kids to soak in their surroundings. One benefit of this activity is that you don’t need anything extra to do it (always a plus!), and it also gets preschoolers to take notice of nature in their own surroundings.
Ask them to count the number of trees they can see. Have them practice making a list of different types of animals. Ask them what colors in nature they can identify. Have them show you all the different types of birds they can find. Count the squirrels they see. Identify different shapes of clouds. The possibilities are truly endless!
#4. Clean Up the Neighborhood or Park
This Earth Day activity is one that I remember doing with my mom as a child. Taking your child out to the park, neighborhood, or schoolyard is a fun, interactive way to get your child excited about Earth Day, and it also teaches them about the harm of littering and the importance of maintaining a clean environment.
All you need to do is take your kids outside, bring a bag and pick-up stick or another garbage-pick-up device (you could also just wear gloves to keep your hands clean), and walk around your area to pick up any trash you see. Your kids will probably comment on how much trash they’re finding. This is an important opportunity to start a conversation with them about littering!
#5. Make a Sensory Bin Recycling and Sorting Game
Here’s another Earth Day activity that’s great for teaching kids the importance of recycling and its impact on our environment. These recycling sensory bins are very simple to make: just fill a large bin with household trash or recyclables (you don’t necessarily have to use real trash; you could use sponges, old plastic bottles, empty jam jars, empty yogurt cups, etc.).
The benefit of this activity is that it shows kids how important it is to properly recycle and sort through different types of recycling. After your kids are done sorting all the recyclables into organized piles, a trip to the local recycling center, or even down to your own recycling bin, can show kids where these objects end up. Take this opportunity to describe what recycling is with kids, and how it can be re-used in other ways to minimize waste.
5 Earth Day Activities for Kindergarten
Many of these Earth Day activities for kids in kindergarten are easy to do at home or in the classroom. At this age, kids love coloring with crayons, learning from flashcards, and practicing many fine motor skills. The following Earth Day activities are just a few easy ways to celebrate Earth Day with your kindergartener.
#1. Melt Your Own Homemade Crayons
Children love crayons and most children love science. So, making DIY Crayons with kids is a sure win. Not only do kids love homemade crayons and all the fun shapes they could be, but they also enjoy the process of tearing off the crayon’s wrapper and especially enjoy the process of breaking the crayons.
Of the other Earth Day activities listed, this is one of the more practical ideas. Most kids have a bunch of old, broken, or too-short crayons lying around. This is a great chance to take these old, discarded crayons and melt them down into a brand new, multi-colored crayon they will get new use out of, celebrating Earth Day and recycling at the same time.
#2. Design a Recycling Sorter Game
This next Earth Day idea is great to hone some of those important skills all kids need. All you need are some cups or small containers, a few random small objects that are of different colors, and maybe a pair of tweezers, although this last item is not strictly necessary. The name says it all – your child will have a blast “recycling” and sorting different objects.
This would be a great activity to practice counting, sorting by color, shape, or size, and also allows kids to practice those fine motor skills if they’re using the tweezers to sort the objects.
#3. Print Your Own Earth Day Flash Cards
These Earth Day-themed printable flash cards are a fun way to get your kids to learn about Earth Day. These cards will get them practicing counting, tracing, labeling, making a list, and more. All you need to do is print them out and cut them to size.
What you’ll find in this mini-pack (linked above) are numerous educational Earth Day-themed activities. They’ll get to practice counting by counting the number of items at the top of a card and circling the correct number below. They’ll practice letter recognition by searching for the letters E and e in a grid and circling them. There are many more fun lessons in this Earth Day flash cards pack for your child to take part in. The best part is that this mini-pack is free!
#4. Sing Earth Day Songs
Earth Day songs are hard to find, so here are a few songs to celebrate the day. Get creative and combine them with other Earth Day activities, or get kids up to move and dance.
#5. Create a Recycled Planter on a String
While it’s important to teach our kids the importance of using sustainable products such as reusable water bottles, plastic bottles sometimes just show up around the house. On Earth Day you can teach kids that there are many ways to re-use these bottles, and using them as planters is a great way to reduce the amount of waste in trash bins; it’s also a way to have kids grow something of their own from seeds. These planters are small and portable, and they can be hung anywhere there’s sunlight. Your kids will enjoy checking on their homegrown plants day after day.
5 Earth Day Activities for School-Aged Kids
When kids reach school-age, they’re aware of much more than we realize. They love new and interesting ways to get hands-on, whether that involves building something from scratch, taking something apart, or making something special for a loved one. I found most of these ideas on Tinker Lab, and there are many more out there.
#1. Make a Peanut Butter and Pine Cone Bird Feeder
This was one of my favorite things to do (not only on Earth Day!) as a child. All you need is a pine cone that you find in your backyard, the park, etc., peanut butter, and some seeds. Kids will have a great time searching for the perfect bunch of pine cones for their bird feeders. The rest is self-explanatory: slather some smooth peanut butter all over the pine cone and sprinkle your birdseed in a flat, uniform layer on a large baking tray. Then, have your kids dip and roll the pine cone in the birdseed. This part can get messy, but kids absolutely love this! Once you hang up your pine cones, your kids will have a great time watching for the different types of birds that visit your homemade bird feeders.
#2. Design a Nature Diorama
This Earth Day activity is a classic annual school project that kids adore: they can dream up an imaginary miniature world filled with tiny characters and décor, all contained in a little box.
All you’ll need is an old shoebox or tissue box (once again, showing how many ways you can re-use what was once thought of as trash!) and perhaps some glue. The rest is up to your kids. Have them go to the park, backyard, or wherever you want and scavenge for things in nature they think would be perfect in their own diorama. These dioramas let your kids’ imaginations run wild, and you’ll love seeing the different creations they come up with.
#3. Plant a Garden
Here’s another fond memory from my childhood that I hope to recreate with my daughter. My mom was always an avid gardener, and I’ve inherited her green thumb (or at least I like to think I have!). Carve out a little patch of dirt in your backyard or front yard and let your kids pick out what they would like to plant. It could be vegetable or fruit seeds, shrubs, flowers, trees; the list goes on. They’ll love getting down on the ground and digging in the dirt with their own shovels. Teach them how to properly sow seeds and instill in them the discipline for watering and nurturing their own plants each day. If you don’t have your own backyard or front yard, there are many windowsill herbs you can grow!
#4. Upcycled Tin Can Drums
These upcycled tin can drums are another great way to show kids how they can get creative and reuse what they previously thought was trash. In this Earth Day activity, take old coffee tins or hot cocoa tins and fill them with anything you want. It could be buttons, marbles, paper clips, etc. Anything that will make a noise when placed inside the tin can! This is a great sensory game as well. Kids can learn that different objects make different noises when they bang against the tin can walls.
You could also incorporate these upcycled tin can drums into music time, and use them as shakers or drums to get them excited about dancing during music time. It’s a great way to get kids up and moving and shaking!
#5. Nature Journals
This is an Earth Day activity that I also did as a child. I loved journaling during my middle school years, and the idea of a nature journal is a great way to get your kids interested in both journaling and nature at the same time. Get them their very own notebook or journal in which they can write down any idea, observation, or wish they want.
Since it’s a nature journal, have them go out daily and find something that inspires them — flowers, grass, different shaped leaves — and press these findings within the pages of the journal. Once that’s done, they can then write down poems, thoughts, or feelings they have about their day or these things they found outdoors. It’s a simple idea that really benefits kids’ creative writing skills!
Most Important of All, Make It Fun!
These are just a small handful of fun Earth Day activities I plan on doing with my daughter this year, and a few saved for when she is older. The best thing about these activities is, not only are kids exploring new creative outlets, but they are also learning about the earth and how important it is to maintain our environment. So get creative, have fun, make a mess, and get outdoors with these Earth Day activities for kids. When kids are having fun, they’re more likely to engage, interact, and make happy memories that last a lifetime. I still remember many of the Earth Day activities I did with my family or in school, and I hope that these activities will leave a lasting impression on your little ones as well! |
Face masks that cover the nose and mouth are required in public settings throughout California because they are very effective at curbing the spread of the virus that causes COVID-19.
But there are some people who cannot tolerate or are unable to wear masks.
Children under age 2 should not wear masks, and even slightly older kids may have difficulty wearing them appropriately, according to the Centers for Disease Control and Prevention.
People with breathing difficulties also need to be cautious about face coverings, says Shruti Gohil, MD, MPH, a UCI Health infectious disease expert and associate medical director of epidemiology and infection prevention.
“Some people with certain respiratory illnesses may become short of breath or lightheaded when wearing masks,” Gohil explains.
“If this happens to you, you should discuss your options with your doctor because you may be at higher risk for complications if you get COVID-19. Finding ways to prevent infection is especially important for people with respiratory conditions.”
No matter how well you tolerate wearing a mask, it’s best to do everything possible to lower your risk. And that means covering your nose and mouth, washing your hands frequently and keeping at least six feet of distance from others.
“Even if some of these people cannot tolerate a tight-fitting mask, they might well tolerate some kind of face covering, like a scarf,” she says. “While that’s not as good as a mask, your goal is to shave off as much risk of exposure to the virus as you can. So find something that works best for you.”
Additional CDC guidance
The CDC notes that masks may be inappropriate in certain other circumstances, such as:
- During high-intensity activities like running where a mask may hinder breathing.
- In work situations where masks may contribute to heat-related illness or get caught in machinery.
- While swimming, when a wet face covering makes breathing difficult.
- For anyone who is unconscious, incapacitated or unable to remove a cloth face covering without assistance.
- For people with intellectual or developmental disabilities, mental health conditions or other sensitivities who may find wearing masks challenging.
Extra caution required
In all of these cases, Gohil says, people who forego masks need to be extra vigilant.
Here are some dos and don’ts when masks aren’t an option:
- Avoid indoor public spaces and crowds if you have a medical or psychiatric condition that makes it impossible to wear a face covering.
- Be careful to maintain at least a six-foot distance in all situations, whether in a store, while swimming or walking outside.
- Exercise outdoors where it’s well ventilated and easier to maintain a safe distance from others.
- Prioritize wearing face coverings you can tolerate when social distancing isn’t possible, such as during carpool drop-offs, standing in lines, at work meetings or when traveling in a group.
- Use written means to communicate with people who are deaf or hard of hearing, or choose a mask with a clear area over the mouth to aid lip-reading.
- Wash your hands often and avoid touching your eyes, nose and mouth — or your infant and children — when out and about.
‘The onus is on us’
“Because some people can’t tolerate wearing a mask, that really puts the onus on the rest of us to wear face coverings,” says Gohil.
“When you look at it from a public health standpoint, if everybody does their part, we can stop the spread of this virus.” |
Scientific names: Citrus Longhorned Beetle (Anoplophora chinensis), Asian Longhorned Beetle (Anoplophora glabripennis), and Red-necked Longhorned Beetle (Aromia bungii)
What Are They?
Citrus (Anoplophora chinensis), Asian (Anoplophora glabripennis), and red-necked (Aromia bungii) long-horned beetles are large beetles whose larvae feed on and in the wood of trees. When the beetles mature to adulthood, they emerge through holes that weaken the trees further. They are extremely destructive to hardwood trees.
Are They Here Yet?
No. All three species have reached Washington at least once in warehouses or nurseries, but these were isolated incidents. The beetles came in with foreign nursery stock, which at the time was not regulated for these particular pests. With the increase of global trade and movement of plant materials via the Internet, the state still is at risk for new introductions of any of these species.
Why Should I Care?
Unlike our native long-horned beetles, which typically feed on dying trees, invasive long-horned beetles attack healthy trees, sometimes killing them. These beetles can harm more than 40 species of host trees. Letting these tree-killing beetles establish in Washington would be devastating to forests, parks, and yards.
What Are Their Characteristics?
Asian Long-horned Beetle
- Large, robust beetle
- Glossy black with irregular splotches of white on the wings.
- The antennae are quite striking with bands of black and gray.
- The feet and legs are decorated with a slate blue pubescence.
Citrus Long-horned Beetle
- Very similar in appearance to the Asian long-horned beetle because it is closely related. It is large, stout, and about 1-1 1/2 inches long with shiny black wings marked with 10-12 white round dots.
- Males are generally smaller than females, and have their abdomen tip entirely covered by the wings. The females’ abdomens are partially exposed.
- The males’ antennae are longer than the females’ in comparison to their body size.
Red-necked Long-horned Beetle
- 4/5-1 1/2 inches long.
- Body is almost entirely a glossy black except for a red thorax, between its head and abdomen.
- The female’s antennae are as long as its body, while the male’s antennae are about 1 1/2 times as long.
How Do I Distinguish Them From Native Species?
The Asian and citrus long-horned beetles have a few native lookalikes, including the banded alder borer (Rosalia funebris) and several species in the genus Monochamus. Monochamus species may be differentiated by the smaller size of their white spots, small white triangle marking on their upper backs, and the visibly rougher, bumpier, and less glossy texture of their exoskeletons. Please visit these links to help with identification:
- Iowa State University’s BugGuide: banded alder borer, spotted pine sawyer, and Oregon fir sawyer/whitespotted sawyer
- Washington State University Extension
- University of Vermont |
Types of Stones
Different stones are made of different materials and form under different circumstances. Generally, stones fall into four categories: calcium (oxalate/phosphate), struvite, uric acid, and cystine.
Calcium stones, as the name implies, are created by an abundance of calcium. Generally, dietary consumption of calcium can play a role, but there are other factors. Calcium stones are either calcium oxalate (the vast majority) or calcium phosphate, depending on the pH of the urine.
Struvite stones are caused by infections, typically of the urinary tract. As women are more prone to UTIs, they, unfortunately, suffer the bulk of struvite stone cases. Diet does not directly affect these stones, though eating food that limits infections may indirectly decrease the occurrence of these stones.
Uric acid stones are formed from uric acid crystals; generally, these stones occur in situations where there is an abundance of purines in the diet, as well as animal-based protein, particularly organ meats and shellfish.
Cystine stones are among the rarest kidney stone varieties; this is because they result from a specific genetic disorder. The disorder causes cystine, an amino acid, to leak from the kidneys, where it may clump together to form stones.
Causes of Kidney Stones
By and large, insufficient water intake is the main contributor to kidney stones of all varieties. This is because, regardless of anything else, water dilutes the urine and keeps minerals dissolved, while a lack of water makes urine more saturated with the deposits that form stones. With that out of the way, other factors include diets oversaturated with various vitamins and minerals, such as calcium and vitamin D, or with purines (contributing to hyperuricemia), as well as dysfunction of certain glands or organs. Additionally, chemotherapy may create kidney stones as a side effect. Different kidney stones have different causes and contributing factors.
Symptoms of a Kidney Stone
As you likely know, the primary symptom of kidney stones is an immense constant pain in the sides, abdomen, and lower back. Aside from that, kidney stones sometimes move, which can lead to urinary infections that may cause painful urination. Oftentimes, this urination will be pink, red, or brown in color. Some lesser known symptoms include fever, chills, restlessness, and nausea, all of which are associated with numerous other conditions; these can make it difficult to diagnose kidney stones properly.
Treatment Options for Kidney Stones
If you experience these symptoms, it’s important to see a doctor; your doctor will be able to diagnose the kidney stone and discuss treatment options. For smaller stones, it’s not uncommon to simply encourage their passage by increasing water intake and waiting; typically, in these circumstances, the stones will pass in a matter of days. Otherwise, treatments may involve medication, various other procedures, or surgery. If you’re not keen on potentially expensive medication or medical procedures, there are some natural treatments that may be able to help.
Ultimately, when it comes to clearing up kidney stones, citric acid is key. Citric acid, or more specifically citrate, has been found to break down calcium oxalate, which makes up the vast majority of kidney stones. This means that if you are looking to break down smaller stones, you should increase your intake of citrus fruits. |
[Image: 1932 El Rodeo, Labor Day Description]
The meanings of the name, and observances of, “Labor Day” differ greatly between the U.S. and the UC campus at Davis. In the larger U.S., Labor Day is, quoting Wikipedia, “a public holiday [that] honors the American labor movement and the contributions workers have made to the strength, prosperity, laws and well-being of the country.” Unions began to promote such a day in the late 19th century and, in 1887, Oregon was the first state to make it an official holiday. “By the time it became an official federal holiday in 1894, thirty U.S. states officially celebrated Labor Day.”
A curious and interesting aspect is that, starting in 1897, the same name was used at the University of California (then meaning Berkeley) to label a campus-wide day of voluntary labor devoted to campus improvement projects (Ann Scheuring, Abundant Harvest, p. 35).
Given what we know about the intensity of public sentiments for and against labor unions, it is not a stretch to guess that Berkeley’s Labor Day was a sideways negative comment on the larger and “real” Labor Day. |
'Godzilla' planet: How big can a rocky planet get?
A newly discovered rocky planet, 17 times the mass of ours, has been called the 'Godzilla of Earths.'
Boston — Scientists have just discovered the "Godzilla of Earths" — a new type of huge and rocky alien world about 560 light-years from Earth.
Dubbed a "mega-Earth," the exoplanet Kepler-10c weighs 17 times as much as Earth and it circles a sunlike star in the constellation Draco. The mega-Earth is rocky and also bigger than "super-Earths," which are a class of planets that are slightly bigger than Earth.
Theorists weren't actually sure that a world like the newfound exoplanet could exist. Scientists thought that planets of Kepler-10c's size would be gaseous, collecting hydrogen as they grew and turning into Jupiter-like worlds. However, researchers have now found that the newly discovered planet is rocky, Christine Pulliam, a spokeswoman with the Harvard-Smithsonian Center for Astrophysics, wrote in a statement announcing the find. [The Strangest Alien Planets Ever Found (Gallery)]
"This is the Godzilla of Earths!" the CfA's Dimitar Sasselov, director of the Harvard Origins of Life Initiative, said of Kepler-10c in a statement. "But unlike the movie monster, Kepler-10c has positive implications for life."
The discovery of Kepler-10c was presented today here at the 224th American Astronomical Society meeting.
The mega-Earth orbits its parent star once every 45 days. Kepler-10c is probably too close to its star to be hospitable to life, and it isn’t the only planet orbiting the yellow star. Kepler-10 also plays host to a “lava world” called Kepler-10b that is three times the mass of Earth and speeds around its star in a 20-hour orbit.
NASA’s Kepler space telescope first spotted Kepler-10c; however, the exoplanet-hunting tool is not able to tell whether an alien world it finds is gaseous or rocky. The new planet’s size initially signaled that it fell into the “mini-Neptune” category, meaning it would have a thick envelope of gas covering the planet.
CfA astronomer Xavier Dumusque and his team used the HARPS-North instrument on the Telescopio Nazionale Galileo in the Canary Islands to measure Kepler-10c's mass. They found that the planet is, in fact, rocky and not a mini-Neptune.
"Kepler-10c didn't lose its atmosphere over time. It's massive enough to have held onto one if it ever had it," Dumusque said in a statement. "It must have formed the way we see it now."
Scientists think the Kepler-10c system is actually quite old, forming less than 3 billion years after the Big Bang. The system's early formation suggests that, although the materials were scarce, there were enough heavy elements like silicon and iron to form rocky worlds relatively early on in the history of the universe, according to the CfA.
"Finding Kepler-10c tells us that rocky planets could form much earlier than we thought," Sasselov said in a statement. "And if you can make rocks, you can make life."
The new finding bolsters the idea that old stars could host rocky Earths, giving astronomers a wider range of stars that may support Earth-like alien worlds to study, according to the CfA. Instead of ruling out old stars when searching for Earth-like planets, they might actually be worth a second look.
It's also possible that exoplanet hunters will find more mega-Earths as they continue searching the universe. CfA astronomer Lars A. Buchhave "found a correlation between the period of a planet (how long it takes to orbit its star) and the size at which a planet transitions from rocky to gaseous," meaning that scientists could find more Kepler-10c-like planets as they look to longer period orbits, according to the CfA.
- The Search For Another Earth | Video
- Alien Planet Quiz: Are You an Exoplanet Expert?
- 7 Greatest Alien Planet Discoveries by NASA's Kepler Spacecraft (So Far)
Copyright 2014 SPACE.com, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. |
- What is dimensional formula?
- What is the formula for current?
- What is K value?
- What is the value of constant k?
- What is the Boltzmann constant in eV?
- What is dimensional formula of surface tension?
- What is dimensions and dimensional formula?
- Why is entropy J K?
- What is dimensional formula of resistance?
- What are the 7 fundamental dimensions?
- Where is Boltzmann constant used?
- What is dimensional formula of Boltzmann constant?
- What is K in Boltzmann’s formula?
- What is the dimensional formula of charge?
- What is the SI unit of charge?
- What is dimensional formula of time?
- What is r in pV nRT?
- What is the significance of Boltzmann constant?
What is dimensional formula?
The dimensional formula is a compound expression showing how and which of the fundamental quantities are involved in making that physical quantity.
The dimensional equation of a physical quantity is an equation equating the physical quantity with its dimensional formula.
What is the formula for current?
Current is usually denoted by the symbol I. Ohm’s law relates the current flowing through a conductor to the voltage V and resistance R; that is, V = IR. An alternative statement of Ohm’s law is I = V/R.
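To make the relationship concrete, here is a minimal Python sketch (our own illustration, not from the source; the function name is hypothetical):

```python
# Ohm's law: V = I * R, so the current is I = V / R.
def current(voltage_v: float, resistance_ohm: float) -> float:
    """Return the current in amperes for a given voltage and resistance."""
    return voltage_v / resistance_ohm

# Example: a 9 V battery across a 450-ohm resistor.
print(current(9.0, 450.0))  # 0.02 A, i.e. 20 mA
```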
What is K value?
The Coulomb constant, the electric force constant, or the electrostatic constant (denoted ke, k or K) is a proportionality constant in electrostatics equations, equal to (10⁻⁷ N·s²/C²)·c². In SI units it is equal to 8.9875517923(14) × 10⁹ kg·m³·s⁻²·C⁻², i.e., N·m²/C².
What is the value of constant k?
The symbol k is a proportionality constant known as the Coulomb’s law constant. The value of this constant is dependent upon the medium that the charged objects are immersed in. In the case of air, the value is approximately 9.0 × 10⁹ N·m²/C².
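As a worked example, here is a small Python sketch (ours, for illustration only) applying Coulomb’s law F = k·|q1·q2|/r² with this constant:

```python
K = 8.9875517923e9  # Coulomb constant in N*m^2/C^2

def coulomb_force(q1_c: float, q2_c: float, r_m: float) -> float:
    """Magnitude (in newtons) of the force between two point charges."""
    return K * abs(q1_c * q2_c) / r_m**2

# Example: two 1-microcoulomb charges 10 cm apart.
print(coulomb_force(1e-6, 1e-6, 0.1))  # roughly 0.9 N
```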
What is the Boltzmann constant in eV?
Boltzmann constant in eV/K: numerical value 8.617 333 262… × 10⁻⁵ eV K⁻¹; standard uncertainty: (exact); relative standard uncertainty: (exact).
What is dimensional formula of surface tension?
Surface tension (T) = force × length⁻¹ . . . (1). Since force = mass × acceleration, and acceleration = velocity × time⁻¹ = [L T⁻²], the dimensional formula of force is [M¹ L¹ T⁻²] . . . (2). Substituting (2) into (1), the dimensional formula of surface tension is [M¹ L¹ T⁻²] × [L⁻¹] = [M¹ L⁰ T⁻²].
What is dimensions and dimensional formula?
DIMENSIONS are the powers to which the fundamental quantities are raised to represent other physical quantities. DIMENSIONAL FORMULA is an expression in which dimensions of a physical quantity is represented in terms of fundamental quantities.
Why is entropy J K?
Thus, under appropriate conditions and definitions, the change in entropy is the amount of heat transferred divided by the temperature. It therefore has the units of J K⁻¹.
What is dimensional formula of resistance?
Therefore, resistance is dimensionally represented as [M¹ L² T⁻³ I⁻²].
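One way to check such results is to track the exponents of the base dimensions directly. The sketch below (our own illustration; the tuples hold exponents of M, L, T, I) derives the dimensions of resistance from R = V/I, treating voltage as energy per charge:

```python
# Dimensional formulas as exponent tuples over the base dimensions (M, L, T, I).
def dim_div(a, b):
    """Divide two dimensional formulas by subtracting exponents."""
    return tuple(x - y for x, y in zip(a, b))

ENERGY  = (1, 2, -2, 0)   # [M L^2 T^-2]
CHARGE  = (0, 0, 1, 1)    # [T I] (charge = current x time)
CURRENT = (0, 0, 0, 1)    # [I]

VOLTAGE = dim_div(ENERGY, CHARGE)        # energy per unit charge
RESISTANCE = dim_div(VOLTAGE, CURRENT)   # R = V / I
print(RESISTANCE)  # (1, 2, -3, -2), i.e. [M^1 L^2 T^-3 I^-2]
```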
What are the 7 fundamental dimensions?
In total, there are seven primary dimensions. Primary (sometimes called basic) dimensions are defined as independent or fundamental dimensions, from which other dimensions can be obtained. The primary dimensions are: mass, length, time, temperature, electric current, amount of light, and amount of matter.
Where is Boltzmann constant used?
In classical statistical mechanics, Boltzmann Constant is used to expressing the equipartition of the energy of an atom. It is used to express Boltzmann factor. It plays a major role in the statistical definition of entropy. In semiconductor physics, it is used to express thermal voltage.
What is dimensional formula of Boltzmann constant?
The Boltzmann constant is kB = R/NA, where R is the universal gas constant, whose value is 8.314 J/(K·mol), and NA is the Avogadro constant. The dimensional formula of the Boltzmann constant is [M¹ L² T⁻² K⁻¹].
What is K in Boltzmann’s formula?
Boltzmann’s entropy formula is S = kB ln W, where kB is the Boltzmann constant (also written as simply k), equal to 1.38065 × 10⁻²³ J/K. In short, the Boltzmann formula shows the relationship between entropy and the number of ways W in which the atoms or molecules of a thermodynamic system can be arranged.
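A minimal numerical sketch of the formula (ours, assuming W equally likely microstates):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def boltzmann_entropy(w: int) -> float:
    """Entropy S = k_B * ln(W) for W equally likely microstates, in J/K."""
    return K_B * math.log(w)

# Example: a toy system with 10^23 accessible microstates.
print(boltzmann_entropy(10**23))  # about 7.3e-22 J/K
```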
What is the dimensional formula of charge?
Electric charge is current multiplied by time (Q = I t); therefore, the electric charge is dimensionally represented as [M⁰ L⁰ T¹ I¹].
What is the SI unit of charge?
Coulomb. The SI derived unit of electric charge is the coulomb, which is defined as an ampere second.
What is dimensional formula of time?
The equations obtained when we equate a physical quantity with its dimensional formula are called dimensional equations. The dimensional equation helps in expressing physical quantities in terms of the base or fundamental quantities. For example, the dimensional equation of the time period of a wave is [T] = [M⁰ L⁰ T¹].
What is r in pV nRT?
Pressure is inversely proportional to volume: p = a/V, where a > 0 is a constant. The ideal gas law is pV = nRT, where n is the number of moles and R is the universal gas constant. The value of R depends on the units involved, but it is usually stated in S.I. units as R = 8.314 J/(mol·K).
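For instance, a short Python sketch (illustrative only; the helper name is ours) solving pV = nRT for the pressure:

```python
R = 8.314  # universal gas constant in J/(mol*K)

def pressure_pa(n_mol: float, temp_k: float, volume_m3: float) -> float:
    """Pressure in pascals from the ideal gas law pV = nRT."""
    return n_mol * R * temp_k / volume_m3

# Example: 1 mol of an ideal gas at 273.15 K in 0.0224 m^3.
print(pressure_pa(1.0, 273.15, 0.0224))  # about 101,000 Pa, roughly 1 atm
```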
What is the significance of Boltzmann constant?
The physical significance of k is that it provides a measure of the amount of energy (i.e., heat) corresponding to the random thermal motions of the particles making up a substance. For a classical system at equilibrium at temperature T, the average energy per degree of freedom is kT/2. |
In April of 1513, Spanish explorer Juan Ponce de León set foot on the stretch of land we call Florida today. He was looking for gold, and also for the fountain of youth, a miracle spring that at the time was believed to exist in the New World. Landing not far from today’s St. Augustine, de León examined the coast, found neither the gold nor the fountain, and moved on after naming the place “Florida.”
Over five hundred years later, historians are still pondering why he chose that name, and whether he called his find La Florida or La Pascua Florida. Turns out, the difference is significant.
According to some historical accounts of de León’s journey, his crew landed in the future Florida on Easter Sunday. For these very religious Spaniards, the day of Jesus’s resurrection was one of the most sacred holidays of the year. In Spanish, Easter Sunday is often called La Pascua de las Flores—the festival of flowers. So a prominent early theory states that de León named the new land La Pascua Florida in honor of Easter Sunday. “Their religiosity was abundant; they wore it on their sleeves in those days,” says Roger Chapman, history professor at Palm Beach Atlantic University, who traced the journey of de León’s crew. They always attributed things to God in one way or another, he adds. “It was part of their culture.”
But some historians questioned whether de León indeed set foot on Florida’s soil on the day associated with the resurrection. For example, one paper notes that the crew sighted the landmass on March 27, 1513, which was Easter Sunday—but wasn’t able to reach it until a few days later in early April. Plus, some historical texts originally listed de León’s sailing year as 1512, which was later proven to be incorrect. Had de León indeed travelled in 1512, he could have reached Florida on an Easter Sunday. But if he set sail in 1513, his arrival wouldn’t correspond with the holiday, some historians noted.
According to another hypothesis, the state earned its name because of its lush vegetation and beautiful blossoms. Chapman notes that writer Peter Parley, who composed an 1860 history and geography textbook, wrote that Florida “received its name from the abundance of wild flowers that flourished upon its soil.” De León arrived in the middle of spring, when Florida, now famous for its botanical beauty, was in full bloom. Smitten by the abundance of plants, flowers, and colors, the explorer might have called the land not La Pascua de las Flores, but rather La Florida—“the place of flowers.”
That theory certainly has merit. Florida boasts about 3,300 native species of plants, says Thomas Chesnes, professor of biology at the Palm Beach Atlantic University. That’s not counting over 1,000 more plant species that were naturalized here, thanks to the state’s geographical location.
“We are essentially a bridge between the temperate environment of the north and west, and the tropics and south,” Chesnes explains. That’s why so many plant species are able to thrive here, he adds, some blooming year-round and others seasonally, reaching their peak in spring or summer.
When de León arrived, flowers were likely everywhere. His crew didn’t even have to venture far from the ship. Just on the beach, they would have seen the bright yellow beach sunflowers and big purple beach morning glory. At night, large white moonflowers would open up. And if the crew members chose to explore further inland, they would have found more wildflowers, very different from what they had back home, Chesnes points out. Coming from Europe, seeing these lush and unusual tropical plants would have been impressive.
So does Florida owe its name to Christ’s resurrection, or its botanical beauty? The early chroniclers of de León’s journey chose the former point of view, but as American culture becomes more secularized, historians may be favoring the latter. The state certainly deserves to be named after its biodiversity. As Chesnes says, “Florida is an interesting melting pot of botany.” |
Biliteracy is an enrichment model that promotes bilingualism, English proficiency, and academic achievement for all students. Rather than provide instruction exclusively in Spanish, a biliteracy approach gives students a chance to develop literacy skills in both English and Spanish. The strategic linguistic resources in the Biliteracy Pathway, when used with ReadyGEN, support what the research shows—when students analyze similarities and differences in two languages, their skills can exceed those of monolingual children.
All language knowledge is an asset, not a deficit. Bilingual students acquire language differently than monolinguals, strategically using linguistic resources from both languages. The ReadyGEN Biliteracy Pathway provides a complete set of learning resources to help students develop literacy and linguistic skills in both languages.
A Performance-Based Assessment (PBA) appears at the end of each Biliteracy module in the same genre as in English (narrative, informative, or opinion). The Biliteracy PBA was designed to provide a meaningful comparison if students complete PBAs in both English and Spanish.
Authentic Spanish Texts
The trade books for each grade were chosen according to a range of criteria, including: |
You would never guess that a bunch of tiny little metal balls could turn into mind-blowing art. But as this video shows, when all those teeny balls are magnets, they stick together to make amazing shapes. They’re called Nanodots. First the hands make lots of hexagons, then stack them to make an “octahedron” – a 3D shape with 8 faces (just as an octagon is a flat shape with 8 sides). The octahedron is just one of 5 cool chunky shapes you can make where every face is the same shape with all equal edges: it has 8 triangles. Then the hands make a dodecahedron, which has 12 pentagons as faces. Watch to see what else the Nanodots can make!
Wee ones: How many sides does a triangle have? Hold your hands together to make a triangle hole with your fingers and thumbs!
Little kids: How many sides does a pentagon have? Bonus: If you’ve made 10 of the 12 pentagons to make the dodecahedron, how many more do you need to make?
Big kids: Each triangle in the octahedron has 7 Nanodots in the longest row, with 6 Nanodots above that, 5 above that…all the way to 1. How many Nanodots does one triangle have in total? Bonus: The opening shows 3 octahedrons already made. How many triangle faces do they have altogether?
The sky’s the limit: If each octahedron face has 7 Nanodots on the edge, then 6 in the next row up, and so on, how many dots in that triangle aren’t on the edge? See if you can figure it out without counting them in the picture!
Wee ones: Make a triangle with your hands! It has 3 sides.
Little kids: 5 sides. Bonus: 2 more pentagons.
Big kids: 28 Nanodots. Bonus: 24 triangle faces.
The sky’s the limit: Just 10 Nanodots. Only the 4 middle dots in the row of 6, plus the 3 middle ones above that, 2 above that, and finally the last one (the middle dot in the row of 3).
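For grown-ups who want to check the pattern behind these answers: the dots in each triangular face form a "triangular number," and the dots strictly inside a triangle with n dots per edge form a smaller triangle with n - 3 dots per edge. A tiny Python sketch (ours, not from the video) verifies both answers:

```python
def triangle_dots(n: int) -> int:
    """Dots in a triangle with n dots on each edge: n + (n-1) + ... + 1."""
    return n * (n + 1) // 2

print(triangle_dots(7))      # 28 dots in one octahedron face
print(triangle_dots(7 - 3))  # 10 dots strictly inside that face
```
|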
The Convention on the Rights of the Child is an international treaty specifically recognizing and protecting the rights of children. This means that all of the countries signing it have agreed on important rights that children must be guaranteed, so that they can grow up with access to education and health care, and can participate in the life and decisions of their families and communities.
The Convention is based on four major principles:
Non-discrimination: All children, everywhere in the world, have the rights described by the Convention, wherever they come from, and whatever their characteristics and situations.
The best interests of the child: When making decisions, adults must above all consider how children will be affected, and must do what is best for them.
The right to survive and develop: All children have the right to life, and governments shall ensure their survival and development.
Children’s views: Children have the right to express themselves freely on all subjects concerning them. Their views are to be taken into account in a manner appropriate to their age and maturity. |
Bible Crafts and Games About Anger
"In Your Anger Do Not Sin" Bible Crafts and Games
"Be ye angry, and sin not:" let not the sun go down upon your wrath:" Eph. 4:26, KJV
In this lesson children learn that anger is an emotion that everyone experiences. It is not a sin to be angry, but it is a sin to use your anger to hurt others or yourself, or to destroy things. They learn from Jesus’ example how to handle anger in a constructive way. The teacher presents seven mad monsters and gives examples of things children can do to take control before the mad monsters do.
Mad Monsters Bible Verse Coloring Sheet
1. Before class print out the coloring sheets and make copies.
(The patterns for this craft are available to members on The Resource Room in both KJV and NIV.)
2. In class give your children the coloring sheets and crayons or colored pencils. As they work, ask them why they think the mad monsters are mad. Ask your students if they ever get mad and why they might get mad.
If you don’t have a lot of students, you can use the black and white Mad Monster Posters below. Have your children color them and then use them during the lesson.
1. Before class print out the pattern onto card stock (Heavy Paper). (The patterns for this craft are available to members on The Resource Room in both KJV and NIV.)
2. Cut pointers from another piece of card stock and punch holes at the ends.
3. In class have your children color the Mad-O-Meters.
4. Punch a hole in the center of the Mad-O-Meters and attach the pointers with brass brads.
5. Instruct your children to take the Mad-O-Meters home and use them when they get angry to help them remember not to sin when they are angry. Tell them to try to find solutions to their problems before the meter reaches Mega Mad.
I wanted to let you know that I did the Anger lesson with the Hills Brothers’ containers and the kids loved it. Some made 2 & 3 monsters. I planted grass in them 2 weeks ahead of time so they had a full head of hair. I meant to take pictures but it got so hectic at the end of class I forgot to do it. Again thank you for the great lesson and craft. Vicki
Growing Mad Monster Craft
Children use this Growing Mad Monster to help them control their anger before it controls them.
What you will need:
Planter Container or Can - The container pictured is a red Hills Brother’s Cappuccino container.
Red Paint (Optional) and Red Paper
Large Wiggle Eyes
Liquid Chalk Markers or Permanent Markers
Glue and/or Tape
Fast Growing Seeds Such as Beans, Sprouting Seeds, Clover, Grass, etc.
How to Make:
1. Before class print out the arm patterns (Available to members) onto red paper or card stock and cut them out. If you are using a can or other container, you can tape red paper onto the can or spray paint them red before class. Or just leave them their original color. Use paper that matches the container color for the arms.
2. Print out the back labels and cut them apart. (Available to members) The labels read, "If you feel like your mad monster is beginning to control you, take charge before it does. Use this Growing Mad Monster to redirect your anger by styling its hair. Once you have calmed down talk about why you were angry and try to resolve the conflict in a calm manner. ©2015 - www.daniellesplace.com (Please include the copyright if you use these craft labels.)
3. In class have your children draw a mad face on the container using the chalk markers or permanent markers, glue on wiggle eyes, and tape the arms to the sides of the container.
4. When they are done have them fill the container with dirt and add seeds or a plant.
What Does an Angry Person Look Like?
Have your children pose as an angry person. A child may pretend to be: shouting, pointing his finger, clenching his teeth, making his hands into fists, stomping his feet, tensing up, yelling, crying, kicking something, hitting something, rolling on the floor, frowning, or punching.
Mad Monster Game
Use this printable board game to help your children talk about their anger and learn how to deal with it in a positive way.
Children learn to control their 'mad monsters' with this simple game. When a child lands on a monster he gives an example of why he got angry when that type of 'mad monster' was controlling him. For example, if a child lands on the "Gimme Jimmy" Monster, he might say, "I got mad when my mom wouldn't buy me the toy I wanted. I yelled and whined until my mom threatened not to take me to the store any more." The child should then say how he could have handled his anger better.
There are seven monsters: "It's Not Fair" Fergus, Hungry Hank, "I Don't Wanna" Walter, Owie Howie, Frustrated Fred, Gimme Jimmy, and Jealous Jill.
The game board measures 14" x 10" and uses two sheets of card stock. It comes with directions, printable monster cards, and suggestions about how children can deal with their anger in positive ways.
This game is available to members on The Resource Room or as an Instant Download.
Eph. 4:26 Bible Verse Review Activity Sheet
Children try to guess the letters that spell out the Bible verse before the sun goes down. Each time they guess a letter and it is wrong the sun strip is moved down one space. If the sun goes down before they guess the words to the verse, they lose.
I taught this lesson, "In Your Anger, Do Not Sin", a few weeks ago. It was right after the ruling on gay marriage, the SC shooting, and doing away with the rebel flag. I had it planned way before all this happened and it was just a blessing from God to teach that day. The kids loved it and it helped them to understand some things. I teach both 4 & 5 year old children for Sunday School and 1st-2nd grade for Children's Church. It went over so well. One of the other teachers borrowed it to teach his class of 3rd-6th grade. It gave them some understanding about how to deal with what was going on around them. They loved playing the game as well, and we did the funny monster planter too! I used herb seeds so that it would be something useful in the home. Most kids don't get to keep their crafts if they aren't useful. Thank you for all the wonderful lessons! I love them! |
How to create educational experiences that support how people learn
Learning is an active and social process. For deep learning to occur, instruction needs to access and connect to prior knowledge and give learners choices and responsibilities in their own learning experience. The learning cycle is a research-based instructional model that focuses on ordering phases of an activity to support learning. The model presented in this session is based on a five-phase cycle: invitation, exploration, concept invention, application, and reflection. The session itself—like all BEETLES sessions—is also based on the learning cycle model, so participants experience the model at the same time as they’re learning about it. The learning cycle is a transformative tool for curriculum design, and this session is highly recommended if you’re planning on a program initiative in staff curriculum design or re-vamping.
Goals for this session are:
- Discuss the benefits of sequencing different stages of an activity strategically to achieve engagement, meaning-making, and in-depth learning.
- Learn about an effective model for instruction known as “the learning cycle,” and gain the ability to make learning cycle-based instructional decisions.
- Learn how the learning cycle can be applied to short, medium-length, and longer field experiences.
- Practice implementing the learning cycle by applying it to the planning of a short field experience for students.
View and Download Materials:
Notes: Professional learning videos are intended to support program leaders, not as online learning experiences for field instructors. This video was edited to focus on how the program leader leads the session; the actual session is much more participant-focused, and participants spend most of the session exploring and discussing ideas with their peers. The script and this video don’t always agree. We recommend you follow the script if you notice a discrepancy.
The 100th day of school is more than just a milestone worth noting – it’s a rich learning opportunity where all the core competencies can come into play. Celebrating the 100th day has become a classroom tradition in many schools, especially in the primary grades. Teachers and students in classes all over the province, perhaps the country, celebrate their 100th day at school.
What is the 100th Day of School?
The 100th day of school is literally the 100th day of the school year. From the very first day of school, many classes keep track of the number of days they’ve been in school in anticipation of the 100th day.
Days are often tracked by counting straws (or any item, for that matter), ten of which become a “ten bundle,” providing ongoing opportunities for counting by tens and ones and for developing number and place value concepts. Throughout the day, students work on a variety of engaging math, art, and thinking activities that focus on different concepts of the number 100. The 100th day has become a vehicle to invite children to think, communicate, make choices, problem solve, reflect on their strengths and abilities, and feel good about learning and challenging themselves; in addition, the core competencies can be embedded naturally in the curriculum.
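The ten-bundle routine is, at bottom, place-value arithmetic. As a throwaway illustration (a minimal Python sketch of what the class does by hand with straws):

    day = 73  # any day of the school year
    bundles, singles = divmod(day, 10)
    print(f"Day {day}: {bundles} ten-bundles and {singles} single straws")
    # Day 73: 7 ten-bundles and 3 single straws

On the 100th day the singles run out and a tenth bundle appears, which is exactly the milestone the class is counting toward.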
By the time a child reaches grade three, he or she has conceivably celebrated the 100th day in three consecutive school years. These children will have been engaged in developmentally appropriate activities that have them working with the number 100, counting to 100, making and sharing collections of 100, and forming other meaningful connections. By the time they enter grade three, children are ready for new challenges.
A 100th Day Learning Story
In this learning story, I share my past experiences and observations celebrating the 100th day of school with older students and their families.
Sharing a quality piece of literature is a wonderful way to begin a new learning journey. On our 95th day of school, to set the stage and get ready for our 100th day, I read aloud Margery Cuyler’s heart-warming picture book, 100th Day Worries. This picture book integrates many big ideas and important themes and messages to think about, talk about, write about, and explore.
The students are then invited to think “outside the box”. We first talk about what this saying means. I ask the children: “Have you ever heard the phrase ‘think outside the box’?” “What do you think it means?” For some, this idea is completely new; others can move straight to discussing its meaning. The children talk about the concept of “thinking outside of the box” and share their ideas and examples:
“It means you have to look ‘outside of the box’ and try to think of things beyond the obvious.”
“You want us to think imaginatively and come up with a different, unusual, out of the ordinary collection.”
The students seem to understand their new challenge. I want them to think of new ideas instead of the traditional or expected ideas. I want them to think of a creative or unique way to show 100 – something beyond the usual collection of 100 items.
Unique 100 Day Collections
Every year I have noticed that some students are excited about the possibilities and begin to brainstorm ideas, ready and willing to take on the challenge. Others find the idea too abstract and out of their reach, but with time to talk, we dig deeper and together clarify the learning intentions; the “open-endedness” of the project allows every child to succeed.
I sometimes make this a home project, inviting children to work with their families and enjoy the experience together. I send home an invitation outlining the criteria. I know this will challenge some families, but I also know it will be a welcomed home project for many. I believe that involving parents in meaningful and enjoyable activities with their children is an important condition for moving a child’s learning forward. My hope is to engage families in rich conversations and work at home. It becomes a learning experience not only for the child, but also for his or her family members.
For most, the idea of thinking “outside of the box” is new in the context of the 100 day collection. But the idea, or product, has value in a variety of ways and contexts. This task is all about learning. The students’ 100 Day thinking-outside-of-the-box collections challenge them to create ideas that are novel and new.
In the BC Ministry’s outline for the Thinking Competency Profiles, it states “The idea or product may also have value in a variety of ways and contexts – it may be fun, it may provide a sense of accomplishment, it may invite problem-solving, it may be a form of self-expression, it may provide a new perspective that influences how people think about something or the actions people take.” This is exactly what it is.
The engagement and success of the children taking on this challenge are impressive. As the week progresses, children share their ideas and the buzz is palpable; they can hardly wait for the day to present their projects.
On the day of presenting their “collections”, students are also asked to talk about how they came up with their ideas and how they went about putting them together. The Communication Competency Profiles are successfully being addressed and met.
Everyone’s presentation is then captured on video and uploaded to the child’s digital portfolio. The learning is made visible and communicated out; students also reflect on their projects, assessing their work against all the core competency profiles.
Here are some students’ unique “collections” from years past:
100 Hellos and Flags – Adam shared his video of him saying “hello” in 100 different languages as he pointed to the 100 different country flags. Brilliant!
100 Shaped Cookie – Ishan made a large cake-sized cookie out of 100 grams of cookie dough with 100 dots of icing, shaped like the number 100.
What city is exactly 100 km from my house? – Lauren researched and calculated that the Vancouver Island city of Duncan is 100 km away from her house in Surrey. She made a map and showed the route, using 100 push pins to outline it.
100 Faces – Mirin made a very entertaining and creative YouTube video of herself making 100 different faces and posted it on her class blog page. Her video played for exactly 100 seconds.
The Kármán Line – one student shared images on the computer of the Kármán line, the imaginary boundary between the Earth’s atmosphere and outer space, which lies 100 km above sea level.
100 Moles of Water – another student shared the scientific concept of “a mole,” the unit of measurement used in chemistry. I don’t know if his calculations were accurate, but it sure was interesting. (A quick check of the numbers appears after this list.)
100 Watt Light Bulb – and a student brought a 100-watt light bulb and challenged her classmates to some math calculations using the information on the light bulb box, for example, how many 100-watt light bulbs it would take to light 100 days of darkness.
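For what it's worth, the mole collection is easy to sanity-check: water's molar mass is about 18 g/mol, so 100 moles is a surprisingly modest amount. A quick check (Python, standard constants):

    MOLAR_MASS_WATER = 18.015  # g/mol
    AVOGADRO = 6.022e23        # molecules per mole

    grams = 100 * MOLAR_MASS_WATER
    print(f"100 moles of water is about {grams / 1000:.1f} kg (roughly 1.8 L)")
    print(f"...containing about {100 * AVOGADRO:.2e} molecules")

So a large water bottle would hold the whole collection, while containing an unimaginably large number of molecules.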
Here are some former students’ videos:
I can write an engaging one hundred page story:
I can use one of our math challenge tasks to show a 100 value word in celebration of our 100th day at school:
I can share all the different ways 100 is part of our lives:
Like many rich and meaningful events and experiences that happen in classrooms, the 100th day can become a vehicle to invite children to think, communicate, make choices, problem solve, reflect on their strengths and abilities, and feel good about learning and challenging themselves. It is obvious that even the Personal and Social Competency Profiles are embedded into the learning journey as important foundations to helping students become confident, life-long learners.
Our 100th day celebrations and learning have also included many different math activities and games focused on different concepts of the number 100.
Other 100 Day Activities and Tasks
I have invited and engaged students to work on other tasks and investigations, whole-class projects as well as tasks that the children could choose to work on alone or with a classmate throughout the week.
100 Lego Pieces – A Building Challenge
This challenge began with all of us sitting in a circle on the carpet. As I dumped out our 3 large baskets of Lego, the children’s excitement and anticipation was obvious. With bags in hand, children were instructed to count out ten Lego pieces. The process continued in a game-like fashion, with me asking problem-solving questions and the students calculating the multiples-of-ten answers that directed them to choose their Lego, working to fill their bags with 100 pieces. Once their bags were filled, their task was to use their 100 pieces to build a structure keeping three criteria in mind: it had to be a single, 100-piece Lego structure that could stand on its own; it had to be sturdy so it would not break when transported; and, once complete, it had to be given a name and a function.
As I watched the children working around the room, I noticed some quickly moved to the task. Some began sorting their Legos by colour and size, others just dumped their bag and proceeded to build. Many carefully focused and constructed their structures; some were simple, others more complex. I noticed two or three students who simply could not get started; they struggled with coming up with an idea and became frustrated with the task. Some built structures that used only a small number of Lego pieces and then wanted to stop; others were tempted to go to the Lego baskets and exchange their pieces.
Halfway through, we stopped to debrief and discuss the strategies students were using and the challenges they were facing. We talked about the qualities that would help someone succeed at such a task: perseverance, not needing to be perfect, letting go of an idea, making changes, and a positive attitude, to name a few. We also talked about professions that require such focus and commitment to solve a problem and work under certain constraints and expectations. Our conversation was relevant and meaningful. Re-energized, the students went back to work.
100-Year Hopes and Promises
Children discussed and shared their thoughts about some of the challenges, problems, and issues people in our world face today. Some responded to this task on their blog pages: In one hundred years, what would be your hope for our world, for the people and all living things? In 100 years I hope . . .
. . . there will be no wars and there will be peace in the world and countries will share their creations to make us all equal and have the same opportunities.
. . . that people won’t be homeless. They won’t be hungry, and they will have enough money to live like those who do.
. . . that people all around the world think about the Earth and help keep it clean by doing the things that they know can make a difference.
100 Day Investigations
Children worked together using iPads to research and complete this inquiry task, making lists and charts to share with the class:
Think about. Identify. Write about and illustrate things you can find in our world that are 100 years old or older.
Castles, bridges, Disneyland, turtles, planets, tea bags, escalators, cellophane, instant coffee, windshield wipers, crossword puzzles, parachutes, traffic lights, pyramids, trees, furniture, books, paintings, countries, and people.
Telephones but not televisions. Airplanes but not jets. Movies but not sound or colour. Ovens but not microwaves. Board games but not video games. Phones but not cellphones. Wooden toy blocks but not Lego.
This task inspired one student to work on an independent inquiry about the Hundred Years’ War, and he presented a piece of his project to the class.
100 Day Poems
Some children also chose to write 100 day poems:
Is there a poet inside you? Write a poem to celebrate our 100 days. Be creative. Have some fun with words and images.
Hip, hip, hooray! It’s our 100th day! Here at school, there’s lots to do. Crafts to make. Games to play.
Ask a question. Research why? Design a 100th day I spy?
Roll a dice? Investigate? Build, or paint, or calculate?
All to 100! Hip, hip, hooray!
Reflections on 100th Day Activities
As I reflect on past 100 day activities and celebrations that I have shared and enjoyed with students and their families, some very important learning intentions and goals can be identified. Children explored the world around them and communicated their experiences and ideas through a variety of media and means. They inquired into topics that interested them and related to their lives and experiences. Many used technology to present their collections and 100 day tasks.
Students connected and engaged with others, sharing and developing ideas. They acquired, interpreted, and presented information. They collaborated with each other and their families to plan, carry out, and accomplish a goal. Students told about their experiences, reflected on, and shared what they learned.
To all those teachers and students who recognize and celebrate their 100th day in school, I invite you to share your learning stories and how you have successfully embedded the core competencies into your learning intentions and curriculum activities, engaging your students in meaningful and relevant learning experiences. Bravo!
Happy 100th day!
What Do You Know About Your Heart?
posted by The Live Better Team | December 25, 2017
The heart is in charge of providing oxygen and nutrients throughout the body by pumping blood. It beats about 100,000 times per day for the average person, pumping about 2,000 gallons of blood in the process. You probably don’t pay much attention to your heart as long as it’s working, but it’s good to know a little more about your heart’s structure and function.
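Both figures are easy to sanity-check with quick arithmetic. A rough sketch, assuming a typical resting rate of 70 beats per minute and a stroke volume of about 70 mL per beat (illustrative averages, not exact values):

    beats_per_minute = 70    # assumed typical resting heart rate
    stroke_volume_ml = 70.0  # assumed blood pumped per beat

    beats_per_day = beats_per_minute * 60 * 24
    litres_per_day = beats_per_day * stroke_volume_ml / 1000
    gallons_per_day = litres_per_day / 3.785

    print(beats_per_day)           # 100800 -- close to the quoted 100,000
    print(round(gallons_per_day))  # ~1864  -- close to the quoted 2,000 gallons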
Every time your heart beats, it pumps blood through a system of blood vessels called the circulatory system. These vessels provide the route for oxygen and other nutrients to reach the rest of your body, and also remove waste. There are three primary types of blood vessels: arteries, which carry blood away from the heart; veins, which carry it back; and capillaries, the tiny vessels where oxygen, nutrients, and waste are exchanged with the body’s tissues.
Let’s go over some basics of the heart’s structure, both inside and outside the organ itself:
Outside: The heart is located under the rib cage, slightly to the left of the breastbone and between the lungs. It’s made of muscle, with strong walls that contract to pump out blood, and a surface covered in coronary arteries, which supply oxygen-rich blood directly to the heart. Major blood vessels entering the heart are the superior vena cava, the inferior vena cava and the pulmonary veins. Primary arteries leaving the heart are the pulmonary artery and the aorta.
Inside: The heart has four chambers, and is a hollow organ inside. A muscular wall called the septum separates it into a left and right side, and each of these is divided into a top and bottom chamber (called atria and ventricles, respectively – the atria receive blood from veins while ventricles pump blood into arteries).
As blood leaves each heart chamber, it passes through one of four valves with a set of flaps designed to stop blood from flowing the wrong way.
The right and left sides of the heart work together. The right side brings oxygen-depleted blood into the right atrium, moves it to the right ventricle, and eventually passes it to the lungs, where the blood can deposit waste products (to be exhaled as carbon dioxide) and refill its oxygen supplies.
From the lungs blood flows into the left atrium, through the valve to the left ventricle, and out through the aorta to send oxygen-rich blood back out into the body.
The heart is made of tissue that, like the rest of our body, requires oxygen and nutrients, and while its chambers are constantly pumping blood, none of that blood remains in the heart. It must receive its own supply of blood, and it does this via a network of arteries called the coronary arteries. There are two major coronary arteries: the left main coronary artery and the right coronary artery.
The heart “beats” when the atria and ventricles contract and relax in alternation. This is powered by the heart’s electrical system, with specialized cells that initiate an electrical impulse through the walls of the atria, causing muscle contraction. A cluster of cells between the two chambers slows the electrical signal so the atria can finish contracting before the ventricles begin.
At rest, the heart will beat around 50 to 99 times per minute. Factors like exercise, emotions, fever and certain medications may cause it to beat faster.
If you have questions about your heart, or you’re concerned that it might not be working the way it should, talk to your doctor to learn more.
On this day in 1783, future President George Washington, then commanding general of the Continental Army, summons his military officers to Fraunces Tavern in New York City to inform them that he will be resigning his commission and returning to civilian life.
Washington had led the army through six long years of war against the British before the American forces finally prevailed at the Battle of Yorktown in 1781. There, Washington received the formal surrender of British General Lord Charles Cornwallis, effectively ending the Revolutionary War, although it took almost two more years to conclude a peace treaty and slightly longer for all British troops to leave New York.
Although Washington had often during the war privately lamented the sorry state of his largely undisciplined and unhealthy troops and the ineffectiveness of most of his officer corps, he expressed genuine appreciation for his brotherhood of soldiers on this day in 1783. Observers of the intimate scene at Fraunces Tavern described Washington as “suffused in tears,” embracing his officers one by one after issuing his farewell. Washington left the tavern for Annapolis, Maryland, where he officially resigned his commission on December 23. He then returned to his beloved estate at Mount Vernon, Virginia, where he planned to live out his days as a gentleman farmer.
Washington was not out of the public spotlight for long, however. In 1789, he was coaxed out of retirement and elected as the first president of the United States, a position he held until 1797.
Exoplanets offer a chance to teach scientists about how worlds are born. The recent explosion in exoplanet discovery has turned up a wide array of planets, all showing different pathways to formation. Analysis of a new discovery known as Kepler-107c offers something never seen before: a planet composed mostly of iron, which scientists estimate could make up as much as 70 percent of its mass.
Scientists from Italy's National Institute for Astrophysics (INAF) and the University of Bristol have been studying the exoplanetary system Kepler-107 with a telescope in La Palma, Spain, an ideal location for observatories due to its remote island setting and mountainous height.
Discovered by NASA in 2014, the Kepler-107 system doesn't seem to be following the commonly accepted understanding of how a solar system works. The standard model is humanity's own system: The inner planets are rocky and solid, outer planets are gaseous. This pattern, influenced by how close planets are to their star's heat, creates another pattern: Planets drop in density as they're farther away from their star (although they pick up again when they get really far out and freeze solid).
Kepler-107 doesn't seem to be playing by these density rules. Kepler-107b, the innermost planet in the system, has a density similar to Earth's. Considering that the Kepler-107 system has a star comparable to the Earth's sun, that makes sense. But then comes the next planet out, Kepler-107c, which appears to be at least twice as dense as 107b, a highly unusual jump.
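Bulk density is just mass divided by volume, so the comparison is straightforward to reproduce. A minimal sketch, using approximate mass and radius values that have been reported for the two planets (treat these inputs as illustrative, not definitive):

    import math

    def bulk_density_g_cm3(mass_earths, radius_earths):
        """Bulk density in g/cm^3 from Earth-relative mass and radius."""
        M_EARTH_G = 5.972e27   # grams
        R_EARTH_CM = 6.371e8   # centimetres
        mass = mass_earths * M_EARTH_G
        volume = (4 / 3) * math.pi * (radius_earths * R_EARTH_CM) ** 3
        return mass / volume

    # Approximate published values, assumed here for illustration:
    print(round(bulk_density_g_cm3(3.5, 1.54), 1))  # Kepler-107b: ~5.3, Earth-like
    print(round(bulk_density_g_cm3(9.4, 1.60), 1))  # Kepler-107c: ~12.6, over twice as dense

Two planets of nearly the same size but wildly different densities is the anomaly the collision hypothesis tries to explain.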
Scientists suspect that its density comes from a 70 percent iron core. How'd it get there? Through an intensely violent head-on collision at the planet's formation, although a series of smaller collisions is also a possibility.
"Giant impacts are thought to have had a fundamental role in shaping our current solar system," explains Bristol's Dr. Zoe Leinhardt, a computational astrophysicist and coauthor of the study's paper, in a . "The moon is most likely the result of such an impact, Mercury's high density may be also, and Pluto's large satellite Charon was likely captured after a giant impact but until now, we hadn't found any evidence of giant impacts occurring in planetary systems outside of our own.
"If our hypothesis is correct, it would connect the general model we have for the formation of our solar system with a planetary system that is very different from our own."
There are similarities to Kepler-107c in our own solar system: Mercury, the closest planet to the sun, has a metallic core that accounts for around 85 percent of the planet's radius.
Planet-size impacts are suspected to have occurred in our own solar system as well. The origin of Mercury's metallic core, for example, is still unknown. As recently as last year, scientists have suggested that a giant impact could have been responsible.
The distant Kepler-107c, around 1,670 light-years from Earth, might be able to inform scientists about the formation of Earth's nearby neighbor. When the European Space Agency's BepiColombo mission finally reaches the tiny, hot planet, it might be able to answer questions that resonate throughout the universe.
Could chocolate help our kids become better global citizens? Believe it or not, it can. It all depends on how the cocoa is grown, how it’s bought and sold, and what our children are taught about this favorite treat. When the cocoa is sourced one way—from organic farms and bought on “fair trade” terms—chocolate can actually help our children grow into better-informed, more conscientious global citizens. Produced another way, chocolate might offer nothing more than empty calories for your family and trouble for cocoa-growing communities in tropical countries.
How can children learn these lessons from chocolate? Equal Exchange, a pioneer in fair trade, introduced a school curriculum focused on the subject with an emphasis on the small-scale farmers who grow most of the world’s cocoa—chocolate’s key ingredient. Included in the curriculum are inspiring firsthand accounts from farmers and fair traders, photos from cocoa-farming communities, role-playing games, hands-on projects, exercises for kids to start their own co-operative, as well as math, art, and social studies, all designed to meet national teaching standards.
The 16-unit curriculum, appropriate for grades four through nine, covers:
- Where food comes from, hunger, child labor
- What is fair trade? How is cocoa grown, bought, and sold?
- What is cooperative economics? How do cooperatives promote fairness?
- Activities to identify problems, brainstorm solutions, and carry out projects
Equal Exchange (www.equalexchange.com) works with schools nationwide through its unique fundraising program, in which schools, to raise money for their own programs, sell fair-trade products instead of wrapping paper or magazine subscriptions.
What Is Fair Trade?
Fair Trade is an equitable trading partnership in which companies negotiate directly with the growers and producers of products such as coffee, chocolate, and sugar. It contributes to sustainable development by securing the rights of otherwise marginalized producers and workers. Companies agree to pay a price and follow procedures that meet the needs of small growers, regardless of fluctuations in world markets.
Trogloxenes, troglophiles and troglobites call different parts of the cave home. The environment at the mouth of the cave differs greatly from the environment deep inside the cave. A cave has several zones.
The entrance zone environment is closest to the environment above ground. It receives sunlight and has variable temperatures and green plants. Many animals like raccoons or bears utilize this space to eat their food, sleep or nest. In the entrance zone, you'll find organisms like moss, ferns, owls, snails and salamanders.
Venture a bit farther into the cave to enter the twilight zone. In the twilight zone, there's less light, so plants don't really grow there. The temperature remains a bit more constant but may still fluctuate in conjunction with weather aboveground. Organisms living in the twilight zone need moisture and coolness to survive. Here, you'll find the habitats of many trogloxenes, including moths, bats, spiders, millipedes and mushrooms. The animals found in the twilight zone usually leave and enter the cave at will.
Travel even deeper into the cave to experience the dark zone. In the dark zone, there is no light whatsoever. The temperature remains constant. Troglobites live in the dark zone. These organisms have undeveloped eyes, little pigment, and long antennae because they've adapted to live in this environment.
How do the organisms living in the dark zone survive? What do they eat? Read on to find out.
A-level Applied Science/Colour Chemistry/Paint/Resin
Typical binders include synthetic or natural resins such as acrylics, PVA, polyurethanes, polyesters, melamines, epoxy, or oils. There are different kinds of binders: those that simply "dry", and those that undergo polymerisation reactions. Binders that dry form a solid film when the solvent evaporates. Those which polymerise form irreversibly bound networked structures, which will not redissolve in the solvent.
In oil-based paint, curing takes the form of oxidation, for example oxidation of linseed oil to form linoxin to create a varnish. Such oils are called siccative oils. 'Boiled' linseed oil has been treated to make it oxidise faster. Solvents and siccative catalysts can be added for this purpose.
Other common cured films are prepared from crosslinkers, such as polyurethane or melamine resins, reacted with acrylic polyester or polyurethane resins, often in the presence of a catalyst which serves to make the curing reaction proceed more quickly or under milder conditions. These cured-film paints can be either solvent-borne or waterborne.
Gloss paints contain alkyd resins. The monomers react to form a branched polyester. Typical monomers are benzene-1,2-dicarboxylic (phthalic) acid and propane-1,2,3-triol (glycerol).
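Schematically, the condensation can be sketched as a simplified linear repeat unit (in reality glycerol's third –OH group also reacts, which is what produces the branching):

    n HOOC–C6H4–COOH + n HOCH2–CH(OH)–CH2OH
        → [–OC–C6H4–CO–O–CH2–CH(OH)–CH2–O–]n + 2n H2O

Each ester linkage forms with the loss of one molecule of water; reaction at the third hydroxyl group crosslinks the chains into the branched polyester.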
Emulsion paint is a water-based emulsion of polymer made by addition polymerisation of monomers such as ethenyl ethanoate (vinyl acetate, PVA monomer) and/or propenoate (acrylic) esters. Americans call this latex paint, although latex rubber is not an ingredient. When the water evaporates, the polymer particles coalesce into a solid film. The polymer itself resists water (and typically some other solvents). Residual surfactants in the paint, as well as hydrolytic effects with some polymers, cause the paint to remain susceptible to softening and, over time, degradation by water.
Epoxy resin paints are highly chemically resistant and tough.
Nitrocellulose resin is used in car touch-up spray paint and in wood finishing.
Bitumen-based paints are highly water resistant.
- Trees continue to keep our air supply fresh by soaking up carbon dioxide and generating oxygen.
- The volume of oxygen made by an acre of trees each year equals the amount consumed by 18 people yearly. One tree generates almost 260 pounds of oxygen each and every year.
- One acre of trees eliminates up to 2.6 tons of carbon dioxide every year.
- Shade trees will make buildings up to 20 degrees cooler during the summer.
- Trees decrease air temperature by evaporating water inside their leaves.
- The cottonwood tree seed is the seed which stays in flight the longest. The little seed is encompassed by ultra-light, white fluff hairs which will carry it in the air for a few days.
- In a single year, an acre of trees can take in as much carbon as is created by a car driven approximately 8700 miles.
- Trees supply shade and shelter, decreasing yearly heating and cooling expenses by 2.1 billion dollars.
- The typical tree in a city area survives just about 8 years!
- A tree doesn't reach its most productive stage of carbon storage for about ten years.
- Trees decrease noise pollution by serving as sound barriers.
- Tree roots strengthen the soil and stop erosion.
- Trees enhance water quality by slowing and filtering rain water in addition to protecting aquifers and watersheds.
- Trees shield you from the downward fall of rain, sleet, and hail, in addition to reducing storm run-off and the potential for flooding.
- Trees supply food and shelter for animals.
- Trees situated alongside roads behave as a glare and reflection control.
- The death of one 70-year-old tree would return over 3 tons of carbon to the atmosphere.
Acorn Tree Care Learning Center
Human response to trees goes well over and above merely noticing their splendor. We feel peaceful, calm, restful, and tranquil inside a grove of trees and shrubs. We're “at home” there.
The soothing effect of nearby trees and city greening can substantially decrease office stress levels and fatigue, relax traffic, and even reduce the recovery time needed following surgical treatment. Trees also can minimize crime. Apartment buildings with high amounts of green space have lower crime rates than nearby apartments with no trees.
The prominence, strength, and endurance of trees provide them with a cathedral-like value. Due to their potential for long life, trees are often selected and planted as living memorials. We sometimes become personally attached with trees that we, or those we love, have grown.
The powerful tie between individuals and trees is usually obvious when community residents speak out against the elimination of trees to expand streets or move to save an especially large or historical tree.
Even when trees are situated on a private lot, the advantages they supply can reach well out into the surrounding community. Furthermore, large-growing trees can come into conflict with power lines, views, and buildings that are past the bounds of the owner's property. With appropriate selection and upkeep, trees can easily enrich and function on one property without infringing on the rights and privileges of neighbors.
City trees frequently serve a number of architectural and engineering functions. They offer privacy, highlight views, or screen out undesirable views. They decrease glare and reflection. They direct walking traffic. Trees offer background to and also soften, complement, or improve architecture.
Trees provide natural elements and wildlife habitats into city surroundings, all of which improve the quality of life for residents of the town.
Trees affect the environment in which we live by moderating local climate, enhancing air quality, decreasing storm water run-off, and sheltering wildlife. Neighborhood climates are moderated from intense sun, blowing wind, and rainwater. Radiant sunshine is soaked up or deflected by foliage on deciduous trees during the summer and is only filtered by limbs of deciduous trees during winter. The larger the tree, the better the cooling effect. By making use of trees in the metropolitan areas, we are able to moderate the heat-island effect brought on by pavement and buildings in commercial locations.
Wind speed and direction is impacted by trees. The more compact the leaves on the tree or group of trees, the more efficient the windbreak. Rain, sleet, and hail are taken in or slowed down by trees, supplying some protection for individuals, pets, and structures. Trees intercept water, store a lot of it, and minimize storm water runoff.
Air quality is improved by using trees, shrubs, and turf. Leaves filter the air we take in by getting rid of dust and other particles. Rainwater then washes the contaminants to the ground. Leaves soak up the greenhouse gas carbon dioxide during photosynthesis and store carbon as growth. Leaves also take in other air contaminants - such as ozone, carbon monoxide, and sulfur dioxide - and put out oxygen.
Planting Trees and Shrubs
By planting trees and shrubs, we return developed locations to a more natural environment which is appealing to birds and wild animals. Ecological cycles of plant development, reproduction, and decomposition are again found, both above and under ground. Natural harmony is restored to the metropolitan environment.
- Lower Crime
The presence of trees in urban neighborhoods has been linked to reduced crime.
- Cleaner Air
Trees provide the oxygen we breathe. One acre of trees produces enough oxygen for 18 people to breathe each day and eliminates as much carbon dioxide from the air as is produced from driving a car 26,000 miles. Tree leaves help trap and remove tiny particles of soot and dust, which otherwise damage human lungs, and tree root networks filter contaminants in soils, producing cleaner water.
- Clean water
Forty trees will remove 80 pounds of air pollutants annually. That is, 4 million trees would save $20 million in annual air pollution cleanup.
- Energy savings
Trees lower the temperature through shade. The cooling effects of trees can save millions of energy dollars: 3-4 shade trees located strategically around a house can cut summer cooling costs by 30-50%. For one million trees, that's $10 million in energy savings. (A rough per-tree check follows this list.)
- More public revenue
Studies have shown that trees enhance community economic stability by attracting businesses and tourists. People linger and shop longer along tree-lined streets. 40,000 trees in commercial parking lots would induce shoppers to spend 11% more for goods and services.
- Higher property values
Property values of homes with trees in the landscape are 5-20% higher than equivalent properties without trees. 4,000 trees in yards would increase the sales price of homes by 1%, plus increase property values by as much as 10%. That is an estimated annual increase in home sale value of $10.4 million.
- More efficient stormwater management
Roots stabilize soil and prevent erosion by trapping soil that would otherwise become silt. Silt destroys fish eggs and other aquatic wildlife and makes rivers and streams shallower, causing more frequent and more severe flooding. Trees along streams hold stream banks in place to protect against flooding. One tree reduces 4,000 gallons of storm water runoff annually, and 400 trees will capture 140,000 gallons of rainwater annually. That is, 4 million trees would save $14 million in annual storm water runoff costs.
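Most of the dollar figures above are simple per-unit multiplications. A back-of-envelope sketch of the energy claim (the $100 seasonal cooling bill is an assumed illustration; the other inputs come from the list above):

    # Energy: 3-4 shade trees cut summer cooling costs by 30-50%.
    cooling_bill = 100.0     # $ per house per summer, assumed for illustration
    savings_fraction = 0.40  # mid-range of the quoted 30-50%
    trees_per_house = 4

    saving_per_tree = cooling_bill * savings_fraction / trees_per_house
    total_savings = saving_per_tree * 1_000_000  # one million trees
    print(f"~${saving_per_tree:.0f} saved per tree per year")
    print(f"~${total_savings / 1e6:.0f} million for one million trees")

    # Stormwater: 4,000 gallons of runoff avoided per tree per year.
    print(f"{4_000_000 * 4000:.2e} gallons intercepted by 4 million trees")

With those inputs the quoted $10 million energy figure falls out directly; turning intercepted gallons into dollars would additionally need a per-gallon treatment cost.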
Megafauna dung key to soil health
Nutrient movers: Large animals help move nutrients around the landscape in much the same way as arteries carry blood around our bodies, scientists have found.
And when those large creatures become extinct, these crucial "circulatory systems" are damaged, causing long-term harm to the health of soils.
These findings come from a new analysis of events roughly 12,000 years ago, when many large animals such as giant ground sloths and car-sized armadillo-like creatures became extinct in Australia and North and South America.
Ecologist Christopher Doughty from the Environmental Change Institute at the University of Oxford and colleagues developed a new mathematical model to calculate the effect of those mass extinctions on soil nutrients.
In the journal Nature Geoscience this week they estimate the extinctions reduced the dispersal of the vital nutrient phosphorus in the Amazon by 98 per cent, with far-reaching environmental consequences that linger to this day.
There were similar, less dramatic, effects in other continents, including Australia.
Until now, there has been no way to estimate the role of so-called megafauna on nutrient transport, Doughty says.
Using a mathematical model allowed them to estimate the impact of the large animals, using data from Africa about the relationships between the size of modern-day animals and their daily movement, food consumption and lifespan.
Bigger animals carry around larger amounts of vital nutrients such as phosphorus in their bodies and dung. Also, because they move larger distances than smaller animals, they have a key role in transporting those nutrients to areas where the soil is less fertile, the researchers say.
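A toy way to see why size matters disproportionately: both daily food intake and daily travel distance grow with body mass, so the nutrients an animal redistributes scale faster than linearly. The exponents below are illustrative assumptions, not the study's fitted values:

    # Toy allometry: transport ~ intake x daily range, both power laws of mass M.
    def relative_transport(mass_kg):
        intake = mass_kg ** 0.75    # assumed metabolic-style scaling
        day_range = mass_kg ** 0.4  # assumed daily-range scaling
        return intake * day_range

    # Compare equal total biomass: 400 ten-kg animals vs one 4,000-kg giant.
    small = 400 * relative_transport(10)
    large = relative_transport(4000)
    print(f"400 small animals: {small:,.0f}   one giant: {large:,.0f}")
    # With these exponents the single giant moves about 2.5x as much,
    # which is why losing the megafauna severed the nutrient "arteries".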
"Our study estimates that they play a huge, important role in transporting nutrients on continental scales over thousands of years," Doughty says.
"Arteries transport nutrients broadly throughout the human body and then capillaries transport the nutrients to smaller regions. We think that large animals played this role for the planet, broadly moving nutrients…from high concentrations to low concentrations."
"Therefore, when big animals go extinct, these arteries are severed leading to more nutrient poor regions, and possibly a less healthy planet overall."
The effect of the extinction was less severe in Australia than in South America because Australia had no elephant-sized animals, which meant its animals were not as big on average.
"Since we find bigger animals are disproportionately important in this distribution process, more smaller extinctions will have a smaller effect than fewer very large animal extinctions," Doughty says.
"However, because there were so many extinctions in Australia, it still has a very large role there too."
The scientists say their new mathematical model can also be used to predict the effect on soil fertility for thousands of years in the future if current endangered large species go extinct.
"This has a real value to people that we can now quantify and value," Doughty says. "If these animals go extinct, the health of the planet will suffer in quantifiable ways for thousands of years."
He says he'd like to work with economists to put a monetary value on each large animal death, based on their calculations.
"If we allow current endangered animals to go extinct, especially large ones like elephants, we will have a more nutrient poor planet in the future. The international community should recognise this and put efforts to preserve them in accordance to their value." |
The Pacific oyster is a delicious shellfish, but it has also become a dominant species in Dutch coastal waters, where it occurs locally in massive numbers. IMARES is conducting a study into the influence of this exotic species on the ecosystem.
Photo: Researchers on foot determine the density of an oyster bed at low tide in the Oosterschelde
The exotic Pacific oyster was introduced to the Oosterschelde as an alternative cultured species for the native flat oyster, which was suffering from disease. The new species thrived and expanded rapidly into the waters of the Delta and the Wadden Sea.
Oysters filter seawater in order to feed themselves with algae. Because they can filter a relatively large volume of water per hour, Pacific oysters have become a major food competitor for other shellfish. In the Wadden Sea, the Pacific oyster has expanded greatly on existing mussel beds, but oyster beds have also developed at other locations, for example in areas with large amounts of shell material or on old cockle beds. The oysters and mussels can apparently coexist perfectly well in mixed beds. In the Oosterschelde, oyster beds actually enable mussels to settle on the tidal flats. Many other species seek protection or food in oyster beds.
Since the introduction of the Pacific oyster in the 1960s, the species has only been used commercially for oyster farming. In 2010, on behalf of the Ministry of Economic Affairs, IMARES began a study to determine if the wild oyster beds could also be exploited commercially, but with manual harvesting only. The aim of the study is to indicate whether harvesting wild oysters is economically feasible and whether it causes damage to natural habitats.
In the Delta region and the Wadden Sea, IMARES is conducting a survey on the tidal flats and in the channels to determine the biomass and area of the oyster population. The deep channels are surveyed with specially developed apparatus, while the tidal flats are surveyed on foot at low tide. In this way, the total quantity of oysters and their distribution can be determined. The survey will not only help monitor the manual harvesting (the oyster 'fishery'), but also the development of this exotic species and the possible consequences for the ecosystem.
Never assume aspects of an artist’s work are meaningless, especially when it comes to Leonardo da Vinci’s doodles.
Until now, art historians had dismissed some doodles in the leading artist and intellectual’s notebooks. But now, a new study from Ian Hutchings, a professor at the University of Cambridge, proves there’s more than meets the eye. In fact, one page of these scribbles from 1493 actually contained something groundbreaking: the first written account demonstrating the laws of friction.
Common knowledge credits da Vinci with conducting the first systematic study of friction: the Renaissance inventor incorporated friction into the behaviour of wheels, axles, and pulleys, recognizing its role in limiting operation and efficiency. But no one actually knew how he came up with these ideas. Hutchings, however, was able to put together a detailed chronology that marks the very spot of da Vinci’s “Aha!” moment.
The notebook, which contains the page of seemingly meaningless scribbles penned in red chalk on a piece of yellow scrap paper back in 1493, is held in the Victoria and Albert Museum of London. The page in question was already the subject of academic debate years ago, regarding, in particular, the faint sketch of an old woman near the top, accompanied by the statement “cosa bella mortal passa e non dura,” which translates to “mortal beauty passes and does not last.” Beneath these words lie the sketches that a 1920s museum director dismissed as “irrelevant notes and diagrams in red chalk.”
Nevertheless, nearly a century later, Hutchings stirred up discussion on that very page once again, as he discovered that the rough geometrical figures drawn underneath the red notes show rows of blocks being pulled by a weight hanging over a pulley. This is the exact kind of experiment students would conduct today to show how the laws of friction work.
“The sketches and text show Leonardo understood the fundamentals of friction in 1493,” explains Hutchings. “He knew that the force of friction acting between two sliding surfaces is proportional to the load pressing the surfaces together and that friction is independent of the apparent area of contact between the two surfaces. These are the ‘laws of friction’ that we nowadays usually credit to a French scientist, Guillaume Amontons, working two hundred years later.”
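In modern notation those two laws collapse into F = μN, with no area term at all. A minimal sketch (the coefficient μ = 0.3 is an arbitrary illustrative value):

    MU = 0.3  # assumed coefficient of friction, for illustration only

    def friction_force(normal_load_n, apparent_area_m2):
        """Amontons' laws: proportional to load, independent of apparent area."""
        _ = apparent_area_m2  # deliberately unused
        return MU * normal_load_n

    # Doubling the load doubles the force; changing the area changes nothing.
    print(friction_force(10.0, 0.01))  # 3.0 N
    print(friction_force(20.0, 0.01))  # 6.0 N
    print(friction_force(10.0, 0.05))  # still 3.0 N

This is essentially the experiment the rediscovered sketches depict: blocks dragged along by a hanging weight over a pulley.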
Hutchings also revealed how da Vinci used his understanding of friction to create sketches for complex designs over the following two decades. He was aware of how useful and effective friction can be, and put his concept into practice through the workings of wheels, axles, and pulleys, all of which were essential parts of his complex machines. He was, to say the least, centuries ahead of his time.
“Leonardo’s 20-year study of friction, which incorporated his empirical understanding into models for several mechanical systems, confirms his position as a remarkable and inspirational pioneer of tribology,” Hutchings notes.
This sort of fascinating news ought to encourage scientists and engineers to dive deeper into all of Leonardo’s old notes in hopes that other overlooked insights can be revealed.
QUEEN MAE S. USIGAN
DR. INICIA BANSIG
A Detailed Lesson Plan
I. Objectives: At the end of the lesson, the pupils should be able to:
Cognitive: determine if a noun shows possession
Affective: distinguish if a sentence shows joint or separate ownership
Psychomotor: formulate sentences having joint and separate ownership
II. Subject Matter:
Topic: 1. Reading an article: “Magicians of the Flower World” 2. Language Integration: Possessive Nouns
Reference: Across Borders through Language 6, pages 14-23
By: Pacita M. Gahol and Eugenia Gorgon
Source of the article: Wonderful World of Knowledge
Materials: pictures, graphic organizer
Strategy/Approaches: Content Based Instruction, use of the strategy “Passport to Leave” and use of Graphic Organizer
1. PRE READING:
Good morning class!
Good morning Ma’am!
Have you been to a flower shop?
What did you notice about the flowers?
I have here an article entitled “Magicians of the Flower World.” But before we read it, I will first post the question you need to answer after reading the article. We will also give meaning to the unfamiliar words that can be found in the article.
C. Motive Question:
After reading the article, you should be able to answer this question: In what way can we protect our Mother Nature?
D. Unlocking of Difficulties.
Instructions: Match the underlined word in column A to column B.
Column A:
1. I admire the works of the botanist.
2. The beautiful flowers that we admire in the florist’s window are very often the result of a great deal of work and study over a long period of time.
3. The new cultivated plant is called a hybrid.
4. Scientists call this new plant a “spontaneous mutation.”
Column B:
A. Studies flowers and plants
B. Plant that is different from the rest of its kind.
C. Person who deals in or grows flowers
D. Offspring of two plants or varieties of species
(The teacher will assign someone to answer the vocabulary terms.)
2. WHILE READING:
(The teacher will assign someone to read the article, and she will repeat it after the chosen pupil has read it, for better understanding.)
3. POST READING:
A. Answering of Motive Question:
Let’s go back to the question I posted to you a while back. In what way can we protect our Mother Nature?
(The teacher will choose some pupils to answer.)
B. Integration of Communication Skills:
Read the following expressions from the article:
You probably know already that these expressions show the possessive form of nouns.
(The pupils will read the expressions posted on the board.)
What do you know about Possessive Nouns? Yes, Gael.
Definitely, from the word itself, possessive means possession or ownership. Nouns have different ways of ownership or possession.
Possessive Nouns show possession or ownership.
RULES REGARDING POSSESSIVE NOUNS:
1. Add ’s to the singular form of a noun to show singular possession. Ex. lady’s fan; Lorna’s skirt
2. Add an apostrophe (’) only to plural nouns ending in –s. Ex. ladies’ shoes; boys’ hats
3. Add ’s if the plural form of the noun does not end in –s. Ex. men’s bag; fishermen’s net
4. Add ‘s to Proper Nouns ending in –s or –z.
Ex. Carlos’s pet; Mr. Cortez’s car
*However, some speakers prefer not to pronounce the s. Hence, they omit –s in the spelling. Ex. Moises’ shirt; Mercedes’ dress
5. Add ’s to the end of a compound noun to form possession.
Singular possession: daughter-in-law’s house
Plural possession: daughters-in-law’s houses
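The rules above are mechanical enough to express as a small function. A rough sketch in Python (simplified: it covers rules 1-3 and the default case of rule 4, ignoring the pronunciation-based exception):

    def possessive(noun, plural=False):
        """Form the possessive per the rules above (simplified)."""
        if plural and noun.endswith("s"):
            return noun + "'"   # rule 2: ladies -> ladies'
        return noun + "'s"      # rules 1, 3, 4: lady's, men's, Carlos's

    for noun, plural in [("lady", False), ("ladies", True), ("men", True), ("Carlos", False)]:
        print(noun, "->", possessive(noun, plural))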
Energy for Teachers of Grades 6-8
Published in 2011, this comprehensive professional development course for grades 6–8 science teachers provides all the necessary ingredients for building a scientific way of thinking in teachers and students, focusing on science content, inquiry, and literacy.
Session 1: What is Energy?
Every interaction involves energy, and the word itself is everywhere in our day-to-day lives — energy conservation, clean energy, and simply not having enough energy to wash the dishes after a long day — just to name a few. But what is energy? This session seeks to answer that question and explores the various kinds of energy that keep our world going.
Session 2: Potential Energy
All objects — big and small, hot and cold, moving and stationary — have potential energy. But the notion of “having energy” can lead to logical but incorrect assumptions about what this means. This session identifies the various types of potential energy and helps to clarify what it really means to have potential energy.
Session 3: Heat Energy
We instinctively all know something about heat just from our daily life. And we all know that global warming is a hot topic. But an understanding of the energy of heat is not exactly commonplace. This session explores the various ways in which heat energy is misunderstood and the ways in which scientists define and talk about heat energy, how it’s transferred, and how it affects our world.
Session 4: Conservation of Energy
The law of conservation of energy states that energy cannot be created or destroyed. It is always conserved. But how? And why do our experiences often lead us to believe otherwise — it sure seems like energy is lost when our morning coffee cools before we can drink it, children tire after a hard day at play, and our cell phone batteries die at the most inconvenient moments. This session provides a systematic explanation for how and why conservation of energy is possible.
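The cooling coffee makes a concrete worked example: the heat the coffee "loses" is exactly the heat the room gains. A minimal sketch, with assumed illustrative values for the cup and temperatures:

    m_coffee = 0.30        # kg of coffee, assumed
    c_water = 4186.0       # J/(kg*K), specific heat of water
    delta_t = 60.0 - 20.0  # cooling from 60 C to a 20 C room

    q = m_coffee * c_water * delta_t
    print(f"Heat transferred to the surroundings: {q / 1000:.1f} kJ")
    # ~50 kJ: the energy is not destroyed, just dispersed into the room.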
Session 5: Energy in Ecosystems
All organisms, no matter where they are on the food chain, require a source of chemical potential energy to survive — food! Sounds simple, right? But do all organisms acquire food in the same way? And how do they harness the energy in that food? This session explores the complex interactions between food and organisms.
ENERGY COURSE OVERVIEW (PDF)
Download a printable version of the session-by-session course overview.
ENERGY MATERIALS LIST (PDF)
Download this complete list of all general supplies, hands-on materials, and printed materials used in this course.
As errors and omissions are found in course materials, errata sheets are created to correct them.
American Government and Politics: Deliberation, Democracy, and Citizenship. Chapter Thirteen: Congress
Chapter Thirteen: Learning Objectives
- Explain the difference between the delegate and trustee theories of representation and how the members of Congress were expected to combine elements of both
- Detail the most important differences between the House and Senate
- Describe the constitutional powers of Congress
- Explain the importance of political parties and committees to the structure and functioning of Congress
- Describe the process by which a bill becomes a law
- Detail the major functions of Congress and explain their importance
- Analyze the power of the reelection incentive to mold behavior in Congress
- Discuss the performance of Congress as a deliberative, representative, ethical, and accountable institution
- Evaluate the contribution of Congress to deliberative democracy in the United States
Introduction: Congresspersons face many challenges, and one of those is time. Many are concerned that congresspersons do not have enough time to be deliberative in the legislative process.
Introduction: There are two theories of representation, delegate and trustee. The framers combined elements of both in designing Congress.
Constitutional Structure and Powers: What is the structure of Congress? What are the constitutional powers of Congress? What are the purposes of bicameralism and separation of powers?
Constitutional Structure and Powers Congress is bicameral , which means that it consists of two chambers – the House of Representatives and the Senate. By creating a bicameral legislature, the framers hoped to curb the branch’s dominance.
International Perspectives: Bicameralism throughout the world. The number of bicameral legislatures around the world is increasing. Many countries have found benefits to bicameralism, as it may allow for better deliberation.
Constitutional Structure and Powers: The House and Senate. House: number of representatives based on population; direct election by the citizens; two-year terms. Senate: each state has an equal number of senators; originally elected by state legislatures; six-year terms.
Constitutional Structure and Powers: The House and Senate. House: work divided through committees; power found in leadership; more centralized control. Senate: members are generalists; power spread more evenly; less centralized control.
Constitutional Structure and Powers: The House and Senate (photo: Pablo Martinez Monsivais/AP Photo)
Constitutional Structure and Powers: Constitutional Powers. Constitutional powers include: lawmaking; impeaching and removing public officials; expelling members.
Constitutional Structure and Powers: Constitutional Powers. Constitutional powers also include: ratifying treaties and confirming appointments; proposing constitutional amendments.
Constitutional Structure and Powers: Congress and the Other Branches In what ways does Congress check the actions of the other branches of government? Why are these checks important?
Pledges and Promises: The Congressional Oath and the PATRIOT Act. Members of Congress take an oath before they begin service. During the debate over the PATRIOT Act, some congresspersons referred to that oath.
Congressional Organization: What is the role of political parties in Congress? Should political parties have much influence in Congress? Why or why not?
Congressional Organization: Party Control The majority party in each chamber controls the legislative agenda and gets to select the chairmen of all committees and subcommittees. As Lee Hamilton has stated, “party status affects pretty much everything.”
Congressional Organization: Party Control Divided government occurs when the presidency and at least one chamber of Congress are controlled by different parties. What are some advantages or disadvantages to divided government?
Congressional Organization: Party Control. Source: Office of the Clerk of the House of Representatives, “Party Divisions,” at www.clerk.house.gov/art_history/house_history/partyDiv.html; U.S. Senate, “Party Divisions in the Senate,” at www.senate.gov/pagelayout/history/one_item_and_teasers/partydiv.htm.
Congressional Organization: Party Leaders. House: Speaker, Majority Leader, Minority Leader, Whips. Senate: President, President Pro Tempore, Majority/Minority Leaders, Whips.
Congressional Organization: Committees. Types of committees: standing (often with subcommittees), special (or select), joint, and conference.
Congressional Organization: Congressional Staff Most congressional staff work for individual members of the House and Senate, although committees hire staffers. Does congressional staff help or hinder deliberation?
How a Bill Becomes a Law: Origins of Bills Congress follows a two-year cycle which begins during the January after a congressional election. Ideas for legislation come from many places including legislators, the White House, executive agencies, and interest groups.
How a Bill Becomes a Law: Origins of Bills. Four forms of legislation: bills, companion bills, joint resolutions, and concurrent and simple resolutions.
How a Bill Becomes a Law: Committee Stage. After a piece of legislation is introduced, it goes through the referral process. While in committee and subcommittee, legislation typically goes through markup. If a bill survives the committee stage, it will be ready to go to floor deliberation.
How a Bill Becomes a Law: Consideration by the Full Body. In the House, the Rules Committee determines the rule of debate, such as open rules and closed rules. In the Senate, holds, the filibuster, cloture, riders, and the unanimous consent agreement are used.
How a Bill Becomes a Law: Beyond the Floor. To become law, bills must pass both chambers in exactly the same form. Presidential action includes: signing legislation; vetoing legislation (a veto may be overridden); exercising a pocket veto.
The Functions of Congress Besides making laws, what does Congress do? Source: Norman J. Ornstein, Thomas E. Mann, and Michael J. Malbin, Vital Statistics on Congress 2001-2002 (Washington, DC: AEI Press, 2002); Resumes of Congressional Activity, at www.senate.gov/pagelayout/reference/two_column_table/Resumes.htm.
The Functions of Congress: Overseeing the Administration How does Congress provide oversight? Congress reviews agencies when the executive branch wants to renew authority for programs, assesses performance through the annual budget review, and oversees government operations through its standing committees. How does the oversight function of Congress contribute to checks and balances?
The Functions of Congress: Educating the Public Congress conducts much of its business in the open, unlike the executive and judicial branches. Members of Congress work to reach the public in a variety of ways, including special order speeches, town hall meetings, and media interviews.
The Functions of Congress: Serving Constituents Two ways to serve constituents: casework and logrolling.
The Functions of Congress: The Reelection Incentive Congresspersons must be reelected to accomplish their long-term goals. In general, to be reelected members of Congress need to communicate with constituents, raise money, and do what is in the best interest of their constituents.
The Functions of Congress: The Reelection Incentive Some scholars believe that in the quest for reelection, congresspersons may deliberate not about the broad public good but about the good of their own constituencies. Do you believe that deliberation about the public good is sacrificed for reelection?
Congress and Deliberative Democracy: Deliberation In order to serve the national interest, Congress must be deliberative. With its information-gathering agencies and staff, Congress appears able to deliberate effectively.
Congress and Deliberative Democracy: Representation Congressional demographics do not represent the demographics of the American population. Do you believe that has an effect on deliberation? Should the demographics of Congress reflect the exact demographics of the population? Why or why not?
Congress and Deliberative Democracy: Ethics There is an expectation that congresspersons should be virtuous, which means that they should be devoted to the common good of society and should love justice. Corrupt behavior is actually atypical in Congress.
Congress and Deliberative Democracy: Accountability Congress is held accountable to the people through elections and the openness of its proceedings. With the emergence of the Internet, citizens may place more scrutiny on the activities of Congress.
Deliberation, Citizenship, and You Monitoring and influencing Congress There are many questions to consider when evaluating Congress. There are many ways for citizens to contribute to congressional deliberation.
Summary Congress is a deliberative institution. Bicameralism promotes deliberation. Congress has many functions. There are many ways to evaluate Congress's contributions to a deliberative democracy.
Secrets of the Porcupine Quill Could Help Us Make Better Medical Supplies
Biomimicry FTW! We have a lot to learn from nature. After all, evolution has been solving various problems since long before we came around, so by looking at the solutions it came up with (aka biomimicry), we can save a lot of R&D, and sometimes find clever ideas that we probably wouldn't have come up with on our own. One recent example comes from the porcupine, or more precisely, the porcupine quill. Each animal wears about 30,000 of them on its back to defend itself against predators, and each quill has very interesting properties: everybody knows that quills penetrate flesh very easily yet are very, very hard to remove, but until now we didn't quite understand how they do this on a molecular level.
In a paper published in Proceedings of the National Academy of Sciences (PNAS), scientists explain what they discovered by studying the quills (and their surprise that nobody had done it before), and how the findings could be useful to us:
Herein we show that the natural quill’s geometry enables easy penetration and high tissue adhesion where the barbs specifically contribute to adhesion and unexpectedly, dramatically reduce the force required to penetrate tissue. Reduced penetration force is achieved by topography that appears to create stress concentrations along regions of the quill where the cross sectional diameter grows rapidly, facilitating cutting of the tissue.
Basically, it was counterintuitive to the scientists that the features of the quills that make them hard to remove also make them penetrate flesh more easily. We might expect a completely smooth spike to go in more easily, and anything that roughens the surface to only get in the way, but with the porcupine quill that's not the case. As they describe it: "this is the first demonstration of a highly engineered system that achieves polar-opposite dual functionality."
By reverse-engineering porcupine quills, we might be able to produce better medical supplies:
The dual functions of barbs were reproduced with replica molded synthetic polyurethane quills. These findings should serve as the basis for the development of bio-inspired devices such as tissue adhesives or needles, trocars, and vascular tunnelers where minimizing the penetration force is important to prevent collateral damage.
So maybe someday you'll go to the hospital and they'll use porcupine-inspired needles and bandages on you. That might seem like a small thing, but think of how many hundreds of millions of needles and bandages are used every day around the world. If we can make them better by copying nature, that's awesome!
Videos are a great resource for language teaching and learning!
Students enjoy watching animated shows and videos on TV, on tablets, and on phones. Videos can motivate students to engage with language, so it’s easy to understand why teachers want to bring more videos into their English classrooms.
There are strong pedagogical reasons for including videos in your language teaching. Videos bring language alive. Students can see and hear language being used in context.
Animated videos are particularly accessible because they make it easy to focus on specific language, and they can appeal to a wider age range of students than live-action videos. Animated videos are ideal for providing language models with enough context to support meaning and enough humour to engage students. Research shows that students respond positively to familiar characters, so videos with characters students recognise not only bring the language to life but may also make students want to interact with the characters they're watching!
Even with all of these great reasons to include video in English class, many teachers don’t. Why not? Teachers tell us that it’s hard to find interesting videos that use the language their students are learning. They aren’t sure where to look for appropriate videos, and when they do know where to look, they don’t have time to search through the videos available in order to find one that will work with a specific lesson. Often, the videos won’t work because the language is too hard, or the video is too long or too fast-paced. Even if teachers are successful in finding a video they think could work with their lesson, they often aren’t sure how to make the best use of it for language learning.
One of the most important things teachers can do when using a video in class is to make the video content as interactive as the rest of their lesson. We know it’s important for students to talk to each other, to ask and answer questions, to use gestures and movement to reinforce meaning, and to use language in a meaningful way. We should use videos in the same way. There’s no reason to make video watching a passive experience in class.
Here are some ways to make video watching fun, interactive, and effective:
- Show the video without sound first. Then see what the students can remember about the video: body/hand movements and gestures, the situation and any words or phrases that they think are in the conversation.
- Play the video with sound. Have students listen for specific words or phrases, and do something (like raising a hand) when they hear the target language.
- Ask students a question before playing the video with sound. Have them listen for the answer.
- Have students take a role and act out the video.
We’re excited that the 5th edition of Let’s Go will include videos to help animate your teaching. The conversation videos show students how to extend the Let’s Talk dialogues. The song and chant videos make the language even more memorable and entertaining by adding a visual component.
Two of the new videos are available for you to try out in class.
Extended Conversation Videos
The conversation videos extend Let’s Talk dialogues by adding relevant language students already know and by showing body language and gestures in context. Interestingly, if students look closely, they’ll see characters using gestures and facial expressions that may be different from the ones they use themselves. During the video, one of the Let’s Go characters always turns to the students to ask a question, in order to make students part of every conversation.
The video from Level One Unit Six is available for you to watch.
Here’s the transcript so you can see how familiar language is used to extend the basic conversation. The original conversation is in black. The added language is in red. Blue highlights the question students will answer.
Jenny: Yes. Oh, hi Kate. How are you?
Kate: I’m great. How are you?
Jenny: I’m great, too. It’s so nice today.
Kate: How’s the weather?
Jenny: It’s sunny.
Kate: What was that?
Jenny: It’s rainy now.
Kate: How’s the weather today?
How could you use this in class?
- Show the video without sound, and ask students to tell you what the conversation is about.
- Play the video with sound. Have students listen and tell you what language they hear.
- Have students answer Kate’s question, and then ask each other the same question.
- Once students are comfortable with the language, have them watch without sound again, and tell you how Jenny is feeling based on her facial expressions.
- Let students role-play the conversation in pairs.
Song and Chant Videos
The song and chant videos make lesson language visible and memorable! Combining rhythm, music, and images allows students to use three of their senses and increases the amount of language they’ll remember. “Where are the bugs?” from Level One Unit Six is available now.
How could you use this in class?
- Have students call out the names of objects they recognise in the video.
- Have students decide on gestures for on, in, under, and by (e.g., placing a fist on a palm for ‘on’). Students do the gestures as they listen to the song.
- Have half of the class sing the questions and the other half answer. (Sing twice so everyone gets to ask and answer questions.)
Using videos that support your lessons can make the language more exciting and real. The best videos for teaching language reinforce the language you’re trying to teach. They’ll be short and will match your students’ pace.
Let’s Go fifth edition videos are all of these things – pedagogically sound, student tested, linguistically appropriate, short, understandable, and funny. Having the videos included with the coursebook units makes it easy to include them in your lessons.
Have fun animating your language teaching with Let’s Go!
Ritsuko Nakata, Karen Frazier, and Barbara Hoskins have spent 25 years working to improve the Let’s Go learning experience for teachers and their students. It is the only primary coursebook series that has had the same authors for all levels, resulting in a tightly controlled grammar syllabus that makes productive use of limited class time.
While strategic defense against airborne attack is a product of the 20th century, the concept can be found as early as the Enlightenment. In 1783, the Montgolfier brothers staged a demonstration of their balloon near Paris, France. The military significance of this demonstration was noted by a Prussian lieutenant named J.C.G. Hayne, who wrote that aerial warfare might make it possible for fleets of balloons to bombard fortifications and cities without impediment.
The military use of balloons spread very slowly, however. These early balloons were expensive, unreliable, and could not be effectively maneuvered. Moreover, they were capable of carrying only a very tiny payload. By the mid-19th century, balloons were used primarily for limited observation of troops in the field. On August 31, 1861, Thaddeus Lowe deployed a balloon on behalf of Federal forces in northern Virginia. This balloon was fired on by gunners of a Louisiana artillery unit. The event demonstrated an important military principle: you need not hit the enemy; intimidation can be an equally effective defense. While unable to strike the balloon, the gunners were nonetheless able to intimidate its occupants, and the balloon was quickly lowered.
Early in the 20th century, the development of cheaper, lighter engines made powered balloons practical. Count Zeppelin built a fleet of dirigibles for the Imperial German forces. This fleet was deployed over London during World War I in an attempt to demolish the city. Attempts to thwart the dirigible threat included defensive measures such as anti-aircraft guns and offensive measures such as raiding German dirigible bases. Britain's defensive and counteroffensive measures effectively destroyed the German dirigible fleet.
Air defense became important again by the 1930s, when it became clear in Europe that the fascist countries were preparing for yet another war. Britain felt the necessity of developing large bombers capable of deterring a war with the fascist countries while at the same time recognizing the need to defend against attack by enemy bombers. By 1937, the United States Navy recognized the need to develop the ability to detect incoming bombers and offered Bell Laboratories a contract to research radar. When the United States entered World War II, radar played an important part in its defensive tactics. A comprehensive air defense apparatus was set up that included radar installations spotted along the east and west coasts of the United States. These radars were linked together by simple communications nets, and existing anti-aircraft guns were converted to radar-directed fire control.
It was also during World War II that the United States military began to experiment with rockets and missiles. While anti-aircraft guns had been reasonably effective against air attacks on London, it was evident to United States Army officials that the speeds and rates of maneuver of jet-propelled aircraft would quickly surpass the capabilities of ground-fired shells. In 1944, Jake Schaefer, an ordnance officer in the United States Army formerly employed by Bell Laboratories, advocated the development of a surface-to-air missile. Schaefer's ideas were presented in a paper he wrote for the Army, in which he conceptualized a command guidance system: a radar would track the defending missile from its point of launch, a computer would calculate the point of intercept, and radio commands would steer the missile to the target. The Army called the system the Anti-Aircraft Guided Missile (AAGM) until Colonel Trichel, director of advanced research for the Army, renamed the project NIKE for the Greek goddess of victory.
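The heart of any such command guidance scheme is the intercept computation itself: given a tracked target and a missile of known speed, find the point where their paths can meet. The sketch below is a minimal, illustrative version of that arithmetic in flat two-dimensional geometry with a constant-velocity target; the function name and the simplifying assumptions are ours, and this is not the actual NIKE fire-control algorithm, which had to handle three dimensions, missile acceleration, and maneuvering targets.

```python
import math

def intercept_point(target_pos, target_vel, missile_speed):
    """Earliest point where a constant-speed missile launched from the
    origin can meet a constant-velocity target.

    Solves |target_pos + target_vel * t| = missile_speed * t for the
    smallest positive t; returns (intercept_point, t), or None if the
    target cannot be caught.
    """
    px, py = target_pos
    vx, vy = target_vel
    # Quadratic in t: (|v|^2 - s^2) t^2 + 2 (p . v) t + |p|^2 = 0
    a = vx * vx + vy * vy - missile_speed ** 2
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py

    if abs(a) < 1e-12:            # target and missile speeds are equal
        if b >= 0.0:
            return None           # target is not closing; no intercept
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None           # target outruns the missile
        roots = ((-b - math.sqrt(disc)) / (2.0 * a),
                 (-b + math.sqrt(disc)) / (2.0 * a))
        positive = [r for r in roots if r > 0.0]
        if not positive:
            return None
        t = min(positive)
    return (px + vx * t, py + vy * t), t

# Bomber 20 km out, crossing at 250 m/s; missile flies at 750 m/s.
point, t = intercept_point((0.0, 20_000.0), (250.0, 0.0), 750.0)
print(f"steer toward ({point[0]:.0f}, {point[1]:.0f}) m; intercept in {t:.1f} s")
```

In practice the fire-control computer would rerun this calculation continuously as fresh radar tracks arrived, sending corrective radio commands rather than a single fixed steering order.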
Air defense of the United States in 1950 consisted of radar-directed 90mm and 120mm anti-aircraft guns placed in cities during World War II under the control of the National Guard. These guns were deployed in and around major cities and ports of the United States: New York and Washington had four battalions each; Chicago had three; Philadelphia, Detroit, and San Francisco had two; Boston, Baltimore, Pittsburgh, and Los Angeles had one. While little was done to actively provide strategic defense for the United States from 1945 to 1950, the invasion of South Korea by North Korea in 1950, aided by Soviet tanks and artillery, spurred new concern for anti-aircraft research. In addition to the Korean War, the Soviet Union's ability to attack the continental United States over the North Pole or over the seas against either coast, coupled with its test of an atomic bomb in 1949, spurred the United States Army to establish a nationwide defense system to protect against Soviet long-range bombers. The adversarial relationship between the Soviet Union and the United States became known as the Cold War and drove the development and deployment of the NIKE system.
Beginning in 1953, NIKE was deployed first on the East and West Coasts and then in the interior of the United States. More than 4,000 missiles were installed. Many went into old anti-aircraft gun sites; however, the 25-mile range of the NIKE missiles allowed the batteries to be placed farther from the potential targets, giving more time to shoot at incoming bombers. America's suburbs became sites of the NIKE Ajax, the first technological step in the conversion of United States air defense from artillery to guided missiles. Because the Ajax infrastructure was so extensive, the next-generation missile, the Hercules, was designed to fit into the existing system. A bigger, more powerful missile with a longer range, the Hercules could be fitted with nuclear as well as conventional warheads. The Hercules system was first deployed in 1959, and by 1960 most Ajax missiles had been replaced by Hercules. Development of yet another NIKE missile system, the Zeus, began in 1958. Equipped with a more capable radar system than either the Ajax or the Hercules, the Zeus was never activated; however, much of the technology developed in its research was used in later anti-ballistic missile systems. After 1960, advances in ballistic missile defense (BMD) made the NIKE system obsolete, although it was not entirely decommissioned until 1974.
It would make for an impressive doomsday. One moment, here we are gardening or walking to lunch in the mellow light of a spring Sun, and the next moment said Sun has shredded Earth's atmosphere and twisted up our power grids, radio communications, GPS, and most anything else electrical. And it would all stay twisted for some time, because what we're talking about isn't just a bad solar storm but a full-on superflare. Life on Earth would be jeopardized.
A superflare is a solar eruption up to 10,000 times more powerful than the largest observed solar storm of the modern era: the Carrington event. That event, which took place in 1859, wreaked havoc on the worldwide telegraph system and, according to ice core records from Greenland, caused significant damage to the planet's ozone layer. Nonetheless, the intensity of the Carrington event is just a tiny fraction of what astronomers now know some stars to be capable of.
Since the 2012 discovery of widespread superflare phenomena among distant stars, courtesy of data collected aboard the Kepler space observatory, astronomers have wondered if the same thing could happen right here in our Solar System. Now, researchers from Aarhus University in Denmark have concluded that, yes, the Sun is indeed capable of producing a superflare. While its magnetic field is generally much weaker than those of the stars most likely to produce a superflare, some stars with comparably weak fields still manage to produce destructive superflares.
The deeper question the researchers sought to answer is whether superflares are produced by the same mechanism as solar flares, which would indicate that our Sun could produce one. We know that solar flares, sudden eruptions of magnetic energy, arise from collapsing magnetic fields on the surfaces of stars, events triggered by one of several possibilities, including interactions with planets and other stars. So the group examined magnetic field data collected on some 100,000 stars via the Guo Shou Jing telescope in China.
Most of the superflare-producing stars had much stronger magnetic fields than the Sun, which indicates that a superflare here is unlikely. But there is still a minority of these stars (about 10 percent) with magnetic fields comparable to the Sun's, according to the current paper. This indicates that, at the very least, it is not impossible for a superflare to occur here.
"Based on activity measurements of 5,648 solar-like stars, including 48 superflare stars, we show that superflare stars are generally characterized by higher activity levels than other stars, including the Sun," the Aarhus group reports. "However, superflare stars with activity levels lower than, or comparable to, the Sun do exist, but none of the stars hosting the largest superflares show activity levels lower than the Sun."
The research supports an earlier theory that Earth has indeed been subject to a small superflare in its relatively recent history. This occurred in 775 AD and is evidenced by Japanese tree ring data. At the upper limit, this event would have been 10 to 100 times more powerful than the largest solar flare witnessed in the space age. That upper limit coincides nicely with the theoretical limit inferred from sunspot activity observed on the Sun: to produce a truly enormous superflare, it would take a sunspot, a zone of concentrated magnetic flux on a star's surface, spanning at least 30 percent of the star's radius.
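To get a feel for why spot size is the limiting factor, consider a commonly used back-of-envelope estimate in which the energy available to a flare is roughly the magnetic energy density of the spot, B²/8π in CGS units, multiplied by the spot area raised to the 3/2 power. The sketch below applies that estimate to the 30-percent spot mentioned above; the scaling law, the assumed 1,000-gauss field, and the ~10^32 erg Carrington figure are our illustrative assumptions, not numbers taken from the paper.

```python
import math

R_SUN_CM = 6.96e10        # solar radius in cm
B_GAUSS = 1000.0          # typical sunspot field strength (assumed)
CARRINGTON_ERG = 1e32     # rough Carrington-class flare energy (assumed)

def stored_flare_energy(spot_radius_fraction):
    """Magnetic energy (erg) stored above a circular spot whose radius
    is the given fraction of the stellar radius, using the rough
    scaling E ~ (B^2 / 8*pi) * A^(3/2) in CGS units."""
    area = math.pi * (spot_radius_fraction * R_SUN_CM) ** 2   # cm^2
    energy_density = B_GAUSS ** 2 / (8.0 * math.pi)           # erg/cm^3
    return energy_density * area ** 1.5

e = stored_flare_energy(0.3)   # the ~30 percent spot the article mentions
print(f"stored energy ~ {e:.1e} erg "
      f"(~{e / CARRINGTON_ERG:,.0f} times a Carrington-class flare)")
```

Running this gives roughly 2 x 10^36 erg, which is about 10,000 times the Carrington event and matches the upper end of the superflare range quoted earlier; a more ordinary spot a few percent of the solar radius across yields energies in familiar solar-flare territory.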
Given this sunspot limitation, it's reasonable to say that we're pretty safe for the time being. It would seem, however, that this safety is provisional rather than guaranteed.