What is Selective Mutism?
Selective Mutism is an anxiety disorder characterized by a failure to speak outside the home, with certain individuals, or in certain settings, persisting for more than one month. It is most commonly found in children, who understand spoken language and have the ability to speak but are reluctant to do so in some settings, often showing a phobia of speaking and a fear of people. Selective Mutism is related to severe anxiety, shyness, and social anxiety.
The first symptoms of Selective Mutism are usually noticeable between the ages of 1 to 3 years. However, it is usually not recognized until the child begins school and is requested to respond verbally and/or interact in social situations, including pre-school, elementary school, and community environments. Sometimes, even then, the child is viewed as shy and it is assumed that the shyness is temporary and will be outgrown. The cause has not been established. However, recent research suggests the possibility of genetic influence or vulnerability for Selective Mutism.
For those experiencing severe forms of Selective Mutism, immediate intervention is advisable because the symptoms can worsen. Generally speaking, a younger child has a good chance of recovering, if treated, because of the shorter interval of time during which no verbalization has occurred in school or in other major settings. Selective Mutism is not a speech disorder, nor is it Autism.
These problems are for college undergrads after a first course in calculus. They are provided with solutions, and could be used by college professors as exercises or exam questions.
1. Digits of Pi/4
Prove that in base b, if b is an even integer, n > 3, and x = Pi/4, then the n-th digit of x, denoted as a(n), is given by the formula below. We start with n = 1 after the decimal point, for the first digit. Also show that the formula below is not valid if the base b is an odd integer, or if x is different from Pi/4.
where the brackets represent the integer part (also called floor) function.
Regardless of the number x in [0, 1] and the base b, the n-th digit a(n) of x can be computed with the standard digit-extraction formula a(n) = [b^n x] - b [b^(n-1) x], where the brackets again denote the integer part. See here for details. Thus we have
Using the angle difference formula for the sine, the fact that n > 3, b is an even integer, and x = Pi/4, this simplifies to
The result for a(n) follows immediately.
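As a quick numerical sanity check, the sketch below applies the generic digit-extraction formula stated above; the Pi/4-specific closed form involving the sine is not reproduced here because it is not shown in the text, and double-precision floating point limits this check to roughly the first dozen digits.

```python
from math import floor, pi

def digit(x, b, n):
    """n-th digit of x in base b, counting n = 1 as the first digit after the point."""
    return floor(b ** n * x) - b * floor(b ** (n - 1) * x)

x = pi / 4
print([digit(x, 10, n) for n in range(1, 11)])  # [7, 8, 5, 3, 9, 8, 1, 6, 3, 3]
print([digit(x, 2, n) for n in range(1, 11)])   # first binary digits of Pi/4
```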
2. Continued Fractions and Nested Square Roots
Let us consider the two following expressions, assuming a is a strictly positive real number:
Prove that x is an integer if and only if a is the product of two consecutive integers. Prove that the same is true for y.
Let's focus on the first case; the second case is almost identical. The strictly positive number x must satisfy x^2 = a + x, thus x = (1 + SQRT(1 + 4a)) / 2. In order for x to be an integer, 1 + 4a must be a perfect odd square, say 1 + 4a = (2k + 1)^2, which happens exactly when a = k(k + 1), that is, when a is the product of two consecutive integers. For instance, a = 1 × 2 = 2 gives 1 + 4a = 9 and x = (1 + 3)/2 = 2.
Note that the expansion of the number x = 2 in the nested square root numeration system has all its "digits" equal to a = 1 × 2 = 2. See this spreadsheet for details. More on this here.
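The following short numerical sketch (an illustration, not part of the original solution) checks the nested square root case: iterating x -> sqrt(a + x) converges to the closed form (1 + sqrt(1 + 4a))/2, which is an integer exactly when a is a product of two consecutive integers.

```python
import math

def nested_sqrt(a, depth=60):
    """Approximate sqrt(a + sqrt(a + sqrt(a + ...))) by iterating to a fixed depth."""
    x = 0.0
    for _ in range(depth):
        x = math.sqrt(a + x)
    return x

for a in (2, 3, 6, 12):   # 2 = 1*2, 6 = 2*3, 12 = 3*4; 3 is not such a product
    closed_form = (1 + math.sqrt(1 + 4 * a)) / 2
    print(a, round(nested_sqrt(a), 10), round(closed_form, 10))
```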
University of Nevada Las Vegas
MEG426/626 Manufacturing Processes
Department of Mechanical Engineering
Fall Semester 2000
Cutting Mechanics (I)
Orthogonal Cutting Model:
Orthogonal cutting uses a wedge-shaped tool in which the cutting edge is perpendicular to the direction of cutting speed.
As the tool is forced into the material, the chip is formed by shear deformation along a plane called the shear plane, which is oriented at an angle φ with the surface of the work.
Chip thickness ratio (or chip ratio):
r = t0/tc … (1)
r = ls sin φ / [ls cos(φ - α)]
tan φ = r cos α / [1 - r sin α] … (2)
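A short numerical illustration of Eqs. (1) and (2); the cutting values below are assumed for the example, not taken from the notes.

```python
import math

t0 = 0.50                      # undeformed chip thickness, mm (assumed)
tc = 1.10                      # chip thickness after the cut, mm (assumed)
alpha = math.radians(10.0)     # rake angle (assumed)

r = t0 / tc                                                        # Eq. (1)
phi = math.atan(r * math.cos(alpha) / (1 - r * math.sin(alpha)))   # Eq. (2)
print(f"chip ratio r = {r:.3f}, shear angle phi = {math.degrees(phi):.1f} deg")
```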
Primary deformation zone – shear in the work material
Secondary deformation zone – friction between chip and rake face
Tertiary deformation zone – friction between machined surface and flank face
Discontinuous chip: when machining relatively brittle materials at low cutting speeds, the chips often form into separated segments.
Disadvantages: vibration, surface roughness, irregular surface texture, and reduced tool life.
Trend: large feed, large depth of cut, and high tool-chip friction promote the formation of this chip type.
Continuous chip: when machining ductile materials at high speeds and relatively small feeds and depths, long continuous chips are formed.
Disadvantage: the long, continuous chip can damage the machined surface.
Trend: high speeds and relatively small feeds and depths, and low tool-chip friction.
Build-up edge: when machining ductile materials at low to medium cutting speeds, friction between tool and chip tends to cause portions of the work material to adhere to the rake face of the tool near the cutting edge. This formation is called a build-up edge (BUE).
Characteristic: cyclical in nature, it forms, grows, and breaks off. Unstable.
Disadvantage: changes the tool geometry, cutting forces, cutting temperature, and machined surface quality.
Forces in metal cutting
Forces in the secondary deformation zone:
Friction force F: the force between the tool and chip, which resists the flow of the chip along the rake face of the tool.
Normal force N: the force normal to the friction force.
Friction coefficient: μ = F/N
Friction angle: β (tan β = μ)
Resultant force: R
Forces in the primary deformation zone:
Shear force Fs: the force which causes shear deformation to occur in the shear plane.
Normal force Fn: the force which is normal to the shear force.
Forces on the cutting tool:
Cutting force Fc: the force in the direction of cutting, the same direction as the cutting speed v.
Thrust force Ft: the force which is perpendicular to the cutting force.
F = Fc sin α + Ft cos α … (3)
N = Fc cos α - Ft sin α … (4)
Fs = Fc cos φ - Ft sin φ … (5)
Fn = Fc sin φ + Ft cos φ … (6)
τ = Fs / As … (7)
where: As = t0 w / sin φ … (8)
The Merchant Equation
τ = Fs / As … (7)
As = t0 w / sin φ … (8)
Fs = Fc cos φ - Ft sin φ … (5)
Combine Eqs. (7), (8), and (5):
τ = (Fc cos φ - Ft sin φ) / (t0 w / sin φ) … (9)
Taking the derivative of the shear stress with respect to the shear angle and setting the derivative to zero, we get the Merchant equation: φ = 45° + α/2 - β/2
Assumption: shear strength of material is a constant unaffected by strain rate, temperature, and other factors.
Value: The Merchant equation defines the general relationship between rake angle, tool-chip friction, and shear plane angle.
Conclusions: (1) as the rake angle increases, the shear angle increases; (2) as the friction angle increases, the shear angle decreases.
Importance of increasing shear angle:
If all other factors remain the same, a higher shear angle results in a smaller shear plane area. Since the shear strength is applied across this area, the shear force required to form the chip decreases when the shear plane area is decreased. This tends to make machining easier to perform, and it also lowers the cutting energy and cutting temperature.
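The sketch below works through Eqs. (3)-(8) and the Merchant equation for an assumed set of measured forces and cutting conditions (the numbers are illustrative, not from the notes); it uses tan β = μ to obtain the friction angle.

```python
import math

Fc, Ft = 1500.0, 600.0          # measured cutting and thrust forces, N (assumed)
alpha = math.radians(10.0)      # rake angle (assumed)
t0, w = 0.5, 3.0                # undeformed chip thickness and width of cut, mm (assumed)

F = Fc * math.sin(alpha) + Ft * math.cos(alpha)      # friction force, Eq. (3)
N = Fc * math.cos(alpha) - Ft * math.sin(alpha)      # normal force, Eq. (4)
mu = F / N                                           # friction coefficient
beta = math.atan(mu)                                 # friction angle

phi = math.radians(45.0) + alpha / 2 - beta / 2      # Merchant equation

Fs = Fc * math.cos(phi) - Ft * math.sin(phi)         # shear force, Eq. (5)
As = t0 * w / math.sin(phi)                          # shear plane area, Eq. (8)
tau = Fs / As                                        # shear stress, Eq. (7)

print(f"mu = {mu:.2f}, beta = {math.degrees(beta):.1f} deg, "
      f"phi = {math.degrees(phi):.1f} deg, tau = {tau:.0f} N/mm^2")
```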
Created by Dr. Wang
3.1. Introduction to Debugging¶
“The art of debugging is figuring out what you really told your program to do rather than what you thought you told it to do.” — Andrew Singer
This chapter will spend some time talking about what happens when errors occur as well as how to fix the errors that you will inevitably come across.
Before computers became digital, debugging could mean looking for insects impeding the functioning of physical relays as in this somewhat apocryphal tale about Admiral Grace Hopper, a pioneer of computer programming.
Nowadays, debugging doesn’t involve bug guts all over your computer but it can still be just as frustrating. To cope with this frustration, this chapter will present some strategies to help you understand why the program you wrote does not behave as intended.
Many people think debugging is some kind of punishment for not being smart enough to write code correctly the first time. But nobody writes correct code the first time; failure is a normal part of programming. Here’s a fun video to keep in mind as you learn to program.
Video: Ted.com (CC BY-NC-ND 4.0 International)
3.1.1. Learning Goals¶
To understand good programming strategies to avoid errors
To understand common kinds of exceptions and their likely causes
Given a piece of code, identify the syntax errors based on the error messages
Given a piece of code, identify which kind of error it produces (ValueError, TypeError, SyntaxError, ParseError, NameError)
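Here is a minimal sketch (not from the text) of three of the exception types listed above and the situations that typically raise them; SyntaxError and ParseError are reported before the program runs, so they are not demonstrated with try/except here.

```python
try:
    int("forty-two")              # a string that cannot be converted to an integer
except ValueError as err:
    print("ValueError:", err)

try:
    result = "2" + 2              # mixing incompatible types
except TypeError as err:
    print("TypeError:", err)

try:
    print(undeclared_variable)    # a name that was never defined
except NameError as err:
    print("NameError:", err)
```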
Many ancient cultures used the rising or setting sun to create calendars. Observe the setting sun once a week for as long as you can to come up with your own calendar.
What You Do:
- On the first day of your observation, watch the sun set and draw the horizon landmarks at the bottom of the page. The horizon landmarks would be things like trees, houses, streetlights and the horizon itself. For instance, if the horizon is hilly, you would draw that. Hold your pencil out at arm's length to help you estimate the distances between things.
- Use the compass to mark west, northwest and southwest on the page.
- Note on the horizon picture where the sun sets. Draw a small sun, and write the date and time inside it.
- A week later, watch the sun set from the same location. Mark the spot with the date and time on the horizon drawing you made.
- Continue observing the sunset once a week for as long as possible. Add extensions to your piece of paper if you have to.
- Look at your horizon calendar and guess where the sun will set in three days, three months and six months. Mark these spots on your calendar.
- Test your guesses on the days you marked by going outside and watching where the sun sets. Did it set where you thought it would?
- Figure out what time of year it is when the sun sets at the farthest point on the left side of your paper. What time of year is it when it sets on the right side of your paper? Does the sun seem to move faster through the sky during some parts of the year?
What Just Happened?
You just observed a phenomenon that people have been observing all over the world for a very long time. Many ancient cultures made horizon calendars so they could tell what season it was and which one was coming next. They used the calendar to tell them when to plant food, when to move to new camps and when to hold religious festivals.
Excerpted with permission from Out-of-This-World Astronomy by Joe Rhatigan and Rain Newcomb (Lark Books, 2003)
Musical education is essential to your preschooler's creative and cognitive development. Children learn basics of rhythm and beat, as well as language development from early age musical instruction. This instruction can come in the form of piano lessons, music appreciation, fun fingerplays and songs, basic theory instruction, or song and dance. Here is an idea of what topics should be covered for musical instruction, keeping in mind that every program varies, depending on ages, location, etc.
Music Instruction Basics for Preschool Children ages Three and Four:
Three- and four-year-olds will learn to sing familiar songs (like Yankee Doodle) and play musical games. The children will learn basic rhythm via simple instruments (like the drum) as they learn to make music. They will also discover a wonderful instrument built right into their bodies, their voice, which they can use to sing melodies. Children at this age can learn to identify contrasts, like high and low, or loud and soft sounds. They are also able to show much of the music with their bodies. Preschoolers of this age will enjoy being able to step the beat, or even hop, skip, and jog to it. They can also clap, snap, and pat rhythms.
Music Instruction Basics for Preschool Children ages Five and Six:
Children getting ready for Kindergarten will continue to use their voices for singing, chanting, and speaking. Combining music and movement, children will use their bodies for moving, clapping, and instrument playing. An emphasis is also placed on developing good listening skills, along with recognition of beats and tempo. Classroom and some orchestral instruments are identified for children ages five and six as well, both through music and visual aids. Some students will begin to recognize upward and downward movement as well as repetitive phrases.
Singing familiar songs with your preschooler is a great way to involve your child in consistent music basics. Grab "The Top 30 Preschool Songs" by Kidzup
for your MP3 player and churn out the catchy tunes for your preschooler daily. You will soon see your child recognizing verses, chants and musical "rests" in the songs, as he or she becomes more familiar with them. The 30 Preschool Songs CD by Twin Sisters Productions is an inexpensive and fun CD to get as an at-home or in-class musical resource as well. Don't forget about some cool instruments, like the Melissa & Doug Deluxe Beginner Band Set. Above all, have fun exploring the world of music with your preschooler!
by Roberto Lalli
Brian David Josephson
Nobel Prize in Physics 1973 "for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects".
Brilliant Beginning in Theoretical Physics
Brian D. Josephson was born in Cardiff, the capital of Wales, on January 4, 1940, to Abraham Josephson and Mimi Weisbard. After having become interested in theoretical physics during his secondary education at Cardiff High School, in his undergraduate studies Josephson read the mathematical tripos at the eminent Trinity College, Cambridge University. After two years, however, he decided to switch to physics proper because he found the mathematical approach to physical problems too unrealistic. To his professors, Josephson appeared as an extremely brilliant, albeit shy, student. His ability to quickly master the novel developments in theoretical physics became evident when he wrote his first important paper in that field as an undergraduate.
In 1958, the German physicist Rudolf L. Mössbauer had observed a resonance of gamma rays in solid iridium. As opposed to x-rays, resonant emission and absorption of gamma rays had never been observed before that year. While x-rays are primarily emitted during electronic transitions, gamma rays are emitted in nuclear transitions, and previous attempts at observing nuclear resonance in gases had failed because the emission process causes the recoil of the emitting nucleus. As a consequence, the emitted gamma rays do not have sufficient energy to excite the target nuclei.
Not only had Mössbauer observed gamma ray resonances for the first time, but he had also put forward a mechanism that explained why gamma rays resonance occurred in solids, but not in gases. He attributed the effect to the fact that, under certain circumstances, the solid state prevents the nucleus from recoiling, thus preserving the entire energy for the emitted gamma rays. In Mössbauer’s view, the entire crystal to which the emitting nucleus was bound absorbed the recoil thereby reducing to a negligible portion the quantity of energy lost. The effect observed and explained by Mössbauer soon became a fundamental experimental tool for precise observations in physics and chemistry.
Published in 1960, Josephson’s first paper dealt with the theoretical understanding of the Mössbauer effect and especially with its application to the observation of the gravitational redshift—a well-known empirical implication of Einstein’s theory of general relativity. Josephson demonstrated that previous calculations had overlooked that extremely tiny temperature differences between the emitting and receiving nuclei could dramatically affect the observations of gravitational redshift done by means of the Mössbauer effect.
The Discovery of the Josephson Effect
After having earned his bachelor’s degree in 1960, Josephson decided to pursue graduate studies in experimental physics at the Royal Society’s Mond Laboratory of Cambridge University under the supervision of Brian Pippard—a British physicist who was making his name as a world expert in solid-state physics. Among other contributions, Pippard had dedicated many efforts to the understanding of the phenomenon of superconductivity. After World War II, Pippard performed various experiments on the properties of superconductors, especially concerning their response to microwave radiation. One of his experimental findings was that the penetration depth increased as the impurity increased. In order to theoretically explain this empirical fact, Pippard put forward a nonlocal reformulation of the London phenomenological theory of superconductivity. Pippard’s experimental and theoretical work was relevant to the theoretical advances that would eventually lead to the Bardeen-Cooper-Schrieffer theory of superconductivity (better known as BCS theory) in 1957. The BCS theory was the first microscopic theory that successfully derived the empirical phenomena related to superconductivity starting from the first principles of quantum mechanics. Experimental evidence gave strong support to the theory, but some theorists raised some doubts on the theoretical consistency of the theory. The most important of these criticisms focused on the apparent lack of gauge invariance of the BCS theory.
Pippard participated in the debate and closely followed its evolution. When Josephson began working with Pippard, the BCS theory was gaining momentum because of further experimental confirmation and a series of theoretical developments that had clarified the status of the BCS theory with respect to the theoretical problems raised by its critics. Among others, the American theoretical physicist Philip W. Anderson had been very involved in the clarification of the BCS theory, making important contributions to the resolution of the gauge invariance problem. Because of this and other important contributions to the theory of solids, Pippard invited him to join Cambridge University for a sabbatical year in 1961-62.
Under the influence of Pippard and, later, of Anderson’s lectures, the young Josephson became deeply interested in the phenomenon of superconductivity and in the BCS theory. He focused on the physical properties of a system made of two superconductors separated by a thin insulating layer—now called a Josephson junction. The application of the BCS theory to this kind of system led Josephson to discover unforeseen properties in 1962—an achievement that would gain him the Nobel Prize eleven years later.
Josephson got interested in the phenomenon of quantum tunnelling in solids after he became aware of recent observations made by I. Giaever that currents were able to flow between two metals (where either one or both of them were in the superconducting state) separated by a thin insulating film if the thickness of this film was sufficiently small.
Since 1926—when Schrödinger employed de Broglie’s work on the wave properties of particles to develop his wave equation to describe the change in time of a quantum system—quantum mechanics predicted that material particles could penetrate a thin potential barrier—a behaviour absolutely forbidden in classical physics. Empirical evidence led to a rapid acceptance of the novel quantum mechanics, and quantum tunnelling came to be considered as one of the physical effects that most strikingly differentiated quantum mechanics from Newtonian mechanics. While early confirmation of this effect emerged in the field of nuclear physics—when Gamow explained the emission of alpha rays as a quantum tunnelling effect that allowed the alpha particles to pass through the nuclear potential barrier—similar confirmation in the conduction of electrons in solids was much more difficult to achieve.
This state of affairs changed dramatically only in the last years of the 1950s. In 1957, the Japanese industrial physicist Leo Esaki provided convincing experimental evidence for electron tunnelling in semiconductors. After Esaki’s discovery was made public, the field of electron tunnelling in solids received much attention, often connected with its possible industrial applications. Two years later, the Norwegian physicist Ivar Giaever made a discovery that was even more influential for the development of Josephson’s own research project. Since the early 1950s, John Bardeen had been advocating that an energy gap appears in the passage from the normal to the superconducting state of metals. One of the main predictions of the successful BCS theory was the existence of such an energy gap. Giaever believed that the energy gap would have testable consequences on the tunnelling of electrons between metals separated by an insulating layer when one or both of the metals were cooled down below their critical temperature. While in the tunnelling of electrons between two metals separated by an insulating film the current is proportional to the applied voltage, in a metal-insulator-superconductor junction the energy gap hypothesis implied that the current-voltage characteristic would be drastically altered. After having performed a series of experiments on quantum tunnelling in metals, Giaever decided to test the effect on the relationship between current and voltage implied by the BCS theory. With the support of many colleagues at the General Electric Laboratory, Giaever was able to successfully carry out his experiments employing lead and aluminium. The result was that as soon as one of the two metals (the lead) changed from the normal to the superconducting state, the current-voltage characteristic markedly changed in just the way expected by Giaever. As a second step, he also performed an experiment on two superconductors separated by a thin oxide layer. Even in this case, the result was as expected. Once the second metal also reached the superconducting state, the resistance became negative.
Giaever’s results rapidly became one of the most interesting experimental novelties in the field of superconductivity. Besides being an accurate empirical confirmation of the BCS theory, Giaever’s discovery left many theoretical questions unexplained. These experiments, along with Anderson’s unpublished work on superconductive tunnelling and some particular features of the BCS theory, contributed to trigger Josephson’s intellectual curiosity resulting in the discovery of the Josephson effect. According to his recollections, Josephson was intrigued by Anderson’s pseudospinorial reformulation of the BCS theory, which made the spontaneous symmetry breaking of the theory manifest. One of the features of the spontaneous breakdown of symmetry in the BCS theory is that the wave function of the ground state of Cooper pairs has a definite phase in addition to an amplitude. Josephson got interested in the problem of whether this phase had observable consequences and, then, to find a way to empirically test the mechanism of spontaneous symmetry breaking in the superconducting state. The only way, Josephson’s reasoning went, to observe physical properties related to the phase in the BCS wave function was to look for phenomena related to the phase difference between two superconductors.
This train of thought led Josephson to focus on the recent experimental results obtained by Giaever. Josephson’s calculation showed that, besides the current observed by Giaever, one had to expect a weaker superconducting current due to tunnelling of Cooper pairs. Since the formation of Cooper pairs was the essential mechanism underlying the phenomenon of superconductivity, Josephson’s theory implied that supercurrents could flow through a barrier. Josephson’s calculations predicted two main effects. The first implication was that without applied potential one would observe the tunnelling of direct supercurrent, whose intensity depended on the phase difference between the Cooper pair functions in the two superconductors. The second one was that high frequency alternating supercurrents could flow through the barrier if a certain voltage was applied. The frequency of the oscillating supercurrents was independent of the properties of the superconductors and depended on universal constants through the formula 2eV/h (where e is the charge of the electron, h is the Planck constant, and V is the bias potential).
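As a small numerical sketch (not part of the original article), the AC relation f = 2eV/h described above can be evaluated directly from the CODATA values of the elementary charge and the Planck constant:

```python
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

def josephson_frequency(voltage):
    """Frequency (Hz) of the alternating supercurrent for a given bias voltage (V)."""
    return 2 * e * voltage / h

for v in (1e-6, 1e-3):            # example bias voltages: 1 microvolt, 1 millivolt
    print(f"V = {v:.0e} V  ->  f = {josephson_frequency(v):.3e} Hz")
# roughly 483.6 MHz of oscillation per microvolt of bias
```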
Josephson put forward his theory of superconducting tunnelling at the age of 22, when he was still a first-year graduate student. One of the problems of his theoretical calculations was that the effects predicted—the Josephson supercurrents—were so large that they should have already been observed in previous experiments performed by Giaever and others. Anderson, however, encouraged Josephson to publish his study, while Pippard suggested that Josephson perform the experiment on the tunnelling of supercurrents himself. Josephson followed both suggestions. The experiment he performed did not produce any positive results and ended up as a minor chapter of Josephson’s thesis. Josephson’s paper, however, was published in the newly established journal Physics Letters with the title “Possible New Effects in Superconductive Tunnelling” in 1962. Although very short, the paper contained the building blocks as well as the major predictions of Josephson’s theory of tunnelling of supercurrents in a superconductor-insulator-superconductor system—an effect now universally known as the Josephson effect.
The Rapid Acceptance of the Josephson Effect
Anderson reacted very positively to Josephson’s theoretical findings, but the majority of leaders of the solid-state community did not share Anderson’s enthusiasm. Some eminent theoretical physicists found it very hard to understand Josephson’s theoretical formalism and its relationship with the hypothesised physical phenomena. Pippard himself believed that the simultaneous tunnelling of two electrons was virtually impossible on the grounds that even the tunnelling of single electrons was a very rare event. The eminent Nobel Laureate John Bardeen—who had headed the small theoretical team that had successfully developed the BCS theory—explicitly dismissed Josephson’s theory. In September 1962, Bardeen publicly challenged Josephson on the reality of the effect at the Eighth International Conference on Low Temperature Physics in London. While Josephson argued that the mathematics of the BCS theory predicted the effect, Bardeen’s physical intuition led him to believe that electron pairing could not extend across the insulating barrier, however thin the insulating layer might be. The only way to solve the issue was by means of experimental tests.
As a matter of fact, a phenomenon attributable to the DC Josephson tunnelling had already been observed before Josephson elaborated his theory, in experiments performed by Giaever and, independently, by J. Nicol, S. Shapiro, and P. Smith. The effect, however, had been attributed to small breaches in the insulating layer due to the extreme thinness of the employed films. Anderson was soon enthusiastic about the work done by Josephson and began collaborating with the British experimental physicist John Rowell to test the DC Josephson effect at the AT&T Bell Telephone Laboratory.
In the meantime, Anderson was also building on Josephson’s work to make elaborate theoretical predictions, which had already been derived by Josephson himself for his dissertation. This state of affairs resulted in a certain delay and confusion in the publication of the theoretical results. Josephson collected his own theoretical elaborations for an application to a research fellowship at Trinity College, but did not publish this work. This fellowship thesis contained the full account of the theory underlying the Josephson effect, as well as all its physical implications. Anderson received one of the few copies of Josephson’s work, which made him aware that Josephson had already worked out all the theoretical results Anderson had independently obtained. For this reason, Anderson decided not to publish his own research, for he did not want to receive excessive credit for results that had already been obtained by the younger Josephson. Eventually, the full account of the Josephson effect was not easily available and several of the earlier findings even had to be re-discovered at a later period.
Anderson, however, successfully carried out with Rowell the experimental part of his research project on the DC Josephson effect, and in 1963 they published the first paper to explicitly claim empirical observation of the effect. By showing the dependence of the effect on the varying magnetic field, Anderson and Rowell proved that the effect could not depend on metallic shorts. A few months after Anderson and Rowell’s confirmation, the second prediction of Josephson’s theory—namely, the occurrence of AC supercurrent tunnelling—was also confirmed, although in an indirect way. Shapiro published his observations of constant-voltage steps in the current-voltage characteristic induced by microwave radiation.
Once these experiments provided convincing evidence for the reality of the Josephson effect, it was rapidly accepted by the majority of physicists working in the field of condensed matter. Remarkably, the empirical confirmation of the Josephson effect also became a confirmation of the validity of the BCS theory as well as of its physical implications. The Josephson effect became relevant as a tool in a variety of applications in physics and engineering such as, e.g., the SQUIDs (the Superconducting Quantum Interference Devices, which are extremely sensitive magnetometers) and the precise measurement of e/h ratio, as the frequency of AC Josephson supercurrents depends on this ratio. The great importance of Josephson’s breakthrough lies in the fact that a microscopic quantity (namely, the phase-dependent energy) had an observable influence on macroscopic variables.
In 1973, the year after Bardeen, L. Cooper and J. Schrieffer were jointly awarded the Nobel Prize in Physics for the discovery of the BCS theory, Josephson received one half of the Nobel Prize in Physics “for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects.” Leo Esaki and Ivar Giaever shared the other half of the 1973 Nobel Prize in Physics for their related research concerning the tunnelling phenomena in semiconductors and superconductors.
Theoretical physics and the mind-body problem
The Josephson effect had already been accepted as one of the most striking features of superconductivity and one of the most useful consequences of the BCS theory, when Josephson earned his PhD in 1964 with a dissertation entitled “Non-linear conduction in superconductors.” After his PhD, Josephson spent one year as visiting research professor at the University of Illinois. Apart from this experience in the USA, Josephson remained at Cambridge University all through his professional career till his retirement in 2007. In the 1960s, Josephson continued to do research on solid-state physics as a member of the Theory of Condensed matter group at the Cavendish Laboratory contributing to the theory of superconductivity and with a series of investigations on critical phase transitions.
After he received the Nobel Prize, Josephson became one of the few authoritative physicists to publicly legitimise paranormal phenomena as something worthy of scientific investigation. In the late 1960s, Josephson got interested in parapsychology with the explicit aim of looking at the possible connections between the unintuitive physical implications of quantum mechanics and the possible physical interactions between mind and material world implied by allegedly observed paranormal phenomena. This unusual approach to theoretical physics probably developed in the early 1970s when he started practicing Transcendental Meditation—a meditation technique that was becoming very popular in that period. The winning of the Nobel Prize in 1973 suddenly contributed to an improvement of his working conditions, leading to his promotion to professor of physics at Cambridge. It also provided Josephson with more freedom to pursue his own interests. In the following years, Josephson tried to extend the range of action of theoretical physics into other fields usually unexplored in this kind of activity. The main idea developed from Bell’s theorem and its possible implications on consciousness as well as from one of the most striking features of quantum mechanics—according to which the observer influences the observations. For Josephson, it was meaningful to freely extrapolate from these properties of quantum mechanics in the attempt to understand those relationships between consciousness and the material world that go under the name of paranormal occurrences.
Although harshly criticized for his research in these fields—which was largely considered wild speculation and an undue extension of the scientific range of action—Josephson remained firm in his conviction that the scientific community usually acts on the basis of consensus and that paranormal phenomena deserved much more attention than modern scientists were disposed to give. In the 1970s, other physicists were developing a similar interest in the possible implications of quantum mechanics (especially Bell’s theorem and quantum entanglement) for some paranormal activities such as psychokinesis and clairvoyance. A group of physicists, some of them associated with the University of California at Berkeley, began investigating paranormal claims on physical grounds and actively participated in the research on parapsychology at the Stanford Research Institute. The Fundamental Fysiks Group—as they began to call their free association—organized, or was part of, several activities aimed at promoting a quantum mechanical approach to parapsychology. Josephson participated in some of these activities and also publicly defended this kind of approach against the views of the majority of his fellow physicists.
The contributions of Josephson to these kinds of research have been numerous and very controversial. He has tried to apply complex theoretical formalism to issues such as psycholinguistics, music, complex systems, and artificial intelligence. These interests led him to establish a research project entitled Mind-Matter Unification Project at Cavendish Laboratory in 1996. Since the 1970s, his work has been considered at the fringe of scientific research and his opposition to widely held views has led to strong public controversies with his colleagues and authoritative journals such as Nature, whose publication policies he strongly criticised. Josephson also defended scientific hypotheses and experimental findings, such as water memory and cold fusion, which were commonly considered as pseudo-scientific or, even worse, scientific hoaxes.
In view of his opinion on these kinds of issues, Josephson occupies an almost unique niche at the boundary between the authoritative status provided by his Nobel Prize—and his undeniable ability in theoretical physics—and a fringe status related to his more recent, although long-lasting, convictions. Up to this day, Josephson continues to enjoy his borderline position to promote his views and attack the scientific criteria often utilized to dismiss his own work as well as that of other scientists who do not have the support of their communities.
Anderson, P. W. (1970) How Josephson discovered his effect. Physics Today, 23, pp. 23-29.
Brian David Josephson (2009) HowStuffWorks.com. Retrieved on January 4, 2015.
Cooper, L. N., & Feldman, D. (eds.) (2011) BCS: 50 years. World Scientific, Singapore.
Hoddeson, L., Braun, E., Teichmann, J., & Weart, S. (eds.) (1991) Out of the Crystal Maze: Chapters from the History of Solid-State Physics. Oxford University Press, New York.
Interview of Philip Anderson with Alexei Kojevnikov, November 23, 1990. Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA. Retrieved on December 14, 2014.
Josephson, B. D. (1973) Nobel Lecture: The Discovery of Tunnelling Supercurrents. In Stig Lundqvist (eds.) (1992) Nobel Lectures, Physics 1971-1980. World Scientific Publishing Co., Singapore, pp. 157-164.
Kaiser, D. (2011). How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival. W. W. Norton & Company, New York.
Leggett, A. J. (1995) Superfluids and Superconductors. In Brown, L., Pippard, B., & Pais, A. (Eds.). Twentieth Century Physics (Vol. 2). AIP, New York, pp. 913-966.
McDonald, D. G. (2001) The Nobel Laureate Versus the Graduate Student. Physics Today, July 2001, pp. 46–51.
Press Release: The 1973 Nobel Prize in Physics. Nobelprize.org. Nobel Media AB 2014. Retrieved 10 January 2015. http://www.nobelprize.org/nobel_prizes/physics/laureates/1973/press.html
Developmental-Behavioral Screening and Surveillance
Early intervention can prevent school failure and reduce the need for expensive special education services, and it is associated with graduating from high school, avoiding teen pregnancy and violent crime, and becoming employed as an adult. Recent research from Head Start showed that for every $1 spent on early intervention, society as a whole saves $17.00. In the US, early intervention is guaranteed under the Individuals with Disabilities Education Act (IDEA) beginning at birth.
Because almost all children receive health care, primary care providers (e.g., nurses, family medicine physicians, and pediatricians) are charged by their various professional societies, by the Centers for Medicare and Medicaid Services, the Centers for Disease Control, and by IDEA to search for difficulties and make needed referrals. So what are the methods used to detect children with difficulties and how effective are they?
Screening tools are brief measures designed to sort those who probably have problems from those who do not. Screens are meant to be used on the asymptomatic and are not necessary when problems are obvious. Screens do not lead to a diagnosis but rather to a probability of a problem. The kind of problem that may exist is generally not defined by a screening test. The screens used in primary care are generally broad-band in nature, meaning that they tap a range of developmental domains, typically expressive and receptive language, fine and gross motor skills, self-help, social-emotional, and, for older children, pre-academic and academic skills. In contrast, narrow-band screens focus only on a single condition such as mental health problems, and may parse, via factor scores, the probability of, for example, depression and anxiety versus attention deficits versus disorders of conduct. Typically, broad-band screens are used first and may be the only type of measure used to make referrals in primary care, referrals which are then followed up by in-depth or diagnostic testing, often with narrow-band screens used alongside.
Screening measures require careful construction, research, and a high level of proof. High quality screens are ones that have been standardized (meaning administered in exactly the same way every time) on a large current (meaning in the last decade) nationally representative sample. Screens must be shown to be reliable (meaning that two different examiners get virtually the same results, and that measuring the same child over a short period of time, e.g., two weeks, returns nearly the same result). Screens must have proven validity, meaning that they are given alongside lengthier measures and found to have a strong relationship (usually via correlations). Validity studies should also view which problems are detected (e.g., movement disorders, language impairment, autism spectrum disorder, learning disabilities).
But the acid test of a quality screen, and what sets apart the psychometry of screens from any other type of test, is proof of accuracy. This means that test developers must show proof of sensitivity, i.e., the percentage of children with problems who are detected, and specificity, meaning the percentage of children without problems who are correctly identified, usually with passing or negative test results. The standards for sensitivity and specificity are 70% to 80% at any single administration. While this may seem low, development is a moving target and repeated screening is needed to identify all in need. This also means that even quality screens make errors, but one study of four different screens showed that over-referrals (meaning children who fail screens but who are not found to be eligible for services upon more in-depth testing) are children with psychosocial risk factors and below-average performance. This is helpful information for marshalling non-special-education services, such as Head Start, after-school tutoring, Boys and Girls Clubs, parent training, etc. (see the resource list at the end of this article for a description of quality measures and links to publishers). Screens are expensive to produce, translate, support, etc., and so all developmental screens are copyrighted products that must be purchased from publishers. However, most are inexpensive to deliver, with time and material costs between $1.00 and $4.00 per visit.
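As a simple illustration of the arithmetic behind these accuracy figures (the numbers below are hypothetical, not taken from any actual validation study):

```python
def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Accuracy of a screen judged against in-depth diagnostic testing."""
    sensitivity = true_pos / (true_pos + false_neg)   # share of children WITH problems who fail the screen
    specificity = true_neg / (true_neg + false_pos)   # share of children WITHOUT problems who pass the screen
    return sensitivity, specificity

# Hypothetical sample: 100 children with problems, 900 without.
sens, spec = sensitivity_specificity(true_pos=78, false_neg=22,
                                     true_neg=720, false_pos=180)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")   # 78%, 80%
```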
Surveillance is the longitudinal process of getting “the big picture” of children’s lives and intervening in potential problems preferably before they develop. Surveillance includes eliciting and addressing parents’ concerns, and monitoring and addressing psychosocial risk factors that may deter development (e.g., limited parental education, more than 3 children in the home, single parenting, poverty, parental depression or other mental health problems, problematic parenting style such as not talking much with children, reading to them, etc.).
Surveillance involves the periodic use of broad-band developmental-behavioral screens but typically other kinds of measures are also deployed (preferably with quality tools enjoying psychometric support). Surveillance measures include tools eliciting and addressing parents’ concerns, measures of psychosocial risk, parenting style, autism spectrum disorder, mental health, etc. Some available measures offer both surveillance and screening via longitudinal tracking forms for monitoring issues and progress. A combination of surveillance and screening is recommended by the American Academy of Pediatrics in their July 2006 policy statement.
So Do Developmental Surveillance and Screening Work?
Studies on the effectiveness of early detection show that when quality screening tests are used routinely, early detection and early intervention enrollment rates rise to meet prevalence figures identified by the Centers for Disease Control (e.g., see The National Library of Medicine for supporting studies and an example of an effective initiative conducted by The Center for Health Care Strategies). But, in the absence of quality measurement, only about 1/4 of eligible children ages 0 – 3 are detected and enrolled in early intervention. So why are detection rates typically so low?
Challenges to Early Detection in Primary Care
There are 8 major reasons why children with difficulties are not identified in primary care:
- The tendency to use informal milestone checklists. These lack criteria, and their items are not well-defined. For example, age-specific encounter forms typically used at well-child visits may include an item such as “knows colors”. What does that mean? Must the child name colors? If so, how many? Does he or she have to point to colors when named? Or does he or she simply need to match them? The difference in skill levels required for each of these tasks ranges from about age 2 ½ to age 4 ½. Further, informal checklists lack psychometric scrutiny, so we don’t have proof that asking about color knowledge is even a good predictor of developmental delays. In contrast, quality screening tools use questions proven to predict developmental status, and because such measures are standardized, the same task is presented the same way every time along with clear criteria for performance.
- Over-reliance on clinical observation without supporting measurement. Clinical judgment is helpful (e.g., for identifying pallor, clamminess, fussiness and other symptoms of illness), but development and developmental problems are usually far too subtle to simply observe. Most children with difficulties are not dysmorphic and so lack any visible physical differences from other children. Most walk and talk, but how well they do these things requires careful measurement. We do not put a hand to a forehead to detect a fever. We measure. Development and behavior require measurement with quality instruments if we are to detect delays and disabilities.
- Failing to measure at each well-visit. Development develops, and developmental problems do too. A child may be developing normally at 9 months, but will she be at 18 months if she is not using words? Or at 24 months if she is not combining words? We can’t predict outcomes very well (except when problems are severe). Repeated measurement, with quality tools, is essential.
- Difficulties communicating with families. Many parents don’t raise concerns about their children. Those with limited education often do not know that primary care providers are interested in development and behavior, child-rearing, etc. Many informal questions to parents do not work well. For example, “Do you have worries about your child’s development?” What is wrong with that question? The word “worries” is too strong, and only about 50% of parents know what “development” means. Only about 2% of families will answer, even while the prevalence of problems in the 0 – 21 year age range is 16% - 18% (www.cdc.gov). In contrast, quality tools use questions proven to work and are far more likely to detect difficulties.
- Limited awareness of referral resources. Many children, even if administered a good screening tool and found to have problematic results, are not referred. Why? Many primary care providers are unaware of referral resources in their communities. Why? Early interventionists have not consistently informed providers of their services. They may not respond like the ideal sub-specialist (e.g., calling back, informing about results, engaging in collaborative decision making about treatment, etc.). See www.DBPeds.org for links to referral resources.
- Failure to use a quality screening instrument. Unfortunately, the most famous and well-known of screens, the Denver-II, lacks psychometric support. It under-identifies by about 50% or vastly over-refers, depending on how questionable scores are handled. That it is also a hands-on measure taking longer to give than the usual 15 – 20 minute well-visit means that most professionals use only selected items, and may thus further degrade what little accuracy there is. More accurate options, and ones more workable for primary care in that they can be completed by parents in waiting or exam rooms, include Parents' Evaluation of Developmental Status (PEDS), Ages and Stages Questionnaire (ASQ), and PEDS:Developmental Milestones (PEDS:DM), with all three tools offering compliance with the tenets of both surveillance and screening. Practices with nurse practitioners or developmental specialists, and early intervention intake services, may have the time to administer accurate but lengthier measures that elicit skills directly from children (e.g., the Brigance Screens (developed by Albert Brigance), the Bayley Infant Neurodevelopmental Screener (BINS), or the Battelle Developmental Inventory Screening Test (BDIST)).
- Failing to monitor referral rates. Many providers are unaware of the prevalence of disabilities and delays and get little feedback when they’ve failed to identify a child with difficulties. Families often leave the practice or stop showing up for well-visits. So, there is an acute need to consider the prevalence of difficulties in light of personal referral rates: Overall about 1 in 6 children between 0 and 21 will need special assistance: about 4% of children 0 – 2, 8% of children 0 – 3, 12% of children 0 – 4, and 16% of children 0 – 8.
- Constraints of time and money. Many health care providers feel there is little time for screening during busy well visits. Generally this complaint reflects lack of awareness of screening measures that can be completed in waiting rooms (e.g., paper-pencil tools that families can self-administer independently, thus saving providers substantive time). Reimbursement for early detection has been notoriously poor. However, in 2005 the Centers for Medicare and Medicaid Services enabled providers to add the -25 modifier to their preventive service code and to bill separately from the well-visit for 96110 (the developmental-behavioral screening code). Nationally, reimbursement now averages about $10. Some states have handled this mandate differently (e.g., North Carolina provides higher reimbursement for well care but does not allow screening to be unbundled from the well-visit for separate billing). Typically, private payers honor Medicaid mandates and follow suit with billing and coding, although this has not always occurred. The American Academy of Pediatrics has a Coding Hotline and advocates with private payers to provide reimbursement for screening.
The challenges of early detection in primary care are surmountable. But health care providers need to be better engaged by the early childhood community, trained in the use of tools that are accurate and effective in primary care, and reimbursed appropriately for their time. A number of model initiatives demonstrate that challenges of early detection are not insurmountable. Early detection initiatives that have encouraged greater contact between early childhood programs and primary care providers have greatly increased the likelihood of referral (see www.dbpeds.org for information on programs such as First Signs, ABCD, Pride, etc.)
- American Academy of Pediatrics
- The American Academy of Pediatrics’ Section on Developmental and Behavioral Pediatrics website
- the National Library of Medicine
- The Center for Health Care Strategies
- Parents' Evaluation of Developmental Status (PEDS)
- Ages and Stages Questionnaire (ASQ)
- PEDS:Developmental Milestones (PEDS:DM)
- Brigance Screens
- Bayley Infant Neurodevelopmental Screener (BINS)
- Battelle Developmental Inventory Screening Test (BDIST).
- Meade Movement Checklist (MMCL).
Conditional probability and the product rule
If one is planning a picnic for the Fourth of July, one does not care what fraction of the days in the year it rains, but what fraction of the days in July it rains. For example, the probability of getting the jack of spades is 1/52, but if you know you are getting a black card, the probability becomes 1/26; if you know you are getting a jack, the probability is 1/4; and if you know you are getting a black jack, the probability is 1/2. Similarly, if you know you are getting a red card, the probability of getting the jack of spades is zero.
Formally, we define the probability of A conditioned on B as P(A|B) = P(A and B)/P(B). The division on the right-hand side assures that conditional probabilities sum to one, just as unconditional probabilities do. Especially with equally likely events, conditional probabilities can be interpreted as probabilities in a restricted universe. Hence the probability of getting the queen of spades conditioned on (or given that) you get a spade is P(Q and S)/P(S) = (1/52)/(1/4) = 1/13 by the formula, but it can also be calculated as 1/13 by counting equally likely events in the universe restricted to spades.
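These card probabilities can be checked by brute-force counting over the 52 equally likely cards; the deck representation below is just one convenient choice for the sketch.

```python
from fractions import Fraction
from itertools import product

ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = list(product(ranks, suits))

def prob(event, given=None):
    """P(event | given) by counting over the (possibly restricted) universe of cards."""
    universe = [c for c in deck if given is None or given(c)]
    return Fraction(sum(1 for c in universe if event(c)), len(universe))

queen_of_spades = lambda c: c == ("Q", "spades")
spade = lambda c: c[1] == "spades"
black = lambda c: c[1] in ("spades", "clubs")

print(prob(queen_of_spades))          # 1/52
print(prob(queen_of_spades, spade))   # 1/13
print(prob(queen_of_spades, black))   # 1/26
```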
Note that in general P(A|B) is not equal to P(B|A).
Exercise: P(heart|jack)=? P(jack|heart)=? P(one-eyed jack|heart)=? P(king|face card)=? P(face card|king)=?
From the definition of conditional probability, it is immediate that P(A and B) = P(A|B)P(B) = P(B|A)P(A). This is the product rule, e.g., P(king|heart) = 1/13, P(heart) = 1/4, therefore P(king and heart) = 1/13 × 1/4 = 1/52
Definition: A and B are said to be independent if P(A|B) = P(A); this means that conditioning on B gives you no further information. For example, knowing that one is a boy provides no further information as to whether one will get an A. Substituting this definition into the product rule yields an alternative definition of independence: A and B are independent if P(A and B) = P(A) × P(B). One can readily verify that being a heart and being a jack are independent, but being a one-eyed jack and being a heart are not independent.
Competencies: If you roll a pair of dice, what is the probability that one die is a 5 if the total is 8? What is the probability that the total is 8 if one die is a five? Are one die being a 5 and the total being 8 independent events?
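One way to check your answers to the competency questions is to enumerate all 36 equally likely rolls, as in this sketch:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # all 36 equally likely outcomes

def prob(event, given=None):
    universe = [r for r in rolls if given is None or given(r)]
    return Fraction(sum(1 for r in universe if event(r)), len(universe))

has_five = lambda r: 5 in r
total_is_8 = lambda r: sum(r) == 8

print(prob(has_five, total_is_8))    # P(one die is 5 | total is 8)
print(prob(total_is_8, has_five))    # P(total is 8 | one die is 5)
print(prob(lambda r: has_five(r) and total_is_8(r)) == prob(has_five) * prob(total_is_8))
```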
Reflection: What, if any, are the relationships between complementary, mutually exclusive, and independent events?
Challenge: If P(A) = .4, P(B) = .5, and P(A or B) = .7, are A and B independent?
Does consciousness—our awareness that we are perceiving something—arise from a special region in the brain, or from the coherent workings of multiple regions? Analyzing data from electrodes implanted in the brains of epilepsy patients, French researchers suggest the latter, although their results, published March 17 in the online journal PLoS Biology, also point to a role for special, consciousness-related circuits in the prefrontal cortex.
Lionel Naccache, senior author on the paper and a researcher at Pitié-Salpêtrière, a teaching hospital in Paris, says he found that ordinary nonconscious visual perception was reflected in a quick sweep of activity from the primary visual cortex at the back of the brain to the prefrontal cortex. Such activity was liable to fade away just as quickly, but above a certain threshold, it evoked a sustained “long-distance coherent communication” between prefrontal areas and other areas of the cortex, a phenomenon that corresponded to conscious perception in his research participants.
“It’s a nice paper,” says Christof Koch, a neuroscientist and consciousness researcher at the California Institute of Technology. “If one can generalize this result [in other humans] and maybe do it in monkeys, it could be useful as a signature of consciousness. So it’s definitely a step forward.”
The phenomenon of consciousness lacks any basis in current standard theories of physics or biology. Cognitive scientists therefore have not had any objective method for measuring it and have had to content themselves with a search for its “neural correlates”—the specific neuronal activity that is necessary and sufficient for people to be aware of whatever their brain perceives. But even this quest has proved difficult.
To isolate these correlates, explains Gabriel Kreiman, a specialist on the neural workings of vision at Harvard Medical School, “we need to try to dissociate conscious from unconscious processing under situations where all the other variables remain as constant as possible.”
Such a task has never been easy. It requires a careful experimental design as well as fine-grained measurement of the geography of brain activity and its timing patterns—measurement that generally lies beyond the capacity of magnetic resonance and other noninvasive imaging technologies.
Yet a research team led by Naccache has now tried to meet these criteria, using electrodes implanted in participants’ brains and a procedure that can momentarily hide images from consciousness.
Surgeons implanted electrodes deep into the brain tissue of 10 epilepsy patients being prepared for surgery. The electrodes guided them to the precise brain region responsible for the patients’ seizures. Only in circumstances such as these is it considered ethical to make use of implanted brain electrodes for scientific purposes, but the findings based on such experiments are increasing.
Participants were shown a random sequence of images featuring either a word or a blank white space, each of which flashed on a screen for 29 milliseconds. For a randomly chosen selection of these images, the researchers flashed a “mask” of “&” symbols for 400 milliseconds after an image disappeared.
The masking technique is standard in perceptual research, yet as Naccache and his colleagues acknowledge, the presence of the mask was likely to complicate the data by activating brain areas on its own. However, by measuring the difference between the brain state induced by masked blank images and that induced by masked word images, the researchers were able, in principle, to isolate the state induced by word images. To confirm whether a participant had perceived an image, consciously or not, the researchers used words that were either “threatening” or “nonthreatening” and had participants press a button to indicate which of these came to mind.
Using the electrodes, the researchers eavesdropped on the brain’s electrical activity following each image presentation. Over several hundred trials for each participant, they were able to map what appeared to be key differences in brain-activity patterns between conscious and nonconscious word perception.
Essentially, they found that during nonconscious perception, activity occurred in multiple areas of the cortex, yet never became coherent—firing in sync—over large distances; this nonconscious activity also dissipated relatively quickly.
By contrast, during conscious perception the activity was able to “ignite” into much longer-term, self-reinforcing, interconnected activity across widely separated cortical areas. This coherent activity included areas of the prefrontal cortex and appeared to be concentrated in the “gamma wave” range of frequencies, which previous research has linked to attention and consciousness.
Naccache says that his group, which includes cognitive scientists at the French national research institute INSERM, now plans to do confirmatory experiments with a slightly different experimental design, using a technique known as “attentional blinking”—which does not weaken the intensity of the stimulus image in the way the masking technique does.
Other labs, including Kreiman’s, are working on similar experiments using a technique known as “binocular rivalry,” in which conscious perception is made to shift between different images presented to the left and right eyes. “We are just at the very beginning of trying to formulate the relevant questions about consciousness,” says Kreiman.
Koch, too, notes that even when research is able to identify a region such as the prefrontal cortex as a key player in the circuits of consciousness, it needs to go further still: “We have to ask, what’s different about prefrontal areas? What’s the difference between prefrontal and parietal such that activity in prefrontal cortex can give rise to consciousness, but not activity in parietal cortex? There has to be some more abstract, general explanation for that.”
Among possible explanations, Koch tentatively favors the “Information Integration” theory of consciousness put forward by neuroscientist Giulio Tononi at the University of Wisconsin. Tononi proposes that consciousness is a fundamental property arising from any system that uses interdependent, information-exchanging parts. By this logic, the most powerful consciousness-generating networks of the brain would be those that integrate the largest amount of neural activity—as the results from Naccache and colleagues also suggest.
The theory implies, however, that consciousness is not limited to highly evolved animals or even to biological brains. As Koch puts it, “Whether it’s my iPhone or the roundworm C. elegans or the human brain, it would differ only in the amount of consciousness. But all would be conscious.”
A diagram showing the elliptical orbits of some solar system objects.
Kepler's 1st Law: Orbits are Elliptical
With Tycho Brahe's observations in hand, Kepler set out to determine
if the paths of the planets against the background stars could be
described with a curve. By trial and error, he discovered that an
ellipse with the Sun at one focus could accurately describe the orbit
of a planet about the Sun.
Ellipses are described mainly by the lengths of their two axes. The longest one is called the major axis, and the shorter one is the minor axis. The ratio of these two lengths determines the eccentricity (e) of the ellipse, a measure of how elongated it is: e = √(1 − (b/a)²), where a is half the major axis and b is half the minor axis. Circles have e = 0, and very stretched-out ellipses have an eccentricity nearly equal to 1.
It's important to note that planets, while they do move on ellipses,
have nearly circular orbits. Comets are a good example of objects in
our solar system that may have very elliptical orbits. Compare the
eccentricities of the objects in the diagram.
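To make that comparison concrete, here is a minimal sketch (not from the original article) that computes eccentricity from the lengths of the two axes; the axis values are invented for illustration.

```python
import math

def eccentricity(major_axis, minor_axis):
    """Eccentricity of an ellipse from its major and minor axis lengths."""
    a, b = major_axis / 2, minor_axis / 2   # semi-major and semi-minor axes
    return math.sqrt(1 - (b / a) ** 2)

# Hypothetical axis lengths (same units): a circle, a nearly circular
# planet-like orbit, and a stretched comet-like orbit.
print(eccentricity(2.0, 2.0))    # 0.0   -> circle
print(eccentricity(2.0, 1.998))  # ~0.045 -> nearly circular, planet-like
print(eccentricity(2.0, 0.5))    # ~0.97  -> very stretched, comet-like
```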
Once Kepler figured out that planets move around the Sun on ellipses, he then discovered another interesting fact about the speeds of planets as they go around the Sun: the line connecting a planet and the Sun sweeps out equal areas in equal times, which became his second law.
What are the five elements of a story and why are they important?
Character: Strong, memorable characters help keep the reader engaged with the story. Setting: Whether a writer spends a lot of time describing the setting or leaves most of it to the reader's imagination, setting helps a reader envision the world. Plot: Plot gives the story purpose; without it, what are the characters doing, and what is there for the reader? Conflict: No matter the type of conflict, it helps bring action to the story. It can also help readers further identify with or cheer for a character. Theme: The theme helps unify the story under a central idea.
Photography was only about 30 years old and had only been capturing the horrors of war for less than a decade when the Civil War broke out in 1861. What impact did this new technology have on the war?
Civil War photos brought the war even closer to home for many civilians. While some showed daily military life, many showed the destruction of war. Alexander Gardner and Mathew Brady were among the most prominent war photographers of the time. They arrived at battlefields and often rearranged dead soldiers so their photos elicited a stronger response. This helped show the great toll the war took. This meant even those farthest from the fighting could not forget what was happening.
What was Louis XIV's reasoning for transforming his father's hunting lodge into the Palace of Versailles?
Louis XIV was an ambitious man, intent on absolutism. He wanted everyone in France and elsewhere to acknowledge and respect his power. A type of feudalism was still being practiced in parts of France and although the king was considered to have a divine right to rule, some nobles weren't keen on relinquishing their revenue and power. Louis XIV saw this as a threat to his rule. He believed in a strong central government and that could only be achieved by squashing the nobility's potential to consolidate power. France, like much of Europe, was often under threat of war, either by its own doing or that of another country. At the time, France was a great military power and Louis XIV wanted his government to exude the same strength. Building the Palace of Versailles and centralizing his government there achieved both of these goals. Versailles was built to house government offices and residences; an important detail since Louis XIV required many nobles to live there at least part of the year and abide by strict court rules. |
The synchronous speed of an AC induction motor is the theoretical speed at which the motor would spin if the induced magnetic field in the rotor perfectly followed the rotating magnetic field of the stator. Synchronous speed is measured in rotations per minute (RPM) and is given by the following formula:
            120 * electric_frequency
RPM  =  -------------------------------
               number_of_poles
Where the electric frequency in North America is 60 Hz and the number of poles is typically 2 or 4. Frequency can be adjusted using a variable frequency drive. The 120 comes from 60 seconds per minute (converting cycles per second into cycles per minute) times 2 poles per electric cycle (poles come in pairs, so the number is always even). Thus two very common motor speeds are 1800 RPM and 3600 RPM. Increasing the number of poles yields a higher-torque, lower-speed motor.
However, to produce torque, an induction motor must run with some slip. Slip is the result of the induced field in the rotor windings lagging behind the rotating magnetic field in the stator windings; this relative motion is what induces the rotor currents that produce useful torque. Slip is expressed as a percentage of synchronous speed and is given by the following formula:
        (synchronous_speed - actual_speed)
S  =  --------------------------------------  * 100%
                synchronous_speed
Typical slip values at full load torque range from 1% (for large 100 HP motors) to 5% (for small 1/2 HP motors). Slip is not a concern in most applications, unless precise speed control is required. One solution is to use a variable frequency drive controlled by a feedback encoder to keep the motor at a specific speed. Another solution is to use a synchronous motor. These motors magnetize the rotor in one of a variety of ways, which keeps the rotor locked in step with the rotating magnetic field of the stator, eliminating slip. |
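The two formulas above can be checked with a short sketch; the frequency, pole count, and measured speed below are illustrative values rather than data for any particular motor.

```python
def synchronous_speed_rpm(frequency_hz, poles):
    """Synchronous speed of an AC induction motor in RPM."""
    return 120 * frequency_hz / poles

def slip_percent(synchronous_rpm, actual_rpm):
    """Slip as a percentage of synchronous speed."""
    return (synchronous_rpm - actual_rpm) / synchronous_rpm * 100

n_sync = synchronous_speed_rpm(60, 4)   # 1800 RPM for a 4-pole, 60 Hz motor
print(n_sync)
print(slip_percent(n_sync, 1750))       # ~2.8% slip at this (assumed) load
```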
Consider the semipalmated sandpiper, a wee little thing that seldom reaches more than 6 inches from head to tail.
Small in stature, but large in stamina, these birds breed in the Arctic and winter along the coasts of South America, often after nonstop oceanic flights of up to 2,500 miles from local shores. They are among the most abundant of small shorebirds.
Still, their numbers are dwindling — by as much as 80 percent over 30 years in a recent bird count in northern Atlantic states — and scientists are trying to determine the cause.
“We’re not really sure why their numbers are down,” said Nancy Pau, the wildlife biologist at the U.S. Fish and Wildlife Service’s Parker River National Wildlife Refuge on Plum Island. “A lot of the shorebirds are in decline. There are many theories. It could be habitat loss both on their wintering grounds and along their migration belt or perhaps a predator at their breeding or wintering grounds.”
Many of the birds come through the East Coast, spending time on the beaches and salt marshes of Cape Ann and the North Shore during their fall migration to the south. Their migratory flyway often brings them into contact with development that has shrunk their preferred pit stops and feeding grounds.
“They use beaches, and they use wetlands a lot,” Pau said. Those are areas that have changed significantly in the last 200 years, Pau said.
Pau is part of an international project that could provide some of the answers to questions of what’s happening with the sandpipers.
Utilizing advanced technology, biologists in the United States and Canada have developed a system to track where the birds go, what path they fly and how long it takes them.
The system employs tiny radio-telemetry tags — so tiny that they also are used to track dragonflies — that are glued to the birds’ feathers. The nano-tags, each with a unique code, emit a signal on a prescribed frequency that allows researchers to track the birds more efficiently. |
To improve children’s oral health and keep them active in the classroom, education for parents may be the first step. From the early 1970s to the 1990s, the number of cavities in the baby teeth of children ages 2 to 11 declined, according to the National Institute of Dental and Craniofacial Research. However, in their latest study, that trend flipped. A small yet significant rise in tooth decay showed that 42 percent of kids have some form of cavity or dental caries. That’s about 21 million American children.
Education starts at home, where parents are lifelong teachers. Since day one, we learn from what our parents do, how they treat others and how they take care of themselves. You are your kids’ learning models. The attitudes you maintain about oral health inspire theirs and can steer them to live a healthy and balanced lifestyle. Even if your kid seems to rebel against you sometimes, little Johnny or Sara will take after you more than you realize.
After all, tooth decay in primary teeth has hefty implications on dental health later in life.
“We do know from a number of studies that when children have tooth decay in their baby teeth, they tend to have decay later in their adult teeth,” lead researcher Bruce Dye of the National Center for Health Statistics at the Centers for Disease Control and Prevention told ABC News.
Encourage your children to eat nutritious meals and avoid frequent snacking. If you pack his or her lunch for school, make sure to throw in an apple, banana or some other fruit. Teach them from a young age to develop good habits for flossing and brushing. Dentists recommend that adults and kids floss once a day. Interestingly enough, it has been shown that flossing before brushing is more likely to develop into a habit. Why? Often after we finish with the toothbrush we feel like our mouths are sufficiently clean, so we postpone using the thread until tomorrow … or sometimes next month. Always floss before brushing! |
Radio-frequency identification (RFID) is the use of a small device (typically referred to as an RFID tag) applied to or incorporated into a product or object, or attached to a person, for the purpose of identification and tracking using radio waves. Some tags can be read from several meters away and beyond the line of sight of the reader.
Most RFID tags contain at least two parts. One is an integrated circuit for storing and processing information, modulating and demodulating a radio-frequency (RF) signal, and other specialized functions. The second is an antenna for receiving and transmitting the signal.
There are generally two types of RFID tags: active RFID tags, which contain a battery and can transmit signals autonomously, and passive RFID tags, which have no battery and require an external source to provoke signal transmission.
While emissions reductions from the energy and transport sectors and the role of forests and soils as terrestrial carbon sinks have been at the centre of climate change discussions, the critical role of the oceans, and its wetland ecosystems in carbon capture and storage has been largely overlooked.
Our oceans play a significant role in the global carbon cycle. Not only are they the world’s largest long-term carbon sinks, they store and cycle 93% of the earth’s CO2. The world’s most crucial climate-combating wetland ecosystems – mangroves, saltmarshes and seagrasses, known collectively as Blue Carbon sinks – together with estuaries capture and store the equivalent of up to half of the carbon emissions from the entire global transport sector, estimated at around 1 billion metric tons of carbon each year. Yet we are degrading these wetland ecosystems at a rapid pace through urban expansion, coastal development and inappropriate catchment management practices.
Over the next 20 years, protecting, conserving and rehabilitating these precious Blue Carbon sinks globally would equate to 10% of the reductions needed to keep the amount of CO2 in the atmosphere at safe levels below 450ppm and would offset 3-7% of current fossil fuel emissions – over half of that projected for reducing rainforest deforestation. As with forests, maintaining or improving the ability of the oceans and their wetland ecosystems to absorb and bury CO2 is a crucial aspect of climate change mitigation for Australia.
“Science is now also telling us that we need to urgently address the question of ‘blue’ carbon. An estimated 50% of the carbon in the atmosphere that becomes bound or ‘sequestered’ in natural systems is cycled into the seas and oceans – another example of nature’s ingenuity for ‘carbon capture and storage’. However, as with forests we are rapidly turning that blue carbon into brown carbon by clearing and damaging the very marine ecosystems that are absorbing and storing greenhouse gases in the first place. This in turn will accelerate climate change, putting at risk communities including coastal ones along with other economically important assets such as coral reefs, freshwater systems and marine biodiversity as well as ‘hard’ infrastructure from ports to power stations. Targeted investments in the sustainable management of coastal and marine ecosystems – the natural infrastructure – alongside the rehabilitation and restoration of damaged and degraded ones, could prove a very wise transaction with inordinate returns.”
Achim Steiner UN Under-Secretary General and Executive Director, UNEP 2009
Mainstreaming a Blue Carbon agenda
Globally, functioning coastal ecosystems are ranked among the most economically viable of all ecosystems and are estimated to be worth over $US 25,000 billion annually. These Blue Carbon sinks play a crucial role in maintaining climate, health, food security and economic development across coastal Australia. They also provide food, shelter and nursery areas for around 70% of the fish species we eat, amazing animals like seahorses and our native wildlife including the Orange-Bellied Parrot and the Water Mouse. WetlandCare Australia is working with our partners to improve the integrated management and connectivity of our coastal and marine environments, including the protection, conservation and rehabilitation of our Blue Carbon sinks. Together we are working to have these wetland ecosystems recognised in voluntary carbon and compliance schemes and to ensure a holistic ecosystem approach is adopted by governments that will not only reduce and mitigate the effects of climate change, but increase Australia’s food security, benefit health and productivity and help protect fragile coastal areas - a win-win mitigation strategy!
WetlandCare Australia’s 2015 conservation goals for Blue Carbon in Australia:
- Development of a national policy and framework for conserving, managing and rehabilitating Australia’s Blue Carbon sinks for carbon storage
- Rehabilitation of 20,000ha of priority Blue Carbon sinks across coastal catchments of Australia
- Establishment of at least 3 interdisciplinary research projects to improve our understanding of the role of Australian Blue Carbon sinks in carbon storage across tropical, sub tropical and temperate climates, including a series of pilot projects that build carbon storage measurements associated with ecosystem rehabilitation at priority locations
- Inclusion of coastal wetlands in the Voluntary Carbon Market and any national compliance scheme, as well as development of a national Blue Carbon Fund for the protection and management of coastal wetland carbon storage
- Completion of a feasibility study and pilot project to explore opportunities arising from Blue Carbon sinks and Voluntary Carbon Markets for indigenous and remote communities.
- A national approach to measuring and managing cumulative risk associated with urban and industry expansion pressures on Blue Carbon sink carbon storage
OUTCOME: As a mainstream component of Australia’s climate change mitigation and adaptation strategies Blue Carbon is an important factor in improving the health of critical coastal ecosystems that sustain ocean biodiversity (mangroves, saltmarsh and seagrass). |
Supermassive Black Holes Are 10 Billion Times More Massive Than the Sun
The next time you take a trip into deep space, make sure that you avoid the galaxies of NGC 3842 and NGC 4889 because you just might get yourself sucked into one of the two largest black holes in the known universe.
Recently discovered by an international team of astronomers, these two black holes form the center of the galaxies NGC 3842 and NGC 4889, and are located over 300 million light years from Earth. They're at least as heavy as 10 billion Suns, according to the University of California, Berkeley, and threaten to consume anything and everything within an area that's five times the size of our own solar system.
What's most fascinating to scientists isn't just that these massive beasts are large, but that they also may be the remnants of quasars--galactic nuclei that typically surround a black hole. Quasars are perhaps the most luminous, powerful, and energetic objects in the universe, and are often found at the centers of young galaxies like NGC 4261, 3C 273 (the brightest quasar in Earth's sky), and perhaps even our own Milky Way Galaxy.
Basically, at the center of a typical young galaxy is a small black hole surrounded by a quasar, which is itself powered by the black hole. As the black hole sucks in more and more matter, it grows increasingly powerful and more massive, and it eventually sucks in the quasar that once surrounded it.
Interestingly enough, some quasars are so massive that they have jets of gas shooting out of them that extend many light years into space. These jets are formed from matter that gets torn apart as it approaches the black hole and releases some matter and energy. According to the Department of Physics & Astronomy at the University of Tennessee, the galaxy NGC 4261 has a jet that stretches 88 thousand light years from its black hole.
These two supermassive black holes probably used to have massive quasars just like NGC 4261, but the black holes went unnoticed for so long because they have sucked up everything around them. The stars of the massive galaxies surrounding them orbit well outside the black holes' event horizons--the point of no return beyond which not even light can escape.
According to UC Berkeley graduate student Nicholas McConnell, these black holes have an event horizon 200 times larger than the Earth's orbit, and their gravitational influence is so strong that it affects objects within a 4,000-light-year diameter. The researchers say that these black holes are 2,500 times as massive as the black hole at the center of the Milky Way Galaxy.
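The quoted size of the event horizon can be sanity-checked with the Schwarzschild radius formula, r_s = 2GM/c². The sketch below is a back-of-the-envelope calculation using standard physical constants; it is not part of the researchers' work.

```python
# Constants (SI units)
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8      # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU    = 1.496e11     # Earth's mean orbital radius, m

def schwarzschild_radius_m(mass_kg):
    """Radius of the event horizon of a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

r_s = schwarzschild_radius_m(1e10 * M_SUN)   # a 10-billion-solar-mass hole
print(r_s / AU)   # ~200 -> event horizon roughly 200 times Earth's orbit
```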
The search for these supermassive black holes was based on the results of computer simulations of galaxy mergers by UC Berkeley astronomy professor Chung-Pei Ma. As you can imagine, these supermassive black holes were found in equally supermassive galaxies that may contain as many as one trillion stars.
The orthography of a language specifies a standardized way of using a specific writing system (script) to write the language. In languages that use multiple writing systems, more than one orthography can exist, as in Kurdish, Uyghur, Serbian, Inuktitut or Turkish. Orthography is distinct from typography.
Orthography generally refers to spelling; that is, the relationship between phonemes and graphemes in a language. Sometimes spelling is considered only part of orthography, with other elements including hyphenation, capitalization, word breaks, emphasis, and punctuation. Orthography thus describes or defines the set of symbols (graphemes and diacritics) used in a language, and the rules about how to write these symbols.
Most natural languages developed as oral languages, and writing systems have usually been crafted or adapted afterwards as representations of the spoken language. In an etic sense, the rules for writing systems are arbitrary, which is to say that any set of rules could be considered "correct" if the users of the language mutually agreed to convene upon that set of rules as the standard way to represent the spoken language. However, as standardization takes stronger hold, an emic epistemology of "right and wrong" develops, in which compliance with, or violations of, the standards are viewed as right, or wrong, in a way analogous to moral right and wrong, and in which each word has a written identity that is no less standardized than its oral-aural identity, which is emically unitary. The term orthography is sometimes used in a linguistic sense to refer to any method of writing a language, without judgment as to right and wrong, with a scientific understanding that orthographic standardization exists on a spectrum of strength of convention. But the original sense of the word stem, which evolved long before linguistic science, implies a dichotomy of correct and incorrect, and the word stem is still most often used to refer not just to a way of writing a language but more specifically to the thoroughly standardized (emically "correct") way of writing it.
Letters or words cited as examples of orthography are placed between angle brackets: ⟨a⟩. This contrasts with phonemic transcription, which is placed between slashes, and phonetic transcription, which is placed between square brackets: /a/, [a].
An orthography may be described as "efficient" if it has one grapheme per phoneme (distinctive speech sound) and vice versa. An orthography may also have varying degrees of efficiency for reading or writing. For example, diverse letter, digraph, and diacritic shapes contribute to diverse word shapes, which aid fluent reading, while heavy use of apostrophes or diacritics makes writing slow, and the use of symbols not found on standard keyboards makes computer or cell phone input awkward.
See main article: Phonemic orthography. A phonemic orthography is an orthography that has a dedicated symbol or sequence of symbols for each phoneme (distinctive speech sound) and vice versa, that is, graphemes and phonemes are bijective functions of one another. Russian, Spanish and Italian are close to being phonemic, and English is among the least phonemic.
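As a toy illustration of the one-grapheme-per-phoneme criterion, the sketch below checks whether a small spelling table is one-to-one in both directions. The mini-lexicons are invented for illustration and make no claims about any real language.

```python
def is_phonemic(lexicon):
    """Return True if the grapheme-phoneme mapping implied by the
    lexicon is one-to-one in both directions (a bijection)."""
    g2p, p2g = {}, {}
    for graphemes, phonemes in lexicon:
        for g, p in zip(graphemes, phonemes):
            # Each grapheme must always map to the same phoneme...
            if g2p.setdefault(g, p) != p:
                return False
            # ...and each phoneme must always map to the same grapheme.
            if p2g.setdefault(p, g) != g:
                return False
    return True

# Hypothetical mini-lexicons: (spelling, phonemic transcription) pairs.
shallow = [("mano", "mano"), ("luna", "luna")]
deep    = [("cat", "kat"), ("city", "siti")]   # <c> maps to both /k/ and /s/

print(is_phonemic(shallow))  # True
print(is_phonemic(deep))     # False
```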
A morpho-phonemic orthography considers not only what is phonemic, as above, but also the underlying structure of the words. For example, in English, /s/ and /z/ are distinct phonemes, so in a strictly phonemic orthography the plurals of cat and dog would be cats and dogz. However, English orthography recognizes that the /s/ sound in cats and the /z/ sound in dogs are the same underlying element (archiphoneme), automatically pronounced differently depending on its environment, and therefore writes them the same despite their differing pronunciation.
Korean hangul has changed over the centuries from a highly phonemic to a largely morpho-phonemic orthography, and there are moves in Turkey to make that script more morpho-phonemic as well. Japanese kana are almost completely phonemic, but have a few morpho-phonemic aspects, notably in the use of ぢ di and づ du (rather than じ ji and ず zu, their pronunciation in standard Tokyo dialect), when the character is a voicing of an underlying ち or つ – see rendaku.
Another group of languages with a high rate of morpho-phonemic alternation is the Austronesian family. This often causes problems for foreigners trying to learn Philippine languages like Tagalog, Cebuano, Ilocano and others, and the same difficulty faces people learning Bahasa Melayu and Bahasa Indonesia.
See main article: Orthographic depth. A "deep" orthography is one in which there is not a one-to-one correspondence between the letters and the phonemes in the language, such as that of English. Most languages of western Europe (which are written with the Latin alphabet), as well as the modern Greek language to a lesser extent (written with the Greek alphabet), have deep orthographies. In some of these, there are sounds with more than one possible spelling, usually for etymological or morpho-phonemic reasons (like /dʒ/ in English, which can be written with ⟨j⟩, ⟨g⟩, ⟨dg⟩, ⟨dge⟩, or ⟨ge⟩). In other cases, there are not enough letters in the alphabet to represent all phonemes. The remaining ones must then be represented by using such devices as diacritics, digraphs that reuse letters with different values (like ⟨th⟩ in English, whose sound value is normally not that of ⟨t⟩ followed by ⟨h⟩), or simply inferred from the context (for example the short vowels in abjads like the Arabic and Hebrew alphabets, which are normally left unwritten). The syllabary systems of Japanese (hiragana and katakana) are examples of almost perfectly shallow orthography – exceptions include the use of ぢ and づ (discussed above) and the use of は, を, and へ to represent the sounds わ, お, and え, as relics of historical kana usage.
Another term to describe this characteristic is "defective orthography". This term, however, clearly implies the superiority of shallow orthographies—a point that advocates of morphophonemic writing would dispute. Using the terms "deep" and "shallow" is therefore more neutral in relation to the question of what types of orthography are superior.
Complex orthographies often combine different types of scripts and/or utilize many different complex punctuation rules. Some widely accepted examples of languages with complex orthographies include Thai, Chinese, Japanese, and Khmer. Orthography aids are available as a type of foreign language writing aid to assist learners in their writing in these languages. |
Music has always been a part of Chinese culture, from the ancient sounds echoing from the Dynastic Era to the modern pop hits that dominate the airwaves. In Chinese mythology, the creator of music was named Ling Lun, who built an instrument out of bamboo pipes that imitated the different tones produced by local birds. Several surviving documents from the Zhou Dynasty speak of a well-established musical tradition even at this early time. In these centuries, there was an Imperial Music Bureau that was responsible for evaluating the various court, military, and folk music to determine which types should be considered representative of Chinese culture. Many of the folk songs were revered by the rulers of the time, and written collections exist that document many of the popular tunes.
While the tradition of Chinese music remained largely unchanged in terms of style, the opening of trade routes saw many new instruments being discovered and incorporated into the famous folk tunes. However, at the turn of the twentieth century, a new musical shift was poised to happen. Many Chinese musicians began to study in foreign countries and returned with the classical music that was popular in many European countries. This music inspired what is called the Republic of China era and was often a source of controversy as the new sounds were adapted to Chinese ears. In most of the major Chinese cities, a symphony orchestra was created, and the growth of radio broadcasting allowed many of the Western sounds to become familiar to home listeners. In addition to classical influences, many of the musicians also started to add jazz stylings to the traditional Chinese songs.
The Communist takeover of China led to a new role for music in society. As all of the new media platforms were controlled by political interests, there were many limitations on what was considered to be acceptable music. However, after the Tiananmen Square events in 1989, new music began to explode in China, with attention being paid to both the pop and rock genres. However, the rock music movement was largely subdued by the political powers and is still a smaller underground music scene than the pop songs that fill the airwaves. The majority of modern Chinese music is made in either Beijing or Shanghai, and the piracy problems of the nation lead most albums to be released in Hong Kong or Taiwan before they are released domestically. To experience the music live, there are two main music festivals in China, the Midi Modern Music Festival and the Snow Mountain Music Festival. Both celebrate the current hits of the year and are outdoor events that draw very large domestic and international crowds.
Chinese people, like anywhere else in the world, like to buy clothes, travel abroad, upgrade their computers, and attend Chinese rock concerts. They have come a long way since the origins of Chinese Rock, the spark that changed Chinese music forever. |
Among the most important of the biological molecules are the carbohydrates. Like the previous biomolecules studied in this course, carbohydrates are a large reason for life as we know it. Carbohydrates provide a source of energy for organisms and also play a structural role in some creatures. These molecules, also known as saccharides, have the empirical formula (CH2O)n. Like previous biomolecules studied, such as the proteins and nucleic acids, individual units are important, as are the polymers of these units. In this page I will summarize carbohydrates, giving a framework of understanding for the remainder of this project. I will begin by defining some commonly used terms. (Definitions from Biochemistry, Mathews and van Holde.)
Aldose: a monosaccharide aldehyde.
Diastereoisomers: molecules that are stereoisomers but not enantiomers. Isomers that differ in configuration about two or more asymmetric carbon atoms and are not complete mirror images. They have two or more chiral carbons. (Remember: a molecule with n chiral centers has 2^n stereoisomers.)
Enantiomers: stereoisomers that are nonsuperimposable mirror images of each other. Also known as optical isomers, based on the fact that the enantiomers of a compound rotate polarized light in opposite directions.
Ketose: a monosaccharide ketone.
Metabolism:The totality of the chemical reactions that occur in an organism.
Monosaccharides: the simplest carbohydrates; small; monomeric. Includes glucose.
Oligosaccharides: A carbohydrate composed of a few monomer units. Includes maltose, a disaccharide (two glucose units).
Photosynthesis: The process by which energy from light is captured and used to drive the synthesis of carbohydrates from CO2 and H2O. Photosynthesis is essential for all living things: plants are the primary food-producers, while animals either feed on plants or feed on other animals. It is an oxidation-reduction reaction. The overall reaction: x CO2(g) + y H2O(l) + light energy ------> Cx(H2O)y(aq) + x O2(g)
Polysaccharides: Polymers of monosaccharides, can be quite complex. Includes amylose (starch).
Tautomer: Structural isomers that differ in the location of their hydrogen atoms and double bonds.
The smallest monosaccharides we will discuss here have n=3, the trioses. There are two trioses, which are also tautomers - glyceraldehyde (an aldose) and dihydroxyacetone (a ketose). These two molecules are capable of interconversion through an enediol intermediate. Glyceraldehyde exists in enantiomer (D & L) forms, because it has a chiral carbon (second carbon). The D form predominates in nature.
Tetroses have n = 4, and have two chiral carbons in the aldose form. Because a molecule with n chiral carbons has 2^n stereoisomers, there are four stereoisomers for every aldotetrose.
For these molecules (with n=5 and n=6, respectively) the most common form under physiological conditions is the ring structure. Pentoses (non-ring) have three chiral carbons, and thus eight stereoisomers. Hexoses have a large number of possible conformations. Almost all of the hexoses are important biologically, especially glucose (blood sugar) and fructose (fruit sugar). Glucose is the building block of polysaccharides.
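The 2^n rule can be tabulated for the open-chain aldose series; the chiral-carbon counts below (1 for an aldotriose through 4 for an aldohexose) are the standard ones and serve only to illustrate the arithmetic.

```python
# Number of stereoisomers = 2**n, where n is the number of chiral carbons.
aldoses = {
    "aldotriose (e.g. glyceraldehyde)": 1,
    "aldotetrose": 2,
    "aldopentose": 3,
    "aldohexose (e.g. glucose)": 4,
}

for name, chiral_carbons in aldoses.items():
    print(f"{name}: {2 ** chiral_carbons} stereoisomers")
```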
Of all of the oligosaccharides, disaccharides are the most biologically important. They are composed of two monosaccharide units. Example molecules are sucrose, lactose and maltose.
These molecules are the U-hauls of life - they perform storage functions, such as in the case of starch. Polysaccharides are also the bricks of many organisms - they serve as structural units, such as the chitin and cellulose of plants, or peptidoglycan of bacterial cell walls. Another example is glycogen, a storage polysaccharide of animals and microbes.
As in proteins, the primary sequence of the monomer units determines the structure of the polysaccharide. If a polysaccharide consists of all of the same monomer units, it is a homopolysaccharide. When the monomer units vary, it is a heteropolysaccharide.
To go to an excellent page on Nomenclature and Stereochemistry of carbohydrates, visit this page from Iowa State.
This page, from the United Kingdom, features a Monosaccharide Browser with space filling Fischer Projections.
Need to review your functional groups? Or would you like to see an overview of carbohydrates? Visit the Buglady.
To study the chemistry of biopolymers, including all of the major ones studied in this class, go to visit this great review page.
An excellent reference page, with links to many major journals, is put out by Harvard University
Carbohydrates are an essential part of our diet. However, there is always the worry of too much of a good thing. New diet drugs are constantly under research and testing that are intended to decrease our desire for the sweet stuff. Further, recent research has disproven that hyperactivity in children is linked to monosaccharide consumption.
Hands On Physical Science
No-cost, easy-to-use activities teach physical science and develop scientific thinking!
Grades: 1 - 8
Developing Critical Thinking through Science presents standards-based, hands-on, minds-on activities that help students learn basic physical science principles and the scientific method of investigation.
Each activity is a 10- to 30-minute guided experiment in which students are prompted to verbalize their step-by-step observations, predictions, and conclusions. Reproducible pictures or charts are included when needed, but the focus is inquiry-based, hands-on science.
Preparation time is short, and most materials can be found around the home. Step-by-step procedures, questions, answer guidelines, and clear illustrations are provided. Practical applications at the end of each activity relate science concepts to real-life experiences.
Have your students ever completed science experiments yet been unable to explain
- why the results came out the way they did?
- how science concepts relate to the real world?
After completing Developing Critical Thinking through Science, your students will be able to answer the whys and hows. They will have a terrific time learning real hands-on science, and you'll enjoy it too! |
- Main article: Inductive deductive reasoning
Convergent and divergent thinking are the two types of human response to a set problem that were identified by J. P. Guilford.
Convergent production is the deductive generation of the best single answer to a set problem, usually where there is a compelling inference. For example, finding the answer to the question: what is the sum of the internal angles of a triangle?
Divergent production is the creative generation of multiple answers to a set problem. For example, find uses for 1 metre lengths of black cotton.
Guilford observed that most individuals display a preference for either convergent or divergent thinking. Scientists and engineers typically prefer the former and artists and performers, the latter.
There is a movement in education that maintains divergent thinking might create more resourceful students. Rather than presenting a series of problems for rote memorization or resolution, divergent thinking presents open-ended problems and encourages students to develop their own solutions to problems.
Divergent or synthetic thinking is the ability to draw on ideas from across disciplines and fields of inquiry to reach a deeper understanding of the world and one's place in it. |
The Kipchak languages (also known as the Kypchak, Qypchaq, or Northwestern Turkic languages) are a sub-branch of the Turkic language family spoken by more than 25 million people in an area spanning from Ukraine to China.
The Kipchak languages share a number of features that have led linguists to classify them together; the sketch after the list below applies several of these sound changes to the cited proto-forms. Some of these features are shared with other Turkic languages; others are unique to the Kipchak family.
- Change of Proto-Turkic *d to /j/ (e.g. *hadaq > ajaq "foot")
- Loss of initial *h sound (preserved only in Khalaj. See above example.)
- Extensive labial vowel harmony (e.g. olor vs. olar "them")
- Frequent fortition (in the form of assibilation) of initial */j/ (e.g. *jetti > ʒetti "seven")
- Diphthongs from syllable-final */ɡ/ and */b/ (e.g. *taɡ > taw "mountain", *sub > suw "water")
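The following sketch applies the listed changes to the cited proto-forms as naive, ordered string rules. It is only an illustration: real sound change is conditioned by syllable structure and relative chronology, and word-final position is used here as a stand-in for "syllable-final".

```python
def kipchak_reflex(proto: str) -> str:
    """Apply the sound changes listed above as simple string rules."""
    word = proto.lstrip("*")          # strip the reconstruction marker
    if word.startswith("h"):          # loss of initial *h
        word = word[1:]
    if word.startswith("j"):          # fortition of initial *j to ʒ
        word = "ʒ" + word[1:]
    word = word.replace("d", "j")     # *d > /j/
    if word.endswith(("ɡ", "b")):     # final *ɡ, *b > w (diphthongization)
        word = word[:-1] + "w"
    return word

for proto in ["*hadaq", "*jetti", "*taɡ", "*sub"]:
    print(proto, "->", kipchak_reflex(proto))
# *hadaq -> ajaq, *jetti -> ʒetti, *taɡ -> taw, *sub -> suw
```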
The Kipchak languages may be broken down into four groups, based on geography and shared features:
- Kipchak–Bulgar (Uralian, Uralo-Caspian): Bashkir and Tatar
- Kipchak–Cuman (Ponto-Caspian): Karachay-Balkar, Kumyk, Karaim, Krymchak. Urum and Crimean Tatar appear to have a Kipchak–Cuman base, but have been heavily influenced by Oghuz languages.
- Kipchak–Nogai (Aralo-Caspian): Nogai, Siberian Tatar, Karakalpak and Kazakh.
- Kyrgyz–Kipchak: Kyrgyz and Southern Altai. |
Lithium-ion batteries are proving themselves to be an effective way to store electricity for everything from laptops to electric vehicles (EV). One limitation they have is the ability to very rapidly store large amounts of energy and put it back out again just as quickly. This situation arises in an EV as it must store electrical energy created by regenerative braking when coming to a stop and then return that energy to accelerate the vehicle up to speed. For this application, supercapacitors are under investigation.
A traditional capacitor stores electric energy statically by charge separation in an electric field between two electrode plates. A supercapacitor (also called an ultracapacitor) stores 10 to 100 times the energy per unit volume of a conventional capacitor and can accept and deliver electric charge much faster than a rechargeable battery.
Supercapacitors store electrical energy via two different storage principles: static double-layer capacitance and electrochemical pseudocapacitance. Each contributes to the energy storage, depending upon the structure and materials used for the electrodes and electrolyte.
As with a traditional capacitor, electrostatic storage of electrical energy is achieved by charge separation between two electrodes. The separation of charge is achieved in a double layer at the interface of the electrode and the electrolyte.
Electrochemical supercapacitors have two electrodes separated by an ion-permeable separator and an electrolyte that ionically connects both electrodes. When a voltage is applied, ions in the electrolyte form electric double layers of opposite polarity to the electrode's polarity. The positive electrode will have a layer of negative ions at the electrode/electrolyte interface, in addition to a charge-balancing layer of positive ions combined onto the negative layer. The negative electrode will have the opposite combination of double layers of positive and negative ions.
Based upon these two electrical energy storage mechanisms, there are three types of supercapacitors. Double-layer capacitors (EDLCs) have higher electrostatic double-layer capacitance than electrochemical pseudocapacitance. Pseudocapacitors use transition metal oxide or conducting polymer electrodes and operate with high electrochemical energy storage. And hybrid capacitors use a combination of electrostatic and pseudocapacitance to store electrical energy. Hybrids use asymmetric electrodes, one which exhibits mostly electrostatic storage and the other mostly electrochemical storage. Electrochemical storage in a hybrid supercapacitor can increase the energy storage capacity by a factor of 10 over double-layer electrostatic storage.
The asymmetry of the electrodes in a hybrid supercapacitor is important. The double-layer electrostatic storage provides high specific power, while the electrochemical pseudocapacitance provides high specific energy.
The electrolyte in a hybrid supercapacitor determines its operating voltage, temperature range, and storage capacity. The electrolyte dissociates into positive cations and negative anions that make the electrolyte electrically conductive between the two electrodes.
Although supercapacitors can store and retrieve electrical charges quickly, their limit has generally been a lack of energy density—typically about a tenth of the energy density of a lithium-ion battery. The big push has been to develop supercapacitors that have enough energy density so that they can work alongside lithium-ion batteries—accepting the energy from regenerative braking and returning that energy during acceleration.
A team of researchers at the Technical University of Munich (TUM) might be getting close to just such a supercapacitor. According to a TUM news release, they have developed a novel, powerful, and sustainable graphene hybrid material that serves as the positive electrode in a supercapacitor. They combine it with a proven negative electrode based on titanium and carbon.
According to the TUM release, “The new energy storage device does not only attain an energy density of up to 73 Wh/kg, which is roughly equivalent to the energy density of a nickel-metal hydride battery, but also performs much better than most other supercapacitors at a power density of 16 kW/kg.” They say that the secret of the new supercapacitor’s success is the use of a combination of different materials and refer to the supercapacitor as "asymmetrical."
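One way to read those two figures together is to ask how quickly the device could deliver its full charge. The sketch below does that arithmetic; the lithium-ion reference values are typical textbook numbers assumed here only for comparison.

```python
# Back-of-the-envelope check on the quoted figures (73 Wh/kg, 16 kW/kg).
def discharge_time_s(specific_energy_wh_per_kg, specific_power_w_per_kg):
    """Time to deliver the full stored energy at rated power, in seconds."""
    return specific_energy_wh_per_kg * 3600 / specific_power_w_per_kg

tum_supercap = discharge_time_s(73, 16_000)   # ~16 s at full power
li_ion       = discharge_time_s(250, 500)     # ~30 min (assumed values)

print(f"TUM hybrid supercapacitor: {tum_supercap:.0f} s at full power")
print(f"Typical Li-ion cell:       {li_ion / 60:.0f} min at full power")
```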
The Key is Graphene
Graphene consists of a single layer of carbon atoms, arranged in a two-dimensional lattice structure. Graphene is the strongest material ever tested and has electrical properties that make it attractive for use as an electrode. The positive electrode of the TUM supercapacitor is made with chemically modified graphene, combined with a nanostructured metal-organic framework, a so-called MOF.
"The high-performance capabilities of the material is based on the combination of the microporous MOFs with the conductive graphene acid," said Jayaramulu Kolleboyina, a guest scientist who worked at TUM on the research. The graphene MOF material has a large specific surface area and controllable pore sizes, along with high electrical conductivity. The large surface area allows a supercapacitor electrode to collect a large number of charge carriers, increasing its ability to store electrical energy.
Another advantage of the new graphene material is its strong chemical bonds between its components. Strong stable bonds mean that more charging and discharging cycles are possible, without a degradation in performance. Typical lithium-ion batteries have a useful life of about 5,000 charge and discharge cycles. The supercapacitor developed by TUM is said to exhibit 88% of its initial capacity after 10,000 cycles.
The graphene-MOF combination shows great promise, not only for high energy density supercapacitors but also for other types of electrodes used in electrical and energy storage devices.
Kevin Clemens is an engineering consultant who has worked on automotive and environmental projects for more than 40 years. |
Instructions: Identify AND Explain the significance of 3 of the following 5 terms, concepts, or cases. The “identification” should take the form of a definition and/or explanation of the term, concept, or case. The “significance” can take the form of its significance to American government, an example, or an application to current events. Word Limit: 100 words max (about 3-5 sentences max) for Each of the 3 ID Questions chosen
1. Wickard v. Filburn
2. The Tenth Amendment:
3. Voting Rights Act of 1965
4. The Brandenburg Test
5. Miranda v. Arizona
Part II: Short Answer Questions – Civil Liberties / Civil Rights
Definition of Bacteria
Bacteria - A diverse group of ubiquitous microorganisms all of which consist of only a single cell.
Bacteria can be characterized in a number of ways, for example their reaction with Gram's stain or on the basis of their metabolic requirements (e.g. whether they require oxygen) or shape. A bacterial cell may be spherical (coccus), rodlike (bacillus), spiral (spirillum), comma shaped (vibrio), corkscrew-shaped (spirochaete), or filamentous. The majority of bacteria range in size from 0.5 to 5 um. Many are motile, bearing flagella. Some can produce endospores.
In general bacteria reproduce only asexually, by simple cell division, but a few groups undergo a form of sexual reproduction in the form of conjugation.
Bacteria are largely responsible for the decay and decomposition of organic matter, recycling elements such as carbon, oxygen, nitrogen and sulphur when using organic matter as a fuel. A few bacteria obtain their energy by means of photosynthesis (such as cyanobacteria), some are saprotrophs and others are parasites, causing disease.
The symptoms of bacterial infections are often produced by toxins released by the bacteria.
Lichens are epiphytic and absorb their nutrients from the air. This means they collect pollutants like nitrogen, sulfur, and toxic metals which can be analyzed in a lab. Collecting lichen tissue to analyze its elemental profile helps make determinations about the air quality of remote and sensitive places and is part of some mandated monitoring efforts. There are many wilderness areas that are too remote to use mechanical monitoring equipment. Lichen biomonitoring helps fill in these gaps. |
Egyptologist: The life of slaves in Egypt was not as hard as we think
They could marry Egyptian women and had similar jobs as other inhabitants of the Nile Valley. Contrary to popular belief, they did not build the pyramids, and their life was not harder than that of Egyptians doing hard jobs. Dr. Andrzej Ćwiek talks about slaves in Egypt.
It was a long-held view in science that the Egyptian pyramids were built by thousands of oppressed slaves. Later, the researchers believed that peasants were forced to build them - the construction of tombs was supposed to take place only outside the season of agricultural work.
"None of these concepts survived the test of time. Pyramids and other monumental royal structures were built by highly qualified workers who devoted their entire lives to this activity" - says Egyptologist from Adam Mickiewicz University and the Archaeological Museum in Poznań, Dr. Andrzej Ćwiek. He adds that this does not mean that there were no forced labourers in Egypt.
"Slaves in our contemporary understanding of the word were basically only prisoners of war, foreigners" - says Dr. Ćwiek. Their largest number appeared on the Nile in the imperial period, the New State (1550-1069 BC), when the Egyptian borders expanded greatly as a result of successful conquests. The largest number of Asians, inhabitants of Syro-Palestine, and many Nubians from the area of "black Africa" came to the land of the pharaohs this way.
"But the economy for Egypt had never been based on slavery, as was the case in Rome, for example. Forced labourers were not a homogeneous and cohesive social group" - says the Egyptologist. Without their work, the foundations of the state would not crumble.
What was their fate? Contrary to popular belief, according to Dr. Ćwiek their life did not have to be harder than that of Egyptians performing hard work, for example in quarries.
Dr Ćwiek emphasizes that slaves usually assimilated quickly in the local population and did not constitute a separate social group. Their legal situation was not clear; they were not a separate and closed social group. They were treated as people and had the right to private property. "There were even cases of slaves marrying Egyptian women!" - the Egyptologist notes. This means that they were not stigmatised or commonly despised.
Even the Egyptians who acquired building material in the quarries were a highly qualified workforce, as were the craftsmen who processed stone blocks. Slaves, usually prisoners of war, were sent to such teams and probably were treated just like the other workers, the Egyptologist believes.
We also know that slaves worked in Deir el-Medina, a village of workers who were building the tombs in the Valley of the Kings near Luxor in Upper Egypt. They prepared food and washed clothes of craftsmen. In general, they often worked as servants in private homes.
In exceptional cases, foreigners would make staggering careers. This was the case of Mai-per-heri, who lived during the reign of Hatshepsut (15th century BC). Although he was of Nubian origin, possibly a prisoner of war or a hostage brought up in the court of the Pharaoh, he was buried in the Valley of the Kings. "His Egyptian name, which means +Lion on the battlefield+ may suggest that the reason for this distinction could have been wartime merits, maybe even saving the life of the Pharaoh" - says Dr. Ćwiek.
There are many indications that slaves - although there was no single term to name them in Egypt - quickly adjusted to the local culture, learned the language, and took Egyptian wives.
"It is quite puzzling as many documents from the era described the Asians or Nubians negatively, with the worst epithets. But once they were enslaved, they were treated quite well compared to other ancient cultures" - says Dr. Ćwiek.
In a way, even the Egyptian workers and craftsmen were not free. They were not allowed to move freely around the country or change their professions. "But they did not consider themselves prisoners - this was how the state of the world`s first civilization was structured. There was no question of individual freedom of the people. On the contrary - each of the inhabitants of Egypt had a strictly designated social role and usually performed it" - says the Egyptologist.
Peasants, who were the majority in the Egyptian society, usually farmed fields that belonged to either the Pharaoh or high dignitaries. They were forced to pay tributes to them. "But it would be a stretch to say that they were oppressed. It was a system that provided +social insurance+ - in times of drought, the owner of the fields would open granaries to peasants. This way the superiors provided their subjects with a sense of security" - says Dr. Ćwiek.
PAP - Science in Poland
Author: Szymon Zdziebłowski
By James C. Currens, Kentucky Geological Survey
A karst landscape has sinkholes, sinking streams, caves, and springs. Kentucky is one of the most famous karst areas of the world. Much of the state's beautiful scenery, particularly the horse farms of the Inner Bluegrass, results from the development of karst landscape. Karst underlies regions of major economic importance to the state. Many of Kentucky's cities, including Frankfort, Louisville, Lexington, Bowling Green, Elizabethtown, Munfordville, Hopkinsville, Russellville, Princeton, Lawrenceburg, Georgetown, Winchester, Paris, Somerset, Versailles, and Nicholasville, are partly or entirely underlain by karst. Springs and wells in karst areas supply water to thousands of homes. Much of Kentucky's prime farmland is underlain by karst. A substantial portion of the Daniel Boone National Forest, with its important recreational and timber resources, is underlain by karst. Caves also provide recreational opportunities and contain unique ecosystems. Mammoth Cave, with over 350 miles of passages, is the longest surveyed cave in the world. Two other caves in the state are over 30 miles long, and 10 Kentucky caves are among the 50 longest in the United States.
Although maps that show in detail where the karst terrain of Kentucky occurs have never been made, the areas underlain by rocks on which karst can develop have been mapped. The 1:500,000-scale geologic map (Noger, 1988) can be used to estimate the percentage of karst terrain in the state. Ninety-two of Kentucky's 120 counties contain at least some areas of karst. About 40 percent of the state is underlain by rocks with the potential for at least some karst development (recognizable on topographic maps), and 20 percent of the state has well-developed karst features.
The karst of Kentucky occurs in five principal regions, but also in many scattered locations.
The last major karst area lies along the crest of Pine Mountain in southeastern Kentucky, where geologic forces have thrust the limestone from deep beneath the coal field to the surface. No communities occupy this karst area, but it is a significant recreational and ecological resource, and springs draining from it are important water supplies.
Karst terrain affects the lives of many Kentuckians every day. Most people don't realize they are affected because the costs are hidden in the form of higher taxes and increased cost of living. Often enough, the consequences of living in a karst terrain directly affect people's lives. Of vital concern is protection of groundwater resources. For example, many communities in Kentucky were established near karst springs to take advantage of the reliable water supply. Because of pollution, most of these town springs have long since been abandoned as water supplies. Factories and homes built over filled sinkholes may be damaged as the fill is transported out of the sinkhole and the soil cover collapses. Also, structures built in sinkholes are often vulnerable to flood damage.
Flooding in a karst area.
Features of a Karst Landscape
The term "karst" is derived from a Slavic word that means barren, stony ground. It is also the name of a region in modern Slovenia near the border with Italy that is well known for its sinkholes and springs. The name has been adopted by geologists as the term for all such terrain.
A karst landscape most commonly develops on limestone but can develop on several types of rocks, such as dolomite, gypsum, and salt. The karst terrains of Kentucky are mostly on limestone and formed over hundreds of thousands of years. As water moves underground, from hilltops toward a stream through tiny fractures in the limestone bedrock, the rock is slowly dissolved away by weak acids found naturally in rain and soil water.
An aquifer is any body of rock from which important quantities of drinkable water may be produced. Springs are sites where groundwater emerges from an aquifer to become surface water. Springs occur along creeks and rivers where the water table meets the land surface. They also occur where rocks that do not allow water to flow easily, such as shale, underlie or have been faulted against permeable rock. The impermeable rock blocks the flow of the groundwater, forcing it to the surface. Karst springs occur where the groundwater flow has concentrated to dissolve a conduit or cave in soluble rock. The groundwater basin of a karst spring collects drainage from all the sinkholes and sinking streams in its drainage area. The water flowing from each sinkhole joins together underground to form ever-increasing flow in successively larger passages, which discharge at the spring. Karst springs (also known as "cave springs") can have large openings and discharge very large volumes of water. The soil cover, narrow fractures, small conduits, and larger cave passages collectively form a karst aquifer.
A sinkhole is any depression in the surface of the ground into which rainfall is drained. Karst sinkholes form when a fracture in the limestone bedrock is preferentially enlarged. Sinkholes form in two ways. In the first way, the bedrock roof of a cave becomes too thin to support the weight of the bedrock and the soil material above it. The cave roof then collapses, forming a collapse sinkhole. Bedrock collapse is rare, and the least likely way a sinkhole can form, although it is commonly assumed to form all sinkholes. The second way sinkholes form is much more common and much less dramatic. As the rock is dissolved and carried away underground, the soil gently slumps or erodes into a dissolution sinkhole. Once the underlying conduits become large enough, insoluble soil and rock particles are carried away too. Dissolution sinkholes form over long periods of time, with occasional episodes of soil or cover collapse.
All of the dissolved limestone and soil particles eroded from the bedrock to form a sinkhole pass through the sinkhole's "throat" or outlet. The throat of a sinkhole is sometimes visible, but is commonly roofed by soil and broken rock and can be partly or completely filled with rubble. This opening can vary from a few inches in diameter to many feet. Normally, water flows out of the sinkhole throat to a conduit that drains to a spring. When sinkhole throats are totally blocked and little water can flow out, a "sinkhole pond" may form, a common sight in the Pennyroyal. Sinkhole ponds are temporary features and last only as long as the throat is tightly plugged.
Swallow holes are points along streams and in sinkholes where surface flow is lost to underground conduits. Swallow holes range in diameter from a few inches to tens of feet, and some are also cave entrances. Swallow holes are often large enough to allow large objects such as tree limbs and cobble-size stones to be transported underground. This means that waste dumped into sinkholes can easily reach underground streams. It is not uncommon for discarded automobile tires and home appliances to be found deep within caves with flowing streams. Likewise, sewage, paint, motor oil, pesticides, and other pollutants are not filtered from water entering a karst aquifer.
A karst window is a special type of sinkhole that gives us a view, or window, into the karst aquifer. A karst window has a spring on one end, a surface-flowing stream across its bottom, and a swallow hole at the other end. The stream is typically at the top of the water table. Karst windows develop by both dissolution and collapse of the bedrock. Many karst windows originated as collapse sinkholes.
Karst locations are shown on a map of karst in Spencer County.
More information on karst is available on the KGS Web site.
There are two types of physical quantities: scalars and vectors. A scalar needs only a magnitude and a suitable unit, while a vector needs a magnitude, a suitable unit, and a direction. Scalars can be added, subtracted, multiplied, and divided using simple mathematical rules. Vectors are added using the head-to-tail rule or the rectangular-components method, and they can also be subtracted using the head-to-tail rule.
Physical Quantities :-
There are two types of physical quantities, described below.
Scalars: Physical quantities that can be completely specified by their magnitude and a suitable unit are known as scalars. They can be added, subtracted, multiplied, and divided by simple mathematical rules.
Example:- Mass, distance, speed, energy, work, area, volume, temperature, time, money, electric current, etc.
Vectors: Physical quantities that can be completely specified only by their magnitude, a suitable unit, and a direction are known as vectors. They cannot be added, subtracted, or divided by ordinary arithmetic; instead, the methods of vector addition are used for these purposes (see the short sketch at the end of this section).
Example:- Displacement, velocity, acceleration, force, momentum, torque, angular velocity, etc.
Representation of a vector:-
Generally a vector is represented by a boldface letter, e.g. A, or by putting an arrow or a bar above or below the letter. The magnitude of a vector is represented in light-faced italics or by its modulus; for example, |A| is the magnitude of the vector A.
Types Of vector :-
- Unit Vector
- Null Vector
- Resultant Vector
- Negative Vector
- Component Vector
- Position Vector
Unit vector: A vector whose magnitude is one is called a unit vector. It is represented by a letter with a cap (^), for example Â. Any vector can then be written as its magnitude multiplied by a unit vector in its direction.
Null vector: It is a vector having zero magnitude.
Resultant vector: The single vector obtained by adding two or more vectors is called the resultant vector. For example, if three vectors A1, A2, and A3 act on a body at a point O, their vector sum is the resultant vector.
Negative vector: A vector having the same magnitude as a given vector but the opposite direction is called the negative of that vector.
Component vectors: A vector can be resolved into two or more vectors; these vectors are called its component vectors.
Position vector: A vector in a plane or in space that joins a given point to the origin is called a position vector. Its magnitude is equal to the distance between the given point and the origin. Position vectors help us locate the position of a point in the plane or in space. |
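As a worked illustration of the rectangular-components method mentioned above, here is a minimal Python sketch; the three example vectors (their magnitudes, units, and angles) are made up for illustration and are not taken from the text:

```python
import math

def resultant(vectors):
    """Add 2-D vectors given as (magnitude, angle-in-degrees) pairs using the
    rectangular-components method; return the resultant's magnitude and direction."""
    rx = sum(m * math.cos(math.radians(a)) for m, a in vectors)  # sum of x-components
    ry = sum(m * math.sin(math.radians(a)) for m, a in vectors)  # sum of y-components
    magnitude = math.hypot(rx, ry)
    direction = math.degrees(math.atan2(ry, rx))  # measured from the positive x axis
    return magnitude, direction

# Example: three vectors A1, A2, A3 acting on a body at point O (magnitudes in newtons).
print(resultant([(3.0, 0.0), (4.0, 90.0), (2.0, 45.0)]))
```

Resolving each vector onto the x and y axes and summing the components is exactly the rectangular-components method; the resultant's direction falls between the directions of the individual vectors, as the head-to-tail construction would also show.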
Reflection of Electromagnetic Waves by a Conducting Surface:
In view of the way in which signals propagate in waveguides, it is now necessary to consider what happens to electromagnetic waves when they encounter a conducting surface. An electromagnetic plane wave in space is transverse-electromagnetic, or TEM. The electric field, the magnetic field and the direction of propagation are mutually perpendicular. If such a wave were sent straight down a waveguide, it would not propagate in it. This is because the electric field (no matter what its direction) would be short-circuited by the walls, since the walls are assumed to be perfect conductors, and a potential cannot exist across them. What must be found is some method of propagation which does not require an electric field to exist near a wall and simultaneously be parallel to it. This is achieved by sending the wave down the waveguide in a zigzag fashion (see Figure 10-3), bouncing it off the walls and setting up a field that is maximum at or near the center of the guide, and zero at the walls. In this case the walls have nothing to short-circuit, and they do not interfere with the wave pattern set up between them. Thus propagation is not hindered.
Two major consequences of the zigzag propagation are apparent. The first is that the velocity of propagation in a waveguide must be less than in free space, and the second is that waves can no longer be TEM. The second situation arises because propagation by reflection requires not only a normal component but also a component in the direction of propagation (as shown in Figure 10-4) for either the electric or the magnetic field, depending on the way in which waves are set up in the waveguide. This extra component in the direction of propagation means that waves are no longer transverse-electromagnetic, because there is now either an electric or a magnetic additional component in the direction of propagation.
Since there are two different basic methods of propagation, names must be given to the resulting waves to distinguish them from each other. Nomenclature of these modes has always been a perplexing question. The American system labels modes according to the field component that behaves as it did in free space. Modes in which there is no component of electric field in the direction of propagation are called transverse-electric (TE, see Figure 10-5b) modes, and modes with no such component of magnetic field are called transverse-magnetic (TM, see Figure 10-5a). The British and European systems label the modes according to the component that has behavior different from that in free space, thus modes are called H instead of TE and E instead of TM.
Dominant mode of operation:
The natural mode of operation for a waveguide is called the dominant mode. This mode corresponds to the lowest frequency that can be propagated in a given waveguide. In Figure 10-6, a width equal to half a wavelength corresponds to this lowest frequency at which the waveguide will still present the properties discussed below. The mode of operation of a waveguide is further divided into two submodes. They are as follows:
- TEm,n for the transverse electric mode (electric field is perpendicular to the direction of wave propagation)
- TMm,n for the transverse magnetic mode (magnetic field is perpendicular to the direction of wave propagation)
m = number of half-wavelengths across waveguide width (a on Figure 10-6)
n = number of half-wavelengths along the waveguide height (b on Figure 10-6)
Plane waves at a conducting surface:
Consider Figure 10-7, which shows wave-fronts incident on a perfectly conducting plane (for simplicity, reflection is not shown). The waves travel diagonally from left to right, as indicated, and have an angle of incidence θ.
If the actual velocity of the waves is νc, then simple trigonometry shows that the velocity of the wave in a direction parallel to the conducting surface, νg, and the velocity normal to the wall, νn, respectively, are given by

νg = νc sin θ   (10-1)
νn = νc cos θ   (10-2)
As should have been expected, Equations (10-1) and (10-2) show that waves travel forward more slowly in a waveguide than in free space.
Parallel and normal wavelength:
The concept of wavelength has several descriptions or definitions, all of which mean the distance between two successive identical points of the wave, such as two successive crests. It is now necessary to add the phrase in the direction of measurement, because we have so far always considered measurement in the direction of propagation (and this has been left unsaid). There is nothing to stop us from measuring wavelength in any other direction, but there has been no application for this so far. Other practical applications do exist, as in the cutting of corrugated roofing materials at an angle to meet other pieces of corrugated material.
In Figure 10-7, it is seen that the wavelength in the direction of propagation of the wave is shown as λ, being the distance between two consecutive wave crests in this direction. The distance between two consecutive crests in the direction parallel to the conducting plane, or the wavelength in that direction, is λp, and the wavelength at right angles to the surface is λn. Simple calculation again yields

λp = λ / sin θ   (10-3)
λn = λ / cos θ   (10-4)
This shows not only that wavelength depends on the direction in which it is measured, but also that it is greater when measured in some direction other than the direction of propagation.
Any electromagnetic wave has two velocities, the one with which it propagates and the one with which it changes phase. In free space, these are "naturally" the same and are called the velocity of light, νc, where νc is the product of the distance of two successive crests and the number of such crests per second. It is said that the product of the wavelength and frequency of a wave gives its velocity, and

νc = f λ
For Figure 10-7 it was indicated that the velocity of propagation in a direction parallel to the conducting surface is νg = νc sin θ, as given by Equation (10-1). It was also shown that the wavelength in this direction is λp = λ/sin θ, given by Equation (10-3). If the frequency is f, it follows that the velocity (called the phase velocity) with which the wave changes phase in a direction parallel to the conducting surface is given by the product of the two. Thus

νp = f λp = f λ / sin θ = νc / sin θ
where νp = phase velocity
A most surprising result is that there is an apparent velocity, associated with an electromagnetic wave at a boundary, which is greater than either its velocity of propagation in that direction, νg, or its velocity in space, νc. It should be mentioned that the theory of relativity has not been contradicted here, since neither mass, nor energy, nor signals can be sent with this velocity. It is merely the velocity with which the wave changes phase at a plane boundary, not the velocity with which it travels along the boundary. A number of other apparent velocities greater than the velocity of light can be shown to exist. For instance, consider sea waves approaching a beach at an angle, rather than straight in. The interesting phenomenon which accompanies this (it must have been noticed by most people) is that the edge of the wave appears to sweep along the beach much faster than the wave is really traveling; it is the phase velocity that provides this effect. |
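As a quick numerical check of Equations (10-1), (10-3) and the phase-velocity relation above, here is a minimal Python sketch; the 10 GHz frequency and 30° angle of incidence are arbitrary example values, not figures from the text:

```python
import math

C = 3.0e8  # free-space velocity of light, m/s

def waveguide_velocities(freq_hz, theta_deg):
    """Velocities and wavelengths for a wave bouncing off a conducting wall
    at an angle of incidence theta (Equations 10-1, 10-3 and the phase velocity)."""
    theta = math.radians(theta_deg)
    lam = C / freq_hz                # free-space wavelength
    vg = C * math.sin(theta)         # velocity parallel to the wall (10-1)
    lam_p = lam / math.sin(theta)    # wavelength parallel to the wall (10-3)
    vp = freq_hz * lam_p             # phase velocity, equal to C / sin(theta)
    return vg, lam_p, vp

vg, lam_p, vp = waveguide_velocities(10e9, 30.0)
print(vg, lam_p, vp)   # vp exceeds C even though vg is below it
```

Note that νg · νp = νc², so the phase velocity exceeds the free-space velocity by exactly the factor by which the velocity along the wall falls below it.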
Juneteenth: a reminder that change comes slowly
Today is Juneteenth, the commemoration of the actual emancipation of slaves in Texas and other parts of the South on June 18 and 19 in 1865, which came considerably later than the official end to slavery (January 1, 1863). On June 18, Union General Gordon Granger and his troops came to Galveston, Texas, to enforce emancipation. According to legend, Granger stood on the balcony of one of Galveston’s grand houses and read the following:
“The people of Texas are informed that, in accordance with a proclamation from the Executive of the United States, all slaves are free. This involves an absolute equality of personal rights and rights of property between former masters and slaves, and the connection heretofore existing between them becomes that between employer and hired labor. The freedmen are advised to remain quietly at their present homes and work for wages. They are informed that they will not be allowed to collect at military posts and that they will not be supported in idleness either there or elsewhere.“
Interesting choice of words. The emancipated slaves, however limited the real change in their lives, did not “remain quietly” at home but had some considerable celebrations. Their world didn’t shift much as most remained de facto slaves. But there was the promise of something better, even if it would take another hundred years to come.
The naming of the celebration Juneteenth is a bit of linguistic playfulness combining June and nineteenth. |
- There are two phalanges in the thumb, named proximal and distal phalanges.
- There are three phalanges in each of the other fingers, named proximal, middle, and distal phalanges.
- The phalanges of a finger along with the associated metacarpal is called a ray.
- The phalanges articulate with: the metacarpals proximally (at the metacarpophalangeal joints) and with one another at the interphalangeal joints.
- The word phalanges is plural. Singular for phalanges is phalanx (there is no word phalange).
- Each phalanx is a long bone, therefore it has expanded ends. The proximal expanded end is called the base; the distal expanded end is called the head.
- Because there are only two phalanges in the thumb, the thumb has one interphalangeal joint.
- Because the other fingers have three phalanges each, each finger has two interphalangeal joints, each finger having a proximal interphalangeal joint and a distal interphalangeal joint. |
A Level Chemistry Quizzes
A Level Chemistry Quiz Answers - Complete
Elimination Reactions quiz questions and answers, elimination reactions MCQ with answers PDF 232 to solve A Level Chemistry mock tests for online college programs. Solve Halogenoalkanes trivia questions, elimination reactions Multiple Choice Questions (MCQ) for online college degrees. Elimination Reactions Interview Questions PDF: periodic table electronegativity, buffer solutions, alcohols reactions, mole calculations, elimination reactions test prep for free online college classes.
"If ethanol will be used in the elimination reaction of Halogenoalkanes it will produce" MCQ PDF with choices alkenes, alkanes, ketone, and carbonyl for ACT practice test. Practice halogenoalkanes questions and answers to improve problem solving skills for colleges that offer online degrees.
MCQ: If ethanol will be used in the elimination reaction of Halogenoalkanes it will produce
MCQ: The simplest whole-number ratio which is representing 1 molecule is called
MCQ: Alcohols react with oxygen to form
MCQ: An aqueous mixture of sodium ethanoate is a buffer solution in
MCQ: Period 3 element that has the highest electronegativity is |
“What is it about the story of ‘The First Thanksgiving’ that makes it essential to be taught in virtually every grade from preschool through high school?” begins the post “Deconstructing the Myths of ‘The First Thanksgiving’ “ by Judy Dow (Abenaki). This article includes 11 myths about the first Thanksgiving, notes and sources, recommended books about Thanksgiving, and primary sources from a colonialist perspective. Find this post and more under Resources at http://oyate.org.
A beautiful book for all ages is 1621 A New Look At Thanksgiving by Catherine O’Neill Grace and Margaret M. Bruchac with Plimoth Plantation, published by National Geographic. This book has informative photos and drawings, factual information from a native point of view, timelines, and recipes. From the back cover: “In the fall of 1621, English colonists and Wampanoag people feasted together for three days. Join National Geographic and Plimoth Plantation for a new look at the real history behind the event that inspired the myth of The First Thanksgiving.”
Giving Thanks, A Native American Good Morning Message by Chief Jake Swamp is a wonderful book that reminds us about the importance of being grateful for all that Mother Earth has provided for us. |
While we normally think of naturalization as a two step process whereby the alien first declares his intent to become a citizen and then petitions for naturalization, there were exceptions to that procedure.
For example, from 1824 to 1906, aliens who came to the U.S. while under age 18 could effectively declare their intent to become a citizen at the same time they filed their petition for naturalization once they had reached age 21 or more and had lived in the U.S. for five years (three of which as a minor). Let the law speak for itself:
So, to summarize: the alien still had to meet the five year requirement for residency, and three years of that had to be while he was a minor.
Many courts used specific forms for these cases that combined declaration of intent language and petition language in one document, and they made sure to include the word “minor.” Some may say the applicant “arrived as a minor,” while others will have the words “Minor Naturalization” emblazoned across the title or as a watermark.
For more on naturalization, see Naturalization Records and Women and Naturalization, Part I and Part II. |
Posted on August 31, 2011
by Dr. Judy Horrocks
Most police and fire department personnel have heard of autism, but really know very little about the disability. Introducing your child to the local personnel may be very helpful. Children with autism have difficulty with generalization of concepts that they are taught or told. They may recognize a police uniform or car but not really understand the meaning of the symbols. If the local police or fire department are aware of your child, that will be beneficial in any emergency.
Emergencies are stressful and we know that our children behave erratically in stressful situations. How can we make the situation less stressful?
Communication is the key in any emergency. Your child needs to understand and follow instructions. What is the best form for your child to understand language? Often we use pictures or written instructions and keep meaning very literal. Visuals allow more processing time than quick verbal statements. Have some picture communication symbols ready to use in an emergency. Gestures and pointing are typically not very effective for this population. Children diagnosed with an autism spectrum disorder often do not understand body language or figurative language.
If your child begins repeating words or phrases you have just said, they may be trying to process the information. If they are repeating a phrase from a movie or video, this may just let you know your child is anxious. Do not assume all verbal language is communication; repeating a phrase is typically not meant as communication to you. Allow them time to process any verbal information you have provided before adding additional comments. If you feel you have to repeat yourself, then simplify the statement.
Practice simple commands. It is easier to teach “sit down” than “stop.” Children may stop at the command, but for how long? Waiting for the next instruction is not likely to happen! If a child sits down, you will have more time to get to them and/or give the next command. Practice: sit down, come with me, hold my hand, stand up, walk, etc. Be sure to get your child's attention before giving the command. Then wait for understanding before repeating, and give adequate processing time for your child to respond.
Create routines. Children on the autism spectrum tend to like routines and find routines comfortable and calming. Practicing some simple routines regarding leaving your house and staying near the curb to wait for help would be beneficial in an actual emergency. If you find yourself in an actual emergency, try to use commands that follow familiar routines to keep your child calm.
Be aware of sensory issues. Children with autism may have difficulty with lights (visual defensiveness), noise (auditory defensiveness), or touch (tactile defensiveness). Do not interpret hands over their ears or lack of eye contact as a sign of disrespect. If your child is lost, ask the police to avoid use of the siren or flashing lights when searching for your child; these may actually cause physical discomfort. Let them know that your child may not come to them if called, and provide them with some simple commands that your child would understand.
Since their sensory systems are impaired and communication is difficult, your child may not recognize injury. People with autism may not ask for help or show any indications of pain and may be fearful of your touch. Avoid touching the child; if necessary, use a firm grip. Repetitive behavior may not need to be stopped unless it is self-injurious or risks injury to others.
Be aware that change produces anxiety, and a high level of anxiety impairs thinking and increases sensitivity. Certain times of the year have more change, such as breaks from educational facilities, changes to and from daylight saving time, etc. These times of year are more likely to create emergency situations. Children may require closer supervision during these times of year.
Hot and humid: Using minerals from ancient soils, ETH researchers are reconstructing the climate that prevailed on Earth some 55 million years ago. Their findings will help them to better assess how our climate might look in the future.
Between 57 and 55 million years ago, the geological epoch known as the Paleocene ended and gave way to the Eocene. At that time, the atmosphere was essentially flooded by the greenhouse gas carbon dioxide, with concentration levels reaching 1,400 ppm to 4,000 ppm. So it’s not hard to imagine that temperatures on Earth must have resembled those of a sauna. It was hot and humid, and the ice on the polar caps had completely disappeared.
The climate in that era provides researchers with an indication as to how today’s climate might develop. While pre-industrial levels of atmospheric CO2 stood at 280 ppm, today’s measure 412 ppm. Climate scientists believe that CO2 emissions generated by human activity could drive this figure up to 1,000 ppm by the end of the century.
Using tiny siderite minerals in soil samples taken from former swamps, a group of researchers from ETH Zurich, Pennsylvania State University and CASP in Cambridge (UK) reconstructed the climate that prevailed at the end of the Paleocene and in the early Eocene. Their study has just been published in Nature Geoscience.
The siderite minerals formed in an oxygen-free soil environment that developed under dense vegetation in swamps, which were abundant along the hot and humid coastlines in the Paleocene and Eocene.
To reconstruct the climatic conditions from the equator to the polar regions, the researchers studied siderites from 13 different sites. These were all located in the northern hemisphere, covering all geographical latitudes from the tropics to the Arctic.
"Our reconstruction of the climate based on the siderite samples shows that a hot atmosphere also comes with high levels of moisture," says lead author Joep van Dijk, who completed his doctorate in ETH Professor Stefano Bernasconi’s group at the Geological Institute from 2015 to 2018.
Accordingly, between 57 and 55 million years ago, the mean annual air temperature at the equator where Colombia lies today was around 41 °C. In Arctic Siberia, the average summer temperature was 23 °C.
Using their siderite "hygrometer", the researchers also demonstrated that the global moisture content in the atmosphere, or the specific humidity, was much higher in the Paleocene and Eocene eras than it is today. In addition, water vapour remained in the air for longer because specific humidity increased at a greater rate than evaporation and precipitation. However, the increase in specific humidity was not the same everywhere.
Since they had access to siderite from all latitudes, the researchers were also able to study the spatial pattern of the specific humidity. They found that the tropics and higher latitudes would have had very high humidity levels.
The researchers attribute this phenomenon to water vapour that was transported to these zones from the subtropics. Specific humidity rose the least in the subtropics. While evaporation increased, precipitation decreased. This resulted in a higher level of atmospheric water vapour, which ultimately reached the poles and the equator. And the atmospheric vapour carried heat along with it.
Climate scientists still observe the flow of water vapour and heat from the subtropics to the tropics today. "Latent heat transport was likely to have been even greater during the Eocene," van Dijk says. "And the increase in the transport of heat to high latitudes may well have been conducive to the intensification of warming in the polar regions," he adds.
Not enough time to adapt
These new findings suggest that today’s global warming goes hand in hand with increased transport of moisture, and by extension heat, in the atmosphere. "Atmospheric moisture transport is a key process that reinforces warming of the polar regions," van Dijk explains.
"Although the CO2 content in the atmosphere was much higher back then than it is today, the increase in these values took place over millions of years," he points out. "Things are different today. Since industrialisation began, humans have more than doubled the level of atmospheric CO2 over a period of just 200 years," he explains. In the past, animals and plants had much more time to adapt to the changing climatic conditions. "They simply can’t keep up with today’s rapid development," van Dijk says.
ETH researchers Joep van Dijk and Alvaro Fernandez (right) in search of siderites in Argentina.
Once swampy soil, today semi-desert: siderite deposits in Argentina.
ETH’s geologists in search of traces: Stefano Bernasconi (r.) and Alvaro Fernandez search for siderites near Los Angeles (USA).
Siderite-bearing soil horizon in Alberhill, California, with petrified roots.
Strenuous search for siderite crystals
Finding the siderites was not easy. For one thing, the minerals are tiny, plus they occur solely in fossil swamps, which today are often found only several kilometres below the Earth’s surface. This made it difficult or even impossible for the researchers to dig up siderites themselves. "We made several expeditions to sites where we believed siderites might occur but we found them at only one of those locations," van Dijk says.
Fortunately, one of the study’s co-authors - Tim White, an American from Pennsylvania State University - owns the world’s largest collection of siderite.
Van Dijk J, Alvarez F, Bernasconi SM, et al.: Spatial pattern of super-greenhouse warmth controlled by elevated specific humidity. Nature Geoscience, published online on 26 October 2020. DOI: 10.1038/s41561-020-00648-2 |
What Is Scumbling In Art?
Scumbling refers to a painting technique which involves applying a thin layer of paint with a dry brush and a loose hand over an existing layer. The idea is to allow parts of the already existing paint below to remain exposed. In most cases, scumbling is used over dried paint, but you can also use it over wet paint. You just need to be careful with the colors blending together if you are scumbling over wet paint. It is most commonly thought of as an oil painting technique, but it can also be used with acrylic or watercolor paints.
When To Use Scumbling In Art?
Here are some of the common uses of scumbling:
- To add texture to the surface.
- To create a sense of atmosphere and depth.
- To break up a background area to make it less monotonous.
- To build up highlights on top of a dark background.
- To make slight adjustments to color shapes.
- To soften the transition from one color to the next.
- To create a broken color effect which takes advantage of optical color mixing.
How To Use Scumbling?
To use the scumbling technique, pick up a small amount of paint straight from a tube with a dry brush and apply it loosely to the canvas. You do not want the paint to blend with the existing colors or to be so thick that the colors below are completely covered; you want the paint to scumble and break on top. You should also vary the strokes you use so that it does not look repetitive.
Tip: When scumbling color on top, use this as an opportunity to keep building up a sense of form and structure. Allow your brush to follow the contour of the subject.
General Tips For Scumbling: In general, it is more effective to scumble light colors on top of darker colors. If you are using watercolors, then instead of scumbling white paint on top for your highlights, you should just leave areas of the paper exposed. The white paper is far more effective than white paint as your lightest light. But you could use scumbling to recover any white areas you accidentally cover up. You should avoid using any additional mediums or solvents when scumbling. In most cases, paint straight from the tube is the most suitable. Opaque color is often used for this technique, rather than transparent color.
Poetry Music Activities - Song Lyrics Assignments and Presentations Bundle
This poetry music activities, assignments, and presentations bundle will engage all your students in poetry analysis using song lyrics! Students will love this modern bundle of music-inspired poetry resources. Included are eye-catching presentations, poetry song lyrics assignments and projects, engaging activities, ready-to-print worksheets, and much more!
By purchasing this bundle, you are saving more than 30% compared to purchasing these items separately!
Teach figurative language in song lyrics with this ready-to-use presentation. This slideshow uses current music lyrics to teach literary devices used in poetry like metaphor, simile, personification, alliteration, hyperbole, and more. This makes an excellent poetry introduction presentation to familiarize students with the common figurative language they will encounter in the poems they read. It will also engage and hook them into your poetry unit with the modern music examples provided! Each slide includes a definition of the literary term as well as an example taken from modern lyrics.
Some of the musical artists include Taylor Swift, Drake, Rihanna, John Legend, Dynamite, Billie Eilish, and many more, so you can be confident your students will be totally engaged.
This final poetry project will hook all your students in! Students will choose a song and analyze the lyrics for content, theme, and literary devices following the detailed project instructions. With music as the topic, you know your students will be instantly engaged, and this project outlines everything they need to know to get started. Included:
- PowerPoint presentation slides to introduce the project to students. The presentation includes discussion questions, information about the two parts of the project, and rubric information to show what a strong response includes.
- Detailed student instructions for completing both sections of the assignment. The first section has students analyze their interpretation of the song lyrics (summary, title analysis, and theme), while the second section has them interpret the use of literary devices and figures of speech within the lyrics.
- Good copy graphic organizers for both sections
- An easy-to-use teacher rubric that makes grading the project quick and easy.
Students will label figurative language in song lyrics with these fun assignments. Students will read excerpts from popular songs and label figurative language examples of metaphor, simile, hyperbole, personification, alliteration, pun, oxymoron, and more.
- Three visually appealing assignment worksheet pages that include lines taken from popular song lyrics that contain literary devices. Students are required to label the figurative language used and provide an explanation for each. With lyrics from Taylor Swift, Beyonce, Harry Styles, Miley Cyrus, Rihanna, Camila Cabello, Sam Smith, and many others your students will be totally engaged.
- Detailed teacher answer keys with a detailed explanation for each type of figurative language used that will make grading or review quick and easy
Using rap song lyrics to teach poetry will help your students see how the two genres have a great deal in common. From sophisticated rhyme and rhythm, literary devices, lyricism, storytelling, theme, social commentary, and emotional impact, poetry and rap have many similarities. Students will use what they learn to compare Langston Hughes' poem "Mother to Son" with Tupac Shakur's song "Dear Mama" and then write their own rap lyrics.
- A 26-slide PowerPoint presentation that uses rap song lyrics to teach poetry. The slides introduce the history of rap, discuss the 6 main commonalities between rap and poetry (rhythm and rhyme, literary devices, lyricism, storytelling, theme, and emotional impact), provide rap lyrics examples for each, and include video discussion prompts. (YouTube is required for opening videos in slideshow).
- A Tupac Hughes poetry rap comparison activity where students answer questions comparing Tupac Shakur's song "Dear Mama" with Langston Hughes' poem "Mother to Son." The poem and song share many elements and themes.
- A detailed answer key for the poetry rap comparison activity questions that make for easy grading or review.
- A creative writing assignment called "Writing Rap Rhymes" where students will write their own rap lyrics according to what they learned in the lesson.
Students will analyze song lyrics as poetry by examining the lyrics of three songs and responding to comprehension and analysis questions. Included are three poetry song analysis assignments, detailed teacher answer keys, and presentation slides to review answers with students.
- Three poetry song analysis assignments that have students examine song lyrics as poetry with three popular songs: Imagine by John Lennon, Lose you to Love Me by Selena Gomez, and Midnight Rain by Taylor Swift. Each assignment includes five analysis questions that require students to refer back to the song lyrics to show text evidence.
- Detailed teacher answer keys for the poetry song analysis assignments. The answers include text evidence and are useful for grading or class review.
- A 16-slide PowerPoint presentation that can be used to review answers with the class.
The Soundtrack Of My Life assignment allows students to make connections between their own lives and the lyrics of the songs they love and to analyze song lyrics as poetry. Even those students who find poetry challenging will be able to connect to this assignment.
- A PowerPoint presentation that will introduce students to each element of the poetry song lyrics assignment. The presentation includes discussion questions, information about the two parts of the project, and rubric information to show what a strong response includes.
- Detailed student instructions for completing both sections of the assignment. The first section has students make text-to-self connections between the lyrics of three songs and their own life (to design the soundtrack of their life). The second section has students examine one of the songs in more detail using poetry analysis techniques. They will summarize the poem, examine the theme, locate literary devices, and consider who might connect to these lyrics.
- A brainstorming page with prompts that will help students choose the three songs they want to highlight on their soundtrack
- Two good copy graphic organizers for both sections of the assignment where they will share their text-to-self connections and poetry song analysis
- An easy-to-use teacher rubric that makes grading the poetry song lyrics assignment quick and easy
>>>Please note the song lyrics are not included in the purchase for copyright reasons, but links are provided to the lyrics online***
⭐️⭐️⭐️⭐️⭐️ This unit was AMAZING!! I could not believe how many high-quality lessons were included for such a great price. My students groaned when I told them we would be learning poetry, but by the end were disappointed the unit was ending! I can't recommend it enough.
⭐️⭐️⭐️⭐️⭐️ I'm teaching an essentials English course for 3 weeks this summer. It's been difficult finding things that keep the students interested since the class is 2 hours and 15 minutes long. They've been engaged the entire time that I've been teaching with these resources! The PowerPoint is easy for them to understand, and the worksheets are also very clean with clear directions. Letting the kids use music with the literary devices has definitely helped to break up the monotony of such a long class. Love it!
⭐️⭐️⭐️⭐️⭐️ This is one of my all-time favorite purchases! This has been so effective and engaging for my students! Thank you for the excellent quality as well as top-notch song selections!
>>> All of the resources in this bundle are also included in my Poetry Resource Bundle. Click here to learn more.
Pair this poetry writing booklet with our poetry annotation guide:
© Presto Plans
➡️ Want 10 free ELA resources sent to your inbox? Click here! |
By Casey Frye, CCNN Writer
You know what I realized today? Our faces are basically made up of two eyes, two cheeks, one mouth, and a nose. Pretty simple stuff, right? Yet tiny little changes in these features make us look totally different from one another.
If you think about how many people you actually know – like family, friends, teachers, peers, and even celebrities – it’s incredible that we can tell individuals apart at all! In fact, the way the brain recognizes faces has been a mystery to scientists. Now, however, a recent study has found some clues that hint at how it’s possible.
In order to run their study, researchers used a rare group of participants: 11 children who were born with cataracts – or cloudy areas on the eye’s lens that block vision – and got corrective surgery a few years after birth. These children could see just fine, except they had a bit of trouble with distinguishing faces.
That’s not to say the children didn’t know what a face looked like. When they were presented with images of a face and an object – such as a house – the participants could point out which was which easily. In fact, when the volunteers were shown two pictures of an identical person with only one difference, such as eye color, they could identify what changed. When it came time to view pictures of two different people, however, the children had a hard time telling one person from another.
This is really interesting because normally, humans can identify the differences in less than a second. “We know they are able to categorize faces and other objects, and even distinguish between two faces, but only if they see them from the same view,” says Brigitte Röder, professor of neuropsychology at the University of Hamburg in Germany. “If the perspective changes, or lighting of pictures changes, or the emotional expression, they have a hard time.” Wow, can you imagine not recognizing a friend if you saw them from the side, or if they had a smile on their face?
After Röder and her research team scanned the children’s brains with an electroencephalogram (EEG) – which measures brain waves – the scientists observed some pretty interesting data. Usually, there is a special brain wave called N170 that shows up only after someone has seen a face. In the participants, though, the N170 popped up with any visual stimulus, whether it was a face or a house.
With this data in hand, the researchers are now suggesting there is a limited amount of time known as the “sensitive period.” In this short window of time, an individual’s brain only has two months to learn how to recognize faces! And it’s not just looking at two dots and a smile. In fact, the brain has to acquire a fine set of skills to distinguish tiny differences from one face to another.
“Just two months seems to permanently change the brain’s response to faces, and cause permanent impairments in some face processing skills,” said Cathy Mondloch, professor of psychology at Brock University in Ontario who conducted a similar study published this month. |
What is atopic dermatitis? Atopic dermatitis, often called eczema, is a chronic (long-lasting) disease that causes the skin to become inflamed and irritated, making it extremely itchy. Scratching leads to redness, swelling, cracking, “weeping” of clear fluid, crusting, and scaling. In most cases, there are times when the disease is worse, called flares, followed by times when the skin improves or clears up entirely, called remissions. Atopic dermatitis is a common condition, and anyone can get the disease. However, it usually begins in childhood. Atopic dermatitis cannot be spread from person to person. No one knows what causes atopic dermatitis. Depending on
What is rosacea? Rosacea (ro-ZAY-she-ah) is a long-term skin condition that causes reddened skin and a rash, usually on the nose and cheeks. It may also cause eye problems. |
The Milky Way is the galaxy that contains the Solar System. The name describes the galaxy’s appearance from Earth: a hazy band of light seen in the night sky formed from stars that cannot be individually distinguished by the naked eye.
The term Milky Way is a translation of the Latin via lactea, from the Greek γαλαξίας κύκλος (galaxías kýklos, “milky circle”). From Earth, the Milky Way appears as a band because its disk-shaped structure is viewed from within. Galileo Galilei first resolved the band of light into individual stars with his telescope in 1610.
Until the early 1920s, most astronomers thought that the Milky Way contained all the stars in the Universe. Following the 1920 Great Debate between the astronomers Harlow Shapley and Heber Curtis, observations by Edwin Hubble showed that the Milky Way is just one of many galaxies. |
While reading a news article on your favorite athletic shoes, you are surprised to learn the company uses child labor in Pakistan. Living in the United States, you may find it hard to imagine children working in factories.
What is child labor? According to the International Labor Organization, child labor is “work that deprives children of their childhood, their potential and their dignity, and that is harmful to physical and mental development” (2021). The exploitation of child labor continues to be an enormous human rights issue in much of the developing world. Please review the following sites before beginning the assignment:
- International Labor Organization Statistics on Child Labor
- Human Rights Watch
Focus your discussion on the following:
- What are some aspects of globalization and capitalism that have contributed to the economic abuse of children in developing countries?
- In your opinion and based on your research, what can be done to end this problem? |
Teaching food preparation - How to make fried eggs?
Food preparation flashcards
add, bake, barbecue, mix, boil, break, chop, steam, cut, fry, roast, beat, peel, grate, sauté, slice, simmer, pour, combine, stir
Introduce food preparation vocabulary to students and ask random questions: Who cooks your food? Can you cook? Have you ever cooked before? How do you make baked potatoes? Think about any food students know how to prepare, for example baked potatoes. Ask students what the steps are to prepare baked potatoes and create a short story on the board, following the correct order.
For more advanced levels, students have to create their own short story making food and demonstrating their recipes later to the class.
Food preparation vocabulary; spelling; word-picture association; word-picture recognition; sentence structure; grammar; reading; conjunctions; adverbs of frequency; create recipes
Fly Swats - Much like the traditional whiteboard game fly swats, several flashcards are stuck against the whiteboard. Divide students into 2 or 3 teams, giving one player from each team a “fly swatter” (very fun with a real fly swatter, but rolled paper will work). The teacher then calls out a word and the first student to “swat” the card wins a point for their team. Then they get to call out the next word. Advanced classes can listen for the word used in a sentence and make up sentences with the new word.
1.3 billion years ago, two orbiting massive black holes, circling each other at 250 times a second, collided in a violent, universe-rippling explosion that sent waves of energy throughout the cosmos. In its wake, a new supermassive black hole formed over 60 times bigger than our Sun.
Fast forward to September 2015, gravitational waves from this ancient cosmic event finally struck Earth. Luckily, the gravitational waves weakened over such a great distance. But what if we weren’t so lucky? If a couple of black holes in our solar system collided, could we survive?
What would happen to Earth if we got hit by massive gravitational waves? What causes these waves? How can we detect them?
In 1916, Albert Einstein made waves himself with his groundbreaking theory of general relativity. He had already shown, with the famous formula E=mc2 (energy equals mass times the speed of light squared), that energy and mass are interchangeable; general relativity added that space, or ‘spacetime,’ curves in relation to the energy and momentum of whatever matter and radiation are present.
With great foresight, Einstein inferred that the collision of black holes or other massive stellar objects creates distortions in gravity which are pushed out in all directions. Thanks to incredible developments in precision measurement, scientists at the Laser Interferometer Gravitational-Wave Observatory, or LIGO for short, in Washington and Louisiana in the U.S., were able to detect and measure the first ever gravitational waves here on Earth in 2015.
This was a historic moment in science, as it was the first definitive proof of Einstein’s theory of relativity. Using laser interferometry, observatories can detect a change of less than one ten-thousandth the diameter of a proton. That’s hundreds of trillions of times smaller than the width of a human hair. Since then, LIGO has measured 50 detections of gravitational waves.
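To get a feel for that scale, here is a minimal back-of-the-envelope sketch; the proton diameter and hair width below are rough assumed values, not figures from LIGO:

```python
# Rough order-of-magnitude comparison of LIGO's sensitivity with a human hair.
# Both sizes are assumed typical values, not quantities taken from the article.
proton_diameter_m = 1.7e-15      # approximate diameter of a proton, metres
human_hair_width_m = 7e-5        # approximate width of a human hair, metres

detectable_change_m = proton_diameter_m / 10_000   # "ten-thousandth the diameter of a proton"
ratio = human_hair_width_m / detectable_change_m

print(f"Detectable change: about {detectable_change_m:.1e} m")
print(f"A human hair is roughly {ratio:.0e} times wider than that")
```

The ratio works out to a few times 10^14, which is why the comparison above is phrased in hundreds of trillions rather than millions.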
So what exactly is causing these waves and could they be catastrophic?
There are many sources of gravitational waves, including the collision of black holes, the rotation of asymmetrical neutron stars, supernovae or even remnants of gravitational radiation caused by the Big Bang. These waves travel at the speed of gravity, equal to the speed of light, and emanate outward in all directions. Like a rock being thrown in a pond, the ripples it creates dissipate over great distances and become smaller and smaller.
Luckily, in our pocket of the universe, we are over 400 million light years away from any orbiting black holes. We’re generally safe, but if these black holes happened to be in our solar system, the implications are much more dire.
When gravitational waves pass through a planet, one side is compressed as the other expands, kind of like squeezing a stress ball. Oh yeah, I could use one of those right now. As a result, time and space itself are stretched causing a slight wobble.
But if we were closer to this violent event and the waves were much bigger, this impact could potentially tear our planet apart, triggering powerful continent-splitting earthquakes, volcanic eruptions and epic storms. Earth wouldn’t really be a habitable place anymore, except for maybe extremophiles like bacteria that thrive in hydrothermal vents.
Let’s imagine our Sun was a neutron star of an imperfect, non-spherical shape, sending gravitational ripples outward as it spins. Earth would likely look more like Io, one of Jupiter’s moons, which is put under great gravitational pressure by Jupiter and, as a result, is one of the most volcanically active moons in the Solar System.
Our landscape would be covered in lava and volcanic fallout with an atmosphere made up of toxic gases like hydrogen sulfide. This would cause massive global warming and intense storms. Constant tsunamis, tornados, and well, you get the idea.
Climate chaos. We can count our blessings we are nowhere near any massive objects shooting out gravitational waves, but thankfully we can still measure them and learn more about the complexities of our universe.
Even though we get hit by gravitational waves, they are generally so small we can’t even feel any impact. But on the flipside, what would it be like if we suddenly lost our gravity?
- “Ask Ethan: Could Gravitational Waves Ever Cause Damage On Earth?”. 2020. Medium.
- “What Are Gravitational Waves?”. 2020. LIGO Lab | Caltech.
- “Gravitational Waves Detected 100 Years After Einstein’s Prediction”. 2020. LIGO Lab | Caltech.
- “Gravitational Waves Detected From Neutron-Star Crashes: The Discovery Explained”. Choi, Charles. 2017. space.com.
- “Gravitational Waves Detected, Confirming Einstein’S Theory”. 2020. nytimes.com.
- “Observation of Gravitational Waves from a Binary Black Hole Merger” 2020. journals.aps.org. |
The Earth’s climate has always undergone changes since the pre-industrial era and continues to do so nowadays.
Scientists have determined that there have always been gradual changes in the Earth’s climate, but the ones that have taken place in recent years have been very dramatic.
Climate change is a variation in the average temperature of the planet, which usually takes thousands of years.
Which gases cause climate change?
Carbon dioxide (CO2) is the most abundant greenhouse gas emitted by human activities
- It is produced by the combustion of carbon-based fuels (coal, gasoline, diesel, and other petroleum products).
- Deforestation causes an increase in CO2, since trees capture it as part of their photosynthesis and produce O2.
Methane (CH4) has an impact 21 times greater than CO2
- This gas is produced by ovens and dryers, forest fires, animal farming waste (for example from cows), rice plantations, landfills and waste waters.
Chlorofluorocarbons (CFCs) are produced entirely by humans
- It is present in refrigeration and air conditioning systems, evaporation of industrial solvents and production of plastic foam.
- It remains in the atmosphere between 60 and 400 years.
Nitrous oxide (N2O)
- It is produced by power plants that use coal, car mufflers, animal waste or nitrate contaminated landfills. Another source is the degradation of nitrate-based fertilizers on the soil.
Other gases: water vapor.
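As an illustration of how these relative impact factors are used in practice, here is a minimal Python sketch that converts emissions of different gases into CO2 equivalents. The factor of 21 for methane comes from the text above; the nitrous oxide factor and the example tonnages are assumed values that vary between sources:

```python
# Global warming potentials (impact relative to CO2 over 100 years).
# CH4 = 21 is taken from the text; the N2O value is an assumed, commonly cited figure.
GWP = {"CO2": 1, "CH4": 21, "N2O": 310}

def co2_equivalent(emissions_tonnes):
    """Convert {gas: tonnes emitted} into total tonnes of CO2-equivalent."""
    return sum(tonnes * GWP[gas] for gas, tonnes in emissions_tonnes.items())

# Example: a hypothetical facility's yearly emissions, in tonnes.
yearly = {"CO2": 1200.0, "CH4": 15.0, "N2O": 0.8}
print(co2_equivalent(yearly))   # 1200 + 15*21 + 0.8*310 = 1763 tonnes CO2e
```

Expressing everything in CO2 equivalents is what makes it possible to compare, say, a small methane leak with a much larger volume of CO2 emissions on a single scale.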
How does climate change affect us?
Climate change may cause global temperatures to rise between 1.8°C and 4° C by the end of this century.
Even if all greenhouse gas emissions suddenly stop - something that is not going to happen - the inertia of the Earth’s climate system is so great that global warming will continue for several decades due to the volume of emissions that have already been released into the atmosphere.
Climate change will cause:
- Increased temperature of air, land and oceans.
- Heavier storms, cyclones and hurricanes.
- Greater floods and more intense droughts.
- Precipitation patterns and increasingly unpredictable winds.
Climate change and tourism
Despite all the benefits associated with tourism, which is one of the best ways to distribute wealth among populations, this sector is one of the major contributors to the greenhouse effect.
The share of these greenhouse gas emissions produced by tourist activities (including air travel) is roughly equivalent to the sector’s contribution to the world economy, which is estimated at 5% according to the World Tourism Organization.
Moreover, the tourism sector is one of the economic activities most dependent on climate, because the weather determines seasonal demand in tourism and influences operating costs such as heating or cooling, irrigation, and water and food supply, among others.
It is expected that climate change impacts:
- Economic expenses primarily on infrastructure, especially in the most vulnerable areas such as coastal areas.
- Schedule changes negatively affect tourism activities.
In addition, climate change will cause some environmental conditions that may scare away tourists:
- Forest fires.
- Contagious diseases.
- Extreme phenomenon (tropical cyclones).
- Insect pests or waterborne (jellyfish or algae blooms).
Therefore, companies linked to tourism activities are challenged to implement additional measures to be prepared for emergency situations, which will raise the costs of insurance, back-up systems for water and electricity supply, and evacuation, and may disrupt business operations.
These are the two most popular vacation activities being affected currently:
Beach tourism: erosion from intense storms and the proliferation of algae and jellyfish due to higher sea temperatures.
Winter sports: ski resorts have had to deal with a lack of snow and shorter seasons. Devastating hurricanes, cyclones, floods, and droughts, often accompanied by violent winds, have also become more frequent in recent years.
Climate change and wildlife
Nature, which is one of the main reasons why tourists visit certain destinations, is affected by the increase in temperature.
Climate change causes the loss of wildlife habitat for many organisms and changes the migration patterns of some species of whales, birds, and butterflies, among others. Affected areas with fewer species are therefore expected to become less attractive to tourists.
Small islands and low-lying coastal areas are at higher risk from sea level rise caused by the melting of the polar ice caps.
Climate change and Monteverde
Research over the last 40 years in Monteverde has shown that the area receives 38% more rain and that the number of dry days has increased by 300%. The increase in temperature has caused the clouds to remain at higher elevations and has reduced the mist that has always characterized the mountain forests.
This situation has caused:
- More damaged roads.
- Impact on public health.
- Affected agricultural activities.
- Less clouds and more blue skies.
- Extinction of some species.
- Altitudinal migration of some species: some species of hummingbirds have moved to higher elevations. Populations of some amphibians and reptiles that are sensitive to the decrease in cloudy days, especially lizards and frogs, have also declined.
- Changes in precipitation, temperature, cloud cover and light can affect the production of nectar of some plant species, which can affect the abundance and distribution of hummingbirds.
Climate change and marine organisms
Marine organisms are also affected by climate change, since the increase in temperature and CO2 concentration changes wind patterns and brings stronger hurricanes and waterspouts. In addition, marine currents and acidity are altered, changing the supply of nutrients and the food chain.
The migration and distribution of marine species and organisms are altered, and studies have detected fewer boreal species.
There is a reduction in phytoplankton, zooplankton, fish and algae. Organisms attached to the seabed (benthic) are the most affected.
Most of the world’s coral reefs will die with an increase of only 3°C in sea temperature, and the multitude of colorful fish and sea creatures living in the reefs would also disappear. Half of the Caribbean’s coral reefs, for example, have disappeared since 2005 due to coral bleaching.
Climate change and global initiatives
Due to the critical situation the planet is in, 196 countries are party to the UN Framework Convention on Climate Change. The most recent conference was held in December 2014; its main achievement was ending the division between developed and developing countries that had existed since 1992, when countries’ obligations depended on their level of development.
The Lima agreement imposes obligations on countries regardless of whether they are developed or not.
The responsibilities of the countries are differentiated according to the respective capabilities and national circumstances.
Climate change in Costa Rica
The contribution of Costa Rica in the production of greenhouse gases is small compared to giants like the US, China, Japan and India, but:
- In Costa Rica, 70% of greenhouse gases come from private transport and cargo: there are about 1 million vehicles in the country.
- MINAET created the Carbon Neutral Program (Programa País Carbono Neutralidad).
Costa Rica aims to achieve carbon neutrality by 2021 through:
- GHG Mitigation
- Technology Transfer and Capacity Building
- System of precise, reliable and measurable metrics (MRV)
- Public awareness, creating culture and changing consumer habits.
- Adapting to climate change to reduce the vulnerability of key sectors and regions.
The positive side: Financing the fight against climate change
International cooperation funds projects aimed at preserving the environment while also pursuing economic development. For example, the French Development Agency (AFD) has funded the following projects:
- Ethiopia: Wind farms.
- Madagascar: Forest management.
- Colombia: Clean Urban Transport.
- Indonesia and Vietnam: National plans on climate.
Organizations like the World Bank and others have begun to use “green bonds” or “climate bonds”. These bonds are similar to traditional ones, with the difference that their proceeds are tied to investments that contribute to sustainable development and climate mitigation. However, measures to ensure certification still need to be developed, since this remains a weak spot.
The downside: The myth of zero emissions
“Zero Emissions” is one of the latest solutions that a group of scientists has proposed to counter the emissions from burning coal, oil, and gas that are heating up the planet.
However, this undeveloped technology is controversial because it suggests that the planet can continue to use fossil fuels on a large scale, as long as these emissions are offset.
It is based on bio-energy with carbon capture and storage (BECCS), which entails planting a huge amount of grass and trees, burning the biomass to generate electricity, capturing the CO2 that is emitted, and pumping it into geological reservoirs underground.
However, it may present the following problems:
- Gas leaks with serious environmental and social consequences.
- Appropriation of land for crops from poor people, similar to the way it has happened with biofuels.
- Using this methodology would require allocating around 218-990 million acres to plantations of the foxtail palm in order to capture one billion tons of CO2.
- High production of nitrous oxide from fertilizers would exacerbate climate change.
These proposals draw attention because of the lucrative business they involve: the production of more than 67 billion barrels of oil, which is three times the volume of proven oil reserves in the US, according to data from the US Department of Energy.
This fruit-themed activity will help your students connect repeated addition for 5's and 6's in order to see how multiplication works. It serves as a great visual for budding mathematicians, and kids will love adding up the different groups of fruit!
This worksheet provides students with the opportunity to work with groups and explore arrays. Students will create sentence frames and repeated addition equations based on pictures before writing their own creative story problems.
Early understanding of multiplication allows second graders to thrive as they progress through higher grade math lessons. Visual clues and step-by-step instructions give students second grade multiplication help to teach them how to not only go through the motions of multiplication, but also understand why what they are doing makes sense. Students interested in learning more about second grade multiplication may benefit from our third grade multiplication resources. |
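For readers who want to see the repeated-addition idea from the activities above spelled out, here is a minimal sketch in Python. The function name and the choice of 5s and 6s are purely illustrative and are not part of any worksheet.

```python
# Repeated addition versus multiplication: adding a group size over and over
# gives the same result as multiplying the group size by the number of groups.

def repeated_addition(group_size, groups):
    total = 0
    for _ in range(groups):
        total += group_size
    return total

for group_size in (5, 6):
    for groups in range(1, 5):
        as_sum = " + ".join([str(group_size)] * groups)
        assert repeated_addition(group_size, groups) == group_size * groups
        print(f"{as_sum} = {group_size} x {groups} = {group_size * groups}")
```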
Vitamin A and Measles
Back in the 1970s, Alfred Sommer, MD, MHS, began working on community interventions in Indonesia where children who were deficient in vitamin A were given a supplement to help stave off vision problems. At the time, the evidence was clear that vitamin A played an important role in vision, especially night vision. Children who were deficient in vitamin A just didn’t see very well in dark settings compared to children who were not deficient. As Dr. Sommer and his colleagues conducted the trial, they also made a very interesting observation.
Dr. Sommer and colleagues observed that children who were being given vitamin A supplements and those who were not deficient seemed to have better outcomes if they became ill with infectious diseases like measles or diarrhea. No, they were not immune to those diseases, but they fared better. Vitamin A deficient children died at a higher rate, it seemed. Something was going on.
To verify what was going on and make sure that it wasn’t some biased observation on their part, Dr. Sommer and colleagues conducted some larger clinical trials in the 1980s and early 1990s. They found that:
“Their work showed that ensuring adequate vitamin A intake can mitigate the effects of common diseases such as measles and diarrhea; reduce child mortality in at-risk populations by 23 to 34 percent to avert up to one million deaths a year; and prevent as many as 400,000 cases of childhood blindness each year.”
As a result, the World Health Organization made vitamin A supplementation for children all around the world a top priority. Further studies showed that a supplement costing just 2 or 3 cents per dose could prevent thousands of dollars in lost productivity by saving the lives of children.
Today, vitamin A deficiency is still a problem, especially in developing nations. And it’s not just vitamin A. Children in developing nations are also deficient in other nutrients, leaving them vulnerable to all sorts of complications. (There is malnutrition here in the United States, though it is more a result of empty calories and inner-city food deserts than structural problems at the national level in the delivery of food and healthcare.) Couple that with a lack of access to immunizations, and you have some very severe outcomes from diseases that we in the United States have long forgotten how deadly they could be.
Let us compare two places in the world where measles is active: Europe and Madagascar. Europe, for the most part, consists of several countries deemed to be industrialized and developed. While there is some variation between countries – and within countries – most Europeans enjoy a high standard of living. They have access to nutritious food and many enjoy the benefits of universal healthcare access. Europeans have also been contending with a resurgence of measles.
In 2018, European countries broke all sorts of recent records when it came to the number of cases of measles. Thirty-four of the 53 member states struggle to bring their immunization level to 95%, the level needed for herd immunity. However, there have been relatively few – if very tragic – deaths in Europe as a result of measles. The death rate from measles has been about the average for industrialized nations: 1 in 1,000.
Compare that epidemic in Europe to the one in Madagascar. According to the World Bank, more than half of Madagascar’s children (5 years of age and younger) are chronically malnourished. The country as a whole is very poor, with over 90% of the population living on less than 2 dollars a day. Combine that with a measles vaccine that costs about 15 dollars per dose, and is rarely available in the most impoverished areas, and you can understand what has happened there recently.
With more than two thirds of children unvaccinated against measles, and the last immunization drive more than 15 years ago, Madagascar is in the middle of one of the biggest and worst measles epidemics in the world. With tens of thousands of cases and almost 1,000 deaths, measles has taken out entire families throughout the country. The estimated mortality rate for measles there is closer to 15 per 1,000 cases, compared to 1 per 1,000 in industrialized nations.
Certainly, malnutrition, lack of access to care and lack of a vaccine have all come together to hurt and kill many children in Madagascar. There is no end in sight to the epidemic there since measles is endemic, meaning that it is constantly occurring with new generations of susceptible children being born each year. Unlike Madagascar, the epidemic in Europe (and the ones we have seen here in the United States) is driven by misinformation and fear of the measles vaccine. Parents are refusing to immunize their children in the developed nations. In the developing countries, like Madagascar, parents are begging for the vaccine to save the lives of their children. Quite a contrast.
Among the misinformation circulating in anti-vaccine circles in the United States is the falsehood that vitamin A somehow protects against measles, or that children who are getting measles in the United States are simply vitamin A deficient. Some anti-vaccine groups have even started crowdfunding efforts to send vitamin A supplements to Washington State, a state with over 520,000,000,000 dollars ($520 billion) of gross domestic product in 2017. (Madagascar’s GDP hovers around $10 billion.)
Needless to say, there is no scientific evidence that vitamin A supplementation prevents measles infection. It’s not even a scientific plausibility. However, vitamin A supplementation as an additional treatment of cases of measles in order to prevent death is recommended by some researchers, particularly in places where malnutrition is known to be a problem. The best prevention of measles continues to be the measles vaccine.
All scientific discoveries come with some level of apprehension, especially if the results are found to be revolutionary, like Dr. Sommer’s observation that vitamin A supplementation prevented deaths from infectious diseases. This is why the scientific method has been developed and refined, helping us confirm these observations and expand on them. As Dr. Sommer and his colleagues found, vitamin A prevents deaths from measles and other infectious diseases, but it does not prevent the infections themselves. For that, we have a vaccine with an excellent track record of safety. Unfortunately, some parents are forgoing all vaccines for their children based on unfounded and non-scientific claims. Other parents in developing nations simply lack the resources and access to the vaccine. As long as structural, societal and communication problems persist, it will not matter how good any vaccine is at preventing an infectious disease: someone, somewhere will be left out of enjoying the benefits of the vaccine, resulting in unnecessary disease and tragic loss of life.
- “The Story of Vitamin A” (https://www.jhsph.edu/news/stories/2003/sommer-vita.html)
- “Vitamin A Deficiency” (https://emedicine.medscape.com/article/126004-overview)
- “Vitamin A Treatment of Measles” (https://pediatrics.aappublications.org/content/91/5/1014) |
How Anthropologists Group the Early Hominids
By studying early hominids (large, bipedal primates) that date back millions of years, anthropologists can track the development of the human race. When exploring anthropology, keep these important points in mind:
The evolutionary process shapes species by replication, variation, and selection, leading to adaptation.
Humans are one of roughly 200 species of the Primate order, a biological group that’s been evolving for about 60 million years.
Hominids appear (only in Africa) by at least 4 million years ago with the following adaptive characteristics: bipedalism (habitually walking on two legs), encephalization (larger brains than expected for their body size), small teeth (smaller teeth than expected for their body size — the canines in particular).
The following table summarizes what anthropology has discovered about the main groups of early hominids.
| Hominid Group, Diet, and Tool Use | Some Genera and Species Included | Fossil Finds | Dates | Evolutionary Fate |
|---|---|---|---|---|
| Gracile australopithecines: omnivorous diet with little tool use | Australopithecus afarensis, Australopithecus africanus | A. afarensis in Ethiopia, and A. africanus at many sites in South and East Africa | Over 4 million years ago (A. afarensis) to about 2 million years ago (later A. africanus) | A. afarensis probably ancestral to A. africanus; A. africanus probably ancestral to early Homo |
| Robust australopithecines: more herbivorous diet with little or no tool use | Australopithecus aethiopicus, Australopithecus boisei, Australopithecus robustus | A. aethiopicus and A. boisei in East Africa, A. robustus in South Africa | Over 2 million years ago (A. aethiopicus) to about 1 million years ago (late A. robustus) | Extinction around 1 million years ago |
| Early Homo: omnivorous diet with more animal tissue consumption and survival relying on tool use | Homo habilis, Homo rudolfensis, earliest Homo erectus | Olduvai Gorge, Tanzania and Koobi Fora, Kenya | Earliest Homo around 2.5 million years ago; clearly H. erectus by 1.8 million years ago | Evolved into H. erectus by 1.8 million years ago |
material cause: (in Aristotelian thought) the matter or substance which constitutes a thing.
- ‘The four aspects are the formal cause, the material cause, the efficient cause, and the final cause.’
- ‘Furthermore, whatever there is must have pre-existed in its material cause, as material causes cannot create something other than what is there in the first place.’
- ‘One is the material cause or matter, the physical make-up of the thing, which puts considerable restrictions on what it can be and do.’
- ‘A cause in this sense has been traditionally called a material cause, although Aristotle himself did not use this label.’
- ‘The middle term, ‘made of bronze’, expresses the cause of the statue's being, for example, malleable; and because bronze is the constituent stuff of the statue the cause here is the material cause.’
It’s the twenty-first century—the Age of Computers. Most everyone has Wi-Fi and the ability to Google anything within minutes. Communication is at the tip of our fingers and what would a car ride be without Pandora? Technology plays a significant role in our personal lives as well as our professional lives and many of us would not know how to act if we did not have constant access to a computer or a smart phone.
So if technology is such a big part of our lives, why is it not a big part of the Montessori classroom? Well, the short answer is that it is. Merriam-Webster defines technology as the practical application of knowledge, especially in a particular area. In the broad sense of the term, nearly everything we do in the classroom is technological. Students are introduced to a concept and then they are given the opportunity to apply it to something. In the classroom we typically refer to the practical application as follow-up work. Follow-up work may take the form of a research project, constructing a geometrical figure, or performing a science experiment.
When we talk about technology, however, we often mean the use of computers. Computers are a tool given to us by the technology industry, allowing us to perform incredible tasks. They also allow us to do things exponentially faster than we could without them. The word computer was originally used to describe a human who could perform calculations or computations. The first machine resembling the computers we use today was invented less than 100 years ago and weighed many tons. Now we have computers so small they fit in our pocket or on our wrist. They do much more than computations now and are able to perform many functions without human input.
A simple form of a computer is a handheld calculator. Scientific calculators can be very useful for performing higher level mathematics, but they have little use in the elementary years. Sure, a student could check her division with a calculator, but it takes about 5 seconds and almost no executive function skills. On the other hand, a student could check her division with multiplication. This exercise may take several minutes instead of 5 seconds, but the student had to employ her executive functions and got in a little multiplication practice while she was at it. Executive function is a set of mental skills performed in the prefrontal cortex of the brain that help us get things done and make good decisions. These include working memory, mental flexibility, and self-control.
Executive function skills must be taught, and that takes time. Every chance I get to instill these skills, I take it. Needless to say, we do not use calculators often.
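As a concrete illustration of the multiplication check described above, here is a minimal sketch in Python. The numbers and the function name are hypothetical, chosen only to show the idea.

```python
# Checking a division answer with multiplication:
# quotient * divisor + remainder should equal the original dividend.

def check_division(dividend, divisor):
    quotient, remainder = divmod(dividend, divisor)
    return quotient * divisor + remainder == dividend

# 157 / 12 = 13 remainder 1, and 13 * 12 + 1 = 157, so the check passes.
print(check_division(157, 12))  # True
```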
Another example of a computer is, well, a computer. Personal computers have many uses, and for now, I will focus on those that do not involve the internet. Typing or keyboarding is one of the ways that we use computers in the Montessori classroom. Students may use the computer to type a final draft of a project. Limiting computer use to tasks such as this allows students to practice their fine motor skills through handwriting and allows them to edit their work without the crutch of a spell checker. Students may also use computers for tasks like creating tables, graphs, or spreadsheets. Like typing, these are skills that the student can learn once he has learned to do the task on paper. This ensures that students understand the concept of what they are trying to create and can therefore focus their attention on attaining the computer skills they need to create it.
So what about the internet? Google makes finding information fast and easy and I do not know where I would be without it. Larry Page and Sergey Brin co-founded Google in 1998. They have often credited their success to their early Montessori education. Ironically, in Montessori classrooms students do not use Google to find the answers to their research questions. Typing in a question and getting an answer teaches students how to ask questions, not how to find answers. The co-founders of Google perfected their skills of finding answers in the Montessori classroom, which fosters self-motivation, questioning, and different ways of doing things.
Figuring out how to find the answers is just as important as actually finding them, if not more so. I have watched students come up with research questions, take them to the computer, and come back 15 minutes later with all of the answers and no enthusiasm for what they have discovered. I have also witnessed students searching through books for over an hour (while complaining profusely) before finally finding that one fact they needed to complete the research. The reaction is completely different. There is nothing like the feeling of accomplishment after putting forth time and effort on something, and that feeling simply does not come from Google. Perhaps you’re thinking that an hour is an awful lot of time wasted looking for a fact that the student could have found in a matter of minutes; however, consider what was going on in that hour. The student was attempting to solve a problem and succeeded. Not only will he probably remember the fact that he found, but he was building his executive function all the while. Not to mention, the student can now take pride in his work, because he knows he put forth substantial effort to find it.
We use books in our classroom all of the time, and students come to love them. We read for information, we read for fun, and we read often. Electronics and computers are captivating and they can be fun, but they dull the senses, do little to improve executive function, and limit social interaction. I encourage you to not only support our limited use of computers in the classroom, but to also limit your children’s usage at home. The American Academy of Pediatrics currently recommends limiting screen time to one hour for two- to five-year-olds, with no screen time at all for children under two. Television and video games can overstimulate young children and have been correlated with attention deficiencies. For children over five, the American Academy of Pediatrics recommends continuing to limit and monitor your child’s screen time while allowing them at least one hour of physical activity outside of school. Screen time is fun and candy is sweet, but attention spans and teeth are priceless.
These problems can be enjoyed before as well as after winter break. Your students can solve real-world, child-friendly problems while learning math. Students with a variety of skill levels, including typical, advanced, and reluctant learners, can successfully work through these word problems. This unit is a set of 20 constructed-response word problems.
Word Problems - Grades 1 to 6: Hundreds of self-checking math word problems for students in grades 1 to 6. There are currently 675 word problems available.
Absurd Math: Absurd Math is an interactive mathematical problem-solving game series. The player proceeds on missions in a strange world where the ultimate power consists of mathematical skill.
These problem solving starter packs are great to support students with problem solving skills. I've used them this year for two out of four lessons each week, then used Numeracy Ninjas as starters for the other two lessons. When I first introduced the booklets, I encouraged my students to use scaffolds like those mentioned here, then gradually weaned them off the scaffolds.
Use problem solving skills in these math and science games with your favorite PBS KIDS characters Wild Kratts, WordGirl, Curious George, Sesame Street and the Cat in the Hat!
Winter Fractions Word Problems Get into the spirit of winter with these word problems related to fractions. Geared towards a fourth grade math curriculum, children will use multiplication, subtraction, and addition to solve six math fraction word problems.
- Art of Problem Solving Academy (locations nationwide), website
- AwesomeMath, website
- Bard Math Circle Creative and Analytical Math Program (CAMP) (Annandale-on-Hudson, NY), website
This feature is somewhat larger than our usual features, but that is because it is packed with resources to help you develop a problem-solving approach to the teaching and learning of mathematics. Read Lynne's article which discusses the place of problem solving in the new curriculum and sets the scene.
Problem Solving Games These free maths problems activities are great for teaching and learning the skills needed to solve mathematical problems as they are engaging for young children. They lend themselves well to use with an interactive whiteboard where teachers can easily demonstrate strategies for solving problems which have different combinations of correct answers. |
6 Metamorphic Rocks
Contributing Author: Dr. Peter Davis, Pacific Lutheran University
- Describe the temperature and pressure conditions of the metamorphic environment
- Identify and describe the three principal metamorphic agents
- Describe what recrystallization is and how it affects mineral crystals
- Explain what foliation is and how it results from directed pressure and recrystallization
- Explain the relationships among slate, phyllite, schist, and gneiss in terms of metamorphic grade
- Define index mineral
- Explain how metamorphic facies relate to plate tectonic processes
- Describe what a contact aureole is and how contact metamorphism affects surrounding rock
- Describe the role of hydrothermal metamorphism in forming mineral deposits and ore bodies
Metamorphic rock, from meta- meaning change and -morphos meaning form, is one of the three rock categories in the rock cycle (see Chapter 1). Metamorphic rock material has been changed by temperature, pressure, and/or fluids. The rock cycle shows that both igneous and sedimentary rocks can become metamorphic rocks, and metamorphic rocks themselves can be re-metamorphosed. Because metamorphism is caused by plate tectonic motion, metamorphic rock provides geologists with a history book of how past tectonic processes shaped our planet.
6.1 Metamorphic Processes
Metamorphism occurs when solid rock changes in composition and/or texture without the mineral crystals melting, which is how igneous rock is generated. Metamorphic source rocks, the rocks that experience the metamorphism, are called the parent rock or protolith, from proto– meaning first, and lithos- meaning rock. Most metamorphic processes take place deep underground, inside the earth’s crust. During metamorphism, protolith chemistry is mildly changed by increased temperature (heat), a type of pressure called confining pressure, and/or chemically reactive fluids. Rock texture is changed by heat, confining pressure, and a type of pressure called directed stress.
6.1.1 Temperature (Heat)
Temperature measures a substance’s energy; an increase in temperature represents an increase in energy. Temperature changes affect the chemical equilibrium or cation balance in minerals. At high temperatures atoms may vibrate so vigorously they jump from one position to another within the crystal lattice, which remains intact. In other words, this atom swapping can happen while the rock is still solid.
The temperatures of metamorphic rock lie between those of surficial processes (as in sedimentary rock) and magma in the rock cycle. Heat-driven metamorphism begins at temperatures as low as 200˚C and can continue to occur at temperatures as high as 700°C-1,100°C. Higher temperatures would create magma, and thus would no longer represent a metamorphic process. Temperature increases with increasing depth in the Earth along a geothermal gradient (see Chapter 4), and metamorphic rock records these depth-related temperature changes.
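As a rough worked example of the geothermal gradient mentioned above, the sketch below estimates the depth at which metamorphic temperatures are reached. The gradient of about 25°C per kilometer and the 15°C surface temperature are assumed typical values for illustration; they are not given in this chapter.

```python
# Approximate depth at which a given temperature is reached, assuming a
# linear geothermal gradient (~25 C/km, assumed) and a ~15 C surface (assumed).

SURFACE_TEMP_C = 15.0      # assumed average surface temperature
GRADIENT_C_PER_KM = 25.0   # assumed typical continental geothermal gradient

def depth_for_temperature(temp_c):
    return (temp_c - SURFACE_TEMP_C) / GRADIENT_C_PER_KM

for temp in (200, 700, 1100):
    print(f"~{temp} C is reached near {depth_for_temperature(temp):.0f} km depth")
```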
6.1.2 Pressure
Pressure is the force exerted over a unit area on a material. Like heat, pressure can affect the chemical equilibrium of minerals in a rock. The pressure that affects metamorphic rocks can be grouped into confining pressure and directed stress. Stress is a scientific term indicating a force. Strain is the result of this stress, including metamorphic changes within minerals.
Pressure exerted on rocks under the surface is due to the simple fact that rocks lie on top of one another. When pressure is exerted from rocks above, it is balanced from below and sides, and is called confining or lithostatic pressure. Confining pressure has equal pressure on all sides (see figure) and is responsible for causing chemical reactions to occur just like heat. These chemical reactions will cause new minerals to form.
Confining pressure is measured in bars and ranges from 1 bar at sea level to around 10,000 bars at the base of the crust. For metamorphic rocks, pressures range from a relatively low 3,000 bars to around 50,000 bars, which occurs around 15-35 kilometers below the surface.
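The confining (lithostatic) pressure figures quoted above can be checked with the relation P = ρgh. The sketch below is a minimal illustration; the average crustal density of 2,700 kg/m³ is an assumed value, and the result of roughly 9,000-10,000 bars at about 35 km is consistent with the "around 10,000 bars at the base of the crust" figure.

```python
# Lithostatic pressure from depth: P = rho * g * h, converted to bars.
# rho = 2,700 kg/m^3 is an assumed average crustal density.

RHO_CRUST = 2700.0   # kg/m^3 (assumed)
G = 9.81             # m/s^2

def lithostatic_pressure_bars(depth_km):
    pressure_pa = RHO_CRUST * G * (depth_km * 1000.0)
    return pressure_pa / 1.0e5   # 1 bar = 100,000 Pa

for depth_km in (15, 35):
    print(f"{depth_km} km depth -> about {lithostatic_pressure_bars(depth_km):,.0f} bars")
```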
Directed stress, also called differential or tectonic stress, is an unequal balance of forces on a rock in one or more directions (see previous figure). Directed stresses are generated by the movement of lithospheric plates. Stress indicates a type of force acting on rock. Strain describes the resultant processes caused by stress and includes metamorphic changes in the minerals. In contrast to confining pressure, directed stress occurs at much lower pressures and does not generate chemical reactions that change mineral composition and atomic structure. Instead, directed stress modifies the parent rock at a mechanical level, changing the arrangement, size, and/or shape of the mineral crystals. These crystalline changes create identifying textures, which is shown in the figure below comparing the phaneritic texture of igneous granite with the foliated texture of metamorphic gneiss.
Directed stresses produce rock textures in many ways. Crystals are rotated, changing their orientation in space. Crystals can get fractured, reducing their grain size. Conversely, they may grow larger as atoms migrate. Crystal shapes also become deformed. These mechanical changes occur via recrystallization, which is when minerals dissolve from an area of rock experiencing high stress and precipitate or regrow in a location having lower stress. For example, recrystallization increases grain size much like adjacent soap bubbles coalesce to form larger ones. Recrystallization rearranges mineral crystals without fracturing the rock structure, deforming the rock like silly putty; these changes provide important clues to understanding the creation and movement of deep underground rock faults.
6.1.3 Fluids
A third metamorphic agent is chemically reactive fluids that are expelled by crystallizing magma and created by metamorphic reactions. These reactive fluids are made of mostly water (H2O) and carbon dioxide (CO2), and smaller amounts of potassium (K), sodium (Na), iron (Fe), magnesium (Mg), calcium (Ca), and aluminum (Al). These fluids react with minerals in the protolith, changing its chemical equilibrium and mineral composition, in a process similar to the reactions driven by heat and pressure. In addition to using elements found in the protolith, the chemical reaction may incorporate substances contributed by the fluids to create new minerals. In general, this style of metamorphism, in which fluids play an important role, is called hydrothermal metamorphism or hydrothermal alteration. Water actively participates in chemical reactions and allows extra mobility of the components in hydrothermal alteration.
Fluid-activated metamorphism is frequently involved in creating economically important mineral deposits that are located next to igneous intrusions or magma bodies. For example, the mining districts in the Cottonwood Canyons and Mineral Basin of northern Utah produce valuable ores such as argentite (silver sulfide), galena (lead sulfide), and chalcopyrite (copper iron sulfide), as well as the native element gold. These mineral deposits were created from the interaction between a granitic intrusion called the Little Cottonwood Stock and country rock consisting of mostly limestone and dolostone. Hot, circulating fluids expelled by the crystallizing granite reacted with and dissolved the surrounding limestone and dolostone, precipitating out new minerals created by the chemical reaction. Hydrothermal alteration of mafic mantle rock, such as olivine and basalt, creates the metamorphic rock serpentinite, a member of the serpentine subgroup of minerals. This metamorphic process happens at mid-ocean spreading centers where newly formed oceanic crust interacts with seawater.
Some hydrothermal alterations remove elements from the parent rock rather than deposit them. This happens when seawater circulates down through fractures in the fresh, still-hot basalt, reacting with and removing mineral ions from it. The dissolved minerals are usually ions that do not fit snugly in the silicate crystal structure, such as copper. The mineral-laden water emerges from the sea floor via hydrothermal vents called black smokers, named after the dark-colored precipitates produced when the hot vent water meets cold seawater (see Chapter 4, Igneous Rock and Volcanic Processes). Ancient black smokers were an important source of copper ore for the inhabitants of Cyprus (Cypriots) as early as 4,000 BCE, and later for the Romans.
6.2 Metamorphic textures
Metamorphic texture is the description of the shape and orientation of mineral grains in a metamorphic rock. Metamorphic rock textures, which may be foliated, non-foliated, or lineated, are described below.
6.2.1 Foliation and Lineation
Foliation is a term that describes minerals lined up in planes. Certain minerals, most notably the mica group, are mostly thin and planar by default. Foliated rocks typically appear as if the minerals are stacked like pages of a book, thus the use of the term ‘folia’, like a leaf. Other minerals, with hornblende being a good example, are longer in one direction, linear like a pencil or a needle, rather than planar like a book. These linear objects can also be aligned within a rock, which is referred to as a lineation. Linear crystals, such as hornblende, tourmaline, or stretched quartz grains, can be arranged as part of a foliation, a lineation, or both together. If they lie on a plane with mica, but with no common or preferred direction, this is foliation. If the minerals line up and point in a common direction, but with no planar fabric, this is lineation. When minerals lie on a plane and point in a common direction, this is both foliation and lineation.
Foliated metamorphic rocks are named based on the style of their foliations. Each rock name has a specific texture that defines and distinguishes it, with their descriptions listed below.
Slate is a fine-grained metamorphic rock that exhibits a foliation called slaty cleavage, in which the small platy crystals of mica and chlorite are oriented flat, perpendicular to the direction of stress. The minerals in slate are too small to see with the unaided eye. The thin layers in slate may resemble sedimentary bedding, but they are a result of directed stress and may lie at angles to the original strata. In fact, original sedimentary layering may be partially or completely obscured by the foliation. Thin slabs of slate are often used as a building material for roofs and tiles.
Phyllite is a foliated metamorphic rock in which platy minerals have grown larger and the surface of the foliation shows a sheen from light reflecting from the grains, perhaps even a wavy appearance, called crenulations. Similar to phyllite but with even larger grains is the foliated metamorphic rock schist, which has large platy grains visible as individual crystals. Common minerals are muscovite, biotite, and porphyroblasts of garnets. A porphyroblast is a large crystal of a particular mineral surrounded by small grains. Schistosity is a textural description of foliation created by the parallel alignment of platy visible grains. Some schists are named for their minerals such as mica schist (mostly micas), garnet schist (mica schist with garnets), and staurolite schist (mica schists with staurolite).
Gneissic banding is a metamorphic foliation in which visible silicate minerals separate into dark and light bands or lineations. These grains tend to be coarse and often folded. A rock with this texture is called gneiss. Since gneisses form at the highest temperatures and pressures, some partial melting may occur. This partially melted rock is a transition between metamorphic and igneous rocks called a migmatite.
Migmatites appear as dark- and light-banded gneiss that may be swirled or twisted somewhat, because some minerals started to melt. Thin accumulations of light-colored rock layers can occur in a darker rock, parallel to each other or even cutting across the gneissic foliation. The lighter-colored layers are interpreted to be the result of the separation of a felsic igneous melt from the adjacent highly metamorphosed darker layers, or the injection of a felsic melt from some distance away.
6.2.2 Non-foliated Textures
Non-foliated textures do not have lineations, foliations, or other alignments of mineral grains. Non-foliated metamorphic rocks are typically composed of just one mineral, and therefore, usually show the effects of metamorphism with recrystallization in which crystals grow together, but with no preferred direction. The two most common examples of non-foliated rocks are quartzite and marble. Quartzite is a metamorphic rock from the protolith sandstone. In quartzites, the quartz grains from the original sandstone are enlarged and interlocked by recrystallization. A defining characteristic for distinguishing quartzite from sandstone is that when broken with a rock hammer, the quartz crystals break across the grains. In a sandstone, only a thin mineral cement holds the grains together, meaning that a broken piece of sandstone will leave the grains intact. Because most sandstones are rich in quartz, and quartz is a mechanically and chemically durable substance, quartzite is very hard and resistant to weathering.
Marble is metamorphosed limestone (or dolostone) composed of calcite (or dolomite). Recrystallization typically generates larger interlocking crystals of calcite or dolomite. Marble and quartzite often look similar, but these minerals are considerably softer than quartz. Another way to distinguish marble from quartzite is with a drop of dilute hydrochloric acid. Marble will effervesce (fizz) if it is made of calcite.
A third non-foliated rock is hornfels, identified by its dense, fine-grained, hard, blocky or splintery texture, composed of several silicate minerals. Crystals in hornfels grow smaller with metamorphism and become so small that specialized study is required to identify them. Hornfels is common around intrusive igneous bodies and is hard to identify. The protolith of hornfels can be even harder to distinguish; it can be anything from mudstone to basalt.
6.3 Metamorphic Grade
Metamorphic grade refers to the range of metamorphic change a rock undergoes, progressing from low (little metamorphic change) grade to high (significant metamorphic change) grade. Low-grade metamorphism begins at temperatures and pressures just above sedimentary rock conditions. The sequence slate→phyllite→schist→gneiss illustrates an increasing metamorphic grade.
Geologists use index minerals that form at certain temperatures and pressures to identify metamorphic grade. These index minerals also provide important clues to a rock’s sedimentary protolith and the metamorphic conditions that created it. Chlorite, muscovite, biotite, garnet, and staurolite are index minerals representing a respective sequence of low-to-high grade rock. The figure shows a phase diagram of three index minerals—sillimanite, kyanite, and andalusite—with the same chemical formula (Al2SiO5) but having different crystal structures (polymorphism) created by different pressure and temperature conditions.
Some metamorphic rocks are named based on the highest grade of index mineral present. Chlorite schist includes the low-grade index mineral chlorite. Muscovite schist contains the slightly higher-grade muscovite, indicating a greater degree of metamorphism. Garnet schist includes the high-grade index mineral garnet, indicating it has experienced much higher pressures and temperatures than chlorite schist.
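A minimal sketch of the naming convention just described: a schist takes its name from the highest-grade index mineral present. The low-to-high ordering follows the sequence listed above; the function and the sample mineral lists are purely illustrative.

```python
# Name a schist after the highest-grade index mineral it contains,
# using the low-to-high sequence given in the text.

INDEX_MINERALS = ["chlorite", "muscovite", "biotite", "garnet", "staurolite"]

def name_schist(minerals_present):
    ranks = [INDEX_MINERALS.index(m) for m in minerals_present if m in INDEX_MINERALS]
    if not ranks:
        return "schist (no index mineral observed)"
    return f"{INDEX_MINERALS[max(ranks)]} schist"

print(name_schist(["chlorite", "muscovite"]))            # muscovite schist
print(name_schist(["muscovite", "biotite", "garnet"]))   # garnet schist
```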
6.4 Metamorphic Environments
As with igneous processes, metamorphic rocks form at different zones of pressure (depth) and temperature as shown on the pressure-temperature (P-T) diagram. The term facies is an objective description of a rock. In metamorphic rocks facies are groups of minerals called mineral assemblages. The names of metamorphic facies on the pressure-temperature diagram reflect minerals and mineral assemblages that are stable at these pressures and temperatures and provide information about the metamorphic processes that have affected the rocks. This is useful when interpreting the history of a metamorphic rock.
In the late 1800s, British geologist George Barrow mapped zones of index minerals in different metamorphic zones of an area that underwent regional metamorphism. Barrow outlined a progression of index minerals, named the Barrovian Sequence, that represents increasing metamorphic grade: chlorite (slates and phyllites) -> biotite (phyllites and schists) -> garnet (schists) -> staurolite (schists) -> kyanite (schists) -> sillimanite (schists and gneisses).
The first of the Barrovian sequence has a mineral group that is commonly found in the metamorphic greenschist facies. Greenschist rocks form under relatively low pressure and temperatures and represent the fringes of regional metamorphism. The “green” part of the name is derived from green minerals like chlorite, serpentine, and epidote, and the “schist” part is applied due to the presence of platy minerals such as muscovite.
Many different styles of metamorphic facies are recognized, tied to different geologic and tectonic processes. Recognizing these facies is the most direct way to interpret the metamorphic history of a rock. A simplified list of major metamorphic facies is given below.
6.4.1 Burial Metamorphism
Burial metamorphism occurs when rocks are deeply buried, at depths of more than 2,000 meters (1.24 miles). Burial metamorphism commonly occurs in sedimentary basins, where rocks are buried deeply by overlying sediments. As an extension of diagenesis, a process that occurs during lithification (Chapter 5), burial metamorphism can cause clay minerals, such as smectite, in shales to change to another clay mineral, illite. Or it can cause quartz sandstone to metamorphose into quartzite, such as the Big Cottonwood Formation in the Wasatch Range of Utah. This formation was deposited as ancient near-shore sands in the late Proterozoic (see Chapter 7), deeply buried and metamorphosed to quartzite, folded, and later exposed at the surface in the Wasatch Range today. The increase of temperature with depth, in combination with an increase of confining pressure, produces low-grade metamorphic rocks with mineral assemblages indicative of the zeolite facies.
6.4.2 Contact Metamorphism
Contact metamorphism occurs in rock exposed to high temperature and low pressure, as might happen when hot magma intrudes into or lava flows over pre-existing protolith. This combination of high temperature and low pressure produces numerous metamorphic facies. The lowest pressure conditions produce hornfels facies, while higher pressure creates greenschist, amphibolite, or granulite facies.
As with all metamorphic rock, the parent rock texture and chemistry are major factors in determining the final outcome of the metamorphic process, including what index minerals are present. Fine-grained shale and basalt, which happen to be chemically similar, characteristically recrystallize to produce hornfels. Sandstone (silica) surrounding an igneous intrusion becomes quartzite via contact metamorphism, and limestone (carbonate) becomes marble.
When contact metamorphism occurs deeper in the Earth, metamorphism can be seen as rings of facies around the intrusion, resulting in aureoles. These differences in metamorphism appear as distinct bands surrounding the intrusion, as can be seen around the Alta Stock in Little Cottonwood Canyon, Utah. The Alta Stock is a granite intrusion surrounded first by rings of the index minerals amphibole (tremolite) and olivine (forsterite), with a ring of talc (dolostone) located further away.
6.4.3 Regional Metamorphism
Regional metamorphism occurs when parent rock is subjected to increased temperature and pressure over a large area, and is often located in mountain ranges created by converging continental crustal plates. This is the setting for the Barrovian sequence of rock facies, with the lowest grade of metamorphism occurring on the flanks of the mountains and highest grade near the core of the mountain range, closest to the convergent boundary.
An example of an old regional metamorphic environment is visible in the northern Appalachian Mountains while driving east from New York state through Vermont and into New Hampshire. Along this route the degree of metamorphism gradually increases from sedimentary parent rock, to low-grade metamorphic rock, then higher-grade metamorphic rock, and eventually the igneous core. The rock sequence is sedimentary rock, slate, phyllite, schist, gneiss, migmatite, and granite. In fact, New Hampshire is nicknamed the Granite State. The reverse sequence can be seen heading east, from eastern New Hampshire to the coast.
6.4.4 Subduction Zone Metamorphism
Subduction zone metamorphism is a type of regional metamorphism that occurs when a slab of oceanic crust is subducted under continental crust (see Chapter 2). Because rock is a good insulator, the temperature of the descending oceanic slab increases slowly relative to the more rapidly increasing pressure, creating a metamorphic environment of high pressure and low temperature. Glaucophane, which has a distinctive blue color, is an index mineral found in blueschist facies (see metamorphic facies diagram). The California Coast Range near San Francisco has blueschist-facies rocks created by subduction-zone metamorphism, which include rocks made of blueschist, greenstone, and red chert. Greenstone, which is metamorphosed basalt, gets its color from the index mineral chlorite.
6.4.5 Fault Metamorphism
There is a range of metamorphic rocks made along faults. Near the surface, rocks involved in repeated brittle faulting produce a material called rock flour, which is rock ground up to the particle size of flour used for food. At greater depths, faulting creates cataclasites, chaotically crushed mixes of rock material with little internal texture. At depths below cataclasites, where strain becomes ductile, mylonites are formed. Mylonites are metamorphic rocks created by dynamic recrystallization through directed shear forces, generally resulting in a reduction of grain size. When larger, stronger crystals (like feldspar, quartz, or garnet) embedded in a metamorphic matrix are sheared into an asymmetrical eye-shaped crystal, an augen is formed.
6.4.6 Shock Metamorphism
Shock (also known as impact) metamorphism is metamorphism resulting from meteor or other bolide impacts, or from a similar high-pressure shock event. Shock metamorphism is the result of very high pressures (and elevated, but less extreme, temperatures) delivered relatively rapidly. It produces planar deformation features, tektites, shatter cones, and quartz polymorphs. Planar deformation features (shock laminae) are narrow planes of glassy material with distinct orientations found in silicate mineral grains. Shocked quartz has planar deformation features.
Shatter cones are cone-shaped pieces of rock created by dynamic branching fractures caused by impacts. While not strictly a metamorphic structure, they are common around shock metamorphism. Their diameter can range from microscopic to several meters. Fine-grained rocks with shatter cones show a distinctive horsetail pattern.
Shock metamorphism can also produce index minerals, though they are typically only found via microscopic analysis. The quartz polymorphs coesite and stishovite are indicative of impact metamorphism. As discussed in Chapter 3, polymorphs are minerals with the same composition but different crystal structures. Intense pressure (> 10 GPa) and moderate to high temperatures (700-1200 °C) are required to form these minerals.
Shock metamorphism can also produce glass. Tektites are gravel-size glass grains ejected during an impact event. They resemble volcanic glass but, unlike volcanic glass, tektites contain no water or phenocrysts, and have a different bulk and isotopic chemistry. Tektites contain partially melted inclusions of shocked mineral grains. Although all are melt glasses, tektites are also chemically distinct from trinitite, which is produced by thermonuclear detonations, and from fulgurites, which are produced by lightning strikes. All geologic glasses not derived from volcanoes can be called by the general term pseudotachylyte, a name that can also be applied to glasses created by faulting. The prefix pseudo in this context means ‘false’ or ‘in the appearance of’ the volcanic rock tachylite, because the material looks like a volcanic rock but is produced by significant shear heating.
Metamorphism is the process that changes existing rocks (called protoliths) into new rocks with new minerals and new textures. Increases in temperature and pressure are the main causes of metamorphism, with fluids adding important mobilization of materials. The primary way metamorphic rocks are identified is with texture. Foliated textures come from platy minerals forming planes in a rock, while non-foliated metamorphic rocks have no internal fabric. Grade describes the amount of metamorphism in a rock, and facies are a set of minerals that can help guide an observer to an interpretation of the metamorphic history of a rock. Different tectonic or geologic environments cause metamorphism, including collisions, subduction, faulting, and even impacts from space. |
Loneliness: What it Means for Individuals with Disabilities and How to Help
Today, in the midst of the Coronavirus pandemic, people are experiencing more alone time than ever. We’re encouraged to physically distance ourselves from one another. Businesses deemed non-essential have closed until further notice, and we can’t even visit with friends and family outside of our immediate households.
While some of us thrive with ample alone time, involuntary self isolation––and too much of it––can lead to damaging short and long term side effects. Alone time that evolves into loneliness can result in us feeling “less than human” and promote negative feelings.
These side effects are magnified for individuals with intellectual and developmental disabilities (IDD). Because of their limited opportunities to engage in social and emotional relationships, people with IDD often report higher levels of loneliness than people without.
Research on individuals with disabilities and isolation reveals that:
- Of young adults with developmental disabilities, 85% say they feel lonely most days.
- Of the 87% of autistic adults who live with their parents, only 22% want to.
- Individuals with IDD have an average of 3.1 people in their social network versus 125 social network members observed in the general population.
- Marriages occur less frequently than in the general population, and individuals with severe intellectual disability rarely marry.
- Mental health disorders such as depression and anxiety may be triggered or worsened by loneliness.
- Loneliness and social isolation reduces life span as much as smoking 15 cigarettes a day.
Social Media Could be One Solution to Reduce Loneliness
With Facebook, Instagram, WhatsApp, and other social media platforms, individuals can make friends all over the world. These online interactions help them develop social skills applicable for situations they may encounter in their own physical environment, further moving them toward greater independence. Individuals who develop social skills feel more confident in engaging with others, and are more apt to be involved in the community.
In addition to social media platforms, individuals may also explore YouTube to expand their horizons. YouTube is a free service and can be a great avenue for discovering music videos, how-to-guides, crafts, recipes, exercises, and more. YouTube, along with many other social platforms are increasing measures to protect vulnerable persons including individuals with intellectual and developmental disabilities.
For more information, check out our post, How Social Media can Reduce Feelings of Isolation in Individuals with Disabilities for tips on safety and having a successful social media presence.
Social Integration: Mending the Disconnect
Unfortunately, emotional isolation and social isolation go hand in hand. People who feel they do not belong in a social group––due to race, class, disabilities, and other criteria––experience social isolation. Because of this, these same individuals often have no intimate, reciprocal relationships, which leads to them experiencing emotional isolation, and ultimately, loneliness.
In a recent review exploring concepts relevant to loneliness, loneliness was broken down to comprise three components (Wang et al., 2016). According to this study, loneliness can be understood as:
- A painful experience that arises when there is a discrepancy between the individual’s expectations concerning relationships and his/her actual experience
- The perception that an individual’s social and emotional needs are not being met by the quantity and quality of a social relationship
- Multidimensional in nature, consisting of both a social and an emotional dimension.
Loneliness is the result of a disconnect. A disconnect between the individual’s expectations and reality, a disconnect between what is and isn’t being met to fulfill emotional needs, and being disconnected from the community on a social and emotional level.
Social integration, therefore, is critical to mend this disconnect. This means individuals with disabilities interacting with other people in their community who may or may not have disabilities. This community integration leads to improvements in quantity and quality of relationships, which reduces loneliness.
Covey’s Role in Battling Loneliness
The most promising way to promote social interactions is through social skill and support-based interventions. Covey is an important resource for individuals with disabilities and their families. Located in Oshkosh and Appleton, Wisconsin, our caring staff is all about empowering our clients to strive toward greater independence and live fulfilling lives. We approach challenges with creativity and compassion, and are invested in helping individuals achieve their potential.
Our clients, their families, and our community all benefit from our services. We offer social development programs where our clients learn important social skills that empower them to become engaged members of our community. Our respite program is unique and customized to each individual. We work diligently to create an environment where everyone feels valued and accepted, and where clients develop essential relationships with their peers, staff, and volunteers.
We also actively volunteer in our community. Several times a week, Covey participants lend helping hands––and hearts––to animal shelters, assisted living centers, secondhand shops, and more. |
WHAT IS SALMONELLA?
Salmonella are a group of bacteria that can cause diarrheal illness in people. This constitutes a major public health burden and represents a significant cost to society in many countries. One species, Salmonella enterica, has more than 2,000 serovars; Salmonella enterica ser. Typhimurium and Salmonella Enteritidis are the serovars most commonly encountered globally. Salmonella are inhabitants of the feces of many types of animals. According to a Centers for Disease Control and Prevention estimate, 1.0 million cases of foodborne salmonellosis occur each year in the United States.1 Salm-Surv is a global effort devoted to foodborne disease surveillance and outbreak detection; Salmonella is its original focus because of the significant role this organism plays in foodborne disease worldwide.2
WHAT ARE THE SYMPTOMS?
The disease caused by Salmonella is generally called salmonellosis. Symptoms of salmonellosis typically appear 12 to 36 hours after contaminated food is eaten, and last for one to four days. Symptoms include diarrhea, abdominal pain, chills, fever, vomiting, dehydration and headache. In some cases, individuals recovering from salmonellosis may continue to shed Salmonella in their feces for weeks to months after symptoms have disappeared.
HOW IS IT TRANSMITTED?
The primary reservoirs of Salmonella are the intestinal tracts of infected domestic and wild animals, thus foods of animal origin such as poultry, eggs, beef and pork are often sources. People can also carry Salmonella in their intestinal tracts. Salmonella is passed in the feces and can remain alive outside of the host animal possibly for years. Many foods, including produce, can become contaminated by the unclean hands of a food handler, by cross-contamination during preparation, or by irrigation or preparation of foods with contaminated water. Contaminated ice has also caused salmonellosis outbreaks. Other foods identified as vehicles for Salmonella transmission include coconut, chocolate, peanut butter, yeast, and soy. Flour may be contaminated with Salmonella; however, proper cooking inactivates the organism.
Outbreaks have occurred from foods contaminated with just a few cells, especially when the cells are protected in the digestive tract by high levels of fat in the food. Other foods involved in outbreaks may require higher contamination rates to result in illness. The infectious dose depends on the age and health of the host, strain differences among the members of the genus, and protective effects of the food.
A large Salmonella outbreak in 2008, which affected 1,442 patients, was epidemiologically associated with raw produce including tomatoes and Serrano and jalapeno peppers. Traceback connected this outbreak to a distributor and 2 farms. The contamination mechanism was not determined.3
The largest shell egg recall in US history occurred in 2010, when shell eggs from 2 production facilities were linked epidemiologically to more than 1,470 cases of Salmonella Enteritidis (SE) illness. Environmental samples at the producers tested positive, indicating that feed might be a source.4 SE can be found inside an infected egg, thus washing eggs will not eliminate the hazard.
HOW IS IT CONTROLLED?
Control of Salmonella focuses on adequate cooking of potentially contaminated foods. Cross-contamination control is also essential for cooked and ready-to-eat foods. Sanitary practices and adequate hand washing are critical in this area. Good Agricultural Practices are also essential for produce safety. The organism can grow over a temperature range of 7-46°C, water activity as low as 0.94, and pH from 4.4-9.4.5
In conjunction with the SE outbreak in shell eggs, the CDC issued recommendations6 for handling eggs that included refrigerating eggs at ≤ 45°F, cooking until the white and yolk are firm, promptly refrigerating cooked eggs, performing proper sanitation of hands, utensils and food preparation surfaces, not consuming raw eggs and using pasteurized eggs where possible. Additionally, as part of a cooperative venture between the FDA and USDA, an egg safety rule7 was issued on July 9, 2010, which includes flock-based control programs and routine microbiological testing. Education efforts on the requirements continue.
For assistance with this topic or other food safety questions for your operation, please Contact Us.
REFERENCES AND FURTHER INFORMATION
1Scallan E, Hoekstra RM, Angulo FJ, Tauxe RV, Widdowson M-A, Roy SL, et al. Foodborne illness acquired in the United States—major pathogens. Emerg Infect Dis. 2011 Jan. http://wwwnc.cdc.gov/eid/article/17/1/p1-1101_article.htm
2World Health Organization Global Salm-Surv Progress Report 2000-2005.
3Salmonella outbreak attributed to multiple raw produce items, http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5734a1.htm?s_cid=mm5734a1_e
4Multistate Outbreak of Salmonella Enteritidis Associated with Shell Eggs. http://www.cdc.gov/salmonella/enteritidis/
5International Commission on Microbiological Specifications of Foods. Microorganisms in Foods 5, Microbiological Specifications of Food Pathogens. Blackie Academic and Professional, New York. 1996.
6Egg safety information for consumers: http://www.fda.gov/Food/ResourcesForYou/Consumers/ucm077342.htm
7Egg Safety Final Rule |
Chiara Villani, Agrobiodiversity Index Project Coordinator, and Sarah Jones, Associate Scientist for Sustainable Agricultural Production, at the Alliance of Bioversity International and the International Center for Tropical Agriculture, explore why increasing agrobiodiversity in diet, markets, production systems and genetic resources is key to developing sustainable food systems.
In 1845, the Irish Potato Famine took hold, leading to the deaths of one million people and the mass emigration of a million more in the years that followed. The root of the problem was a fungal disease, which resulted in a devastating shortage of Ireland’s staple crop and a nationwide population decline of around a quarter.
More than a century later, in the 1950s and 1960s, Gros Michel bananas were decimated throughout the tropics by another fungal disease – Panama disease, also referred to as Fusarium wilt. The economic damage was huge. Export trade declined and producers were forced to switch to a disease-resistant variety, the Cavendish banana.
Today’s dominant crops are dominant for a reason: they tend to be highly productive, hardy or easy for farmers to grow. However, reliance on a single crop type can pose risks. If growing conditions change, for instance because of climate change or pest and disease outbreaks, these dominant crops may become less well-suited.
Agricultural biodiversity – or agrobiodiversity – helps increase the resilience of farmers’ livelihoods, improve people’s diets, manage landscapes more sustainably and safeguard crop breeding efforts into the future.
Why agrobiodiversity matters
Agrobiodiversity refers to both domesticated and wild species that contribute to food production. It underpins nutritious diets and human health as well as helping conserve ecosystems by increasing the abundance of pollinators such as bees and birds, and keeping soils healthy.
Food systems comprise many different actors, including consumers, farmers, agri-food companies, local and national governments, and development agencies. All have diverse needs and concerns, which must be addressed if agrobiodiversity is to be embraced and more sustainable food systems established.
Reaping rewards throughout the value chain
For farmers, on-farm conservation is one key way to maintain or build agrobiodiversity. This approach helps farmers to cope with stresses like climate change, by supporting them to develop new portfolios of adaptive traits. Making the most of local knowledge and natural resources are just some of the conservation methods that farmers can use to improve genetic diversity.
For companies, building reliable supply chains and making a profit are essential. They therefore need to be able to manage risks and align to changing consumer priorities, which are shifting towards healthier and more environmentally sustainable products. Farmers care about decreasing their exposure to risks, including climate change, pests and diseases. Countries are concerned with addressing today’s global challenges, including malnutrition, poverty and climate change. Development agencies are preoccupied with formulating projects that have a positive impact on the health of people and the planet.
For food consumers, more diverse crops help contribute to more nutritious diets. With global hunger rising and a third of the developing world’s population suffering from poor nutrition, diversity in agriculture is essential to improve food security and end severe malnutrition across the globe.
Looking to the future
Conserving genetic diversity is also important for supporting future breeding efforts. One crucial way to safeguard and improve access to genetic resources is through genebanks.
By conserving crop wild relatives, related to the staple foods we eat, useful traits can be preserved to help scientists develop more resilient or disease-resistant crop varieties. For example, the International Musa Transit Centre preserves the world’s largest collection of banana diversity, with more than 1,500 samples of edible and wild species of banana. This means that if conditions change and another event like the Panama disease outbreak occurs, genebanks can equip researchers to develop new, fortified strains of banana crops.
Improving access to agrobiodiversity data
Recognition of these different approaches and priorities is central to the Agrobiodiversity Index, a tool focused on improving access to data and knowledge in order to encourage better use and conservation of agrobiodiversity at all levels – from individual to global. In our era of information overload, food producers, consumers and governments alike need reliable data for decision-making that cuts through complexity to find evidence-based solutions. They also need support to understand how they are doing – whether they are making progress, for instance, or how they compare with their peers.
By detecting agrobiodiversity-related risks and opportunities for more sustainable and resilient food production and markets now and into the future, the Agrobiodiversity Index serves as a valuable tool for those involved in managing food systems. It helps them understand why they should care about agrobiodiversity and what they can do to improve it.
Through embracing solutions that enhance the use and conservation of agrobiodiversity, food systems can be made more sustainable and deliver better food security and nutrition to people around the world. |
Electron Affinities of the Main-Group Elements*
The electron affinity is a measure of the energy change when an electron is added to a neutral atom to form a negative ion. For example, when a neutral chlorine atom in the gaseous form picks up an electron to form a Cl- ion, it releases an energy of 349 kJ/mol or 3.6 eV/atom. It is said to have an electron affinity of -349 kJ/mol and this large number indicates that it forms a stable negative ion. Small numbers indicate that a less stable negative ion is formed. Groups VIA and VIIA in the periodic table have the largest electron affinities.
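The two units quoted above are related by the conversion 1 eV per atom ≈ 96.485 kJ/mol, so the figure can be checked directly; the short Python snippet below is an added illustration, not part of the original table notes.

```python
# Convert an electron affinity from kJ/mol to eV per atom.
KJ_PER_MOL_PER_EV = 96.485   # 1 eV per atom expressed in kJ/mol

ea_kj_per_mol = 349.0        # chlorine, from the text
print(f"{ea_kj_per_mol / KJ_PER_MOL_PER_EV:.2f} eV per atom")  # about 3.62
```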
* Alkaline earth elements (Group IIA) and noble gases (Group VIIIA) do not form stable negative ions. |
Biology, the study of living things, is a broad and enticing area in which students of all levels can create original, exciting and instructive science fair projects. Evolution, botany, zoology, taxonomy and a host of other interconnected disciplines offer rich opportunities for kids from elementary school through high school to engage their peers in biology topics and further their own learning, perhaps as part of a journey toward becoming scientists themselves.
Tree Types and Identification
Most young children are aware that not all trees around them look alike and not just because they are different sizes. Why do some trees change colors? Why do some have needles rather than leaves? What are pine cones, and what do they do? Elementary school kids can tackle the basics by setting up a display that distinguishes evergreens (coniferous trees) from leafy trees (deciduous trees). Then they can add basic facts about where these trees most easily grow, which animals live in or near them, and how the parts of the country or world in which they flourish relate to their shape. Youngsters like superlatives, so including a photo of the tallest or widest trees in the world would spark interest in further botanical study in classrooms.
Bacteria are everywhere in our lives. Some of them are notably harmful, while others are vital to our day-to-day existence. A middle-school biology science fair experiment that involves growing bacterial colonies offers a display of the impressive reproductive rates of microbial life and also highlights the fact that bacteria are found virtually everywhere other life forms exist. Students can easily grow bacterial colonies in petri-dish media. Then, they can supplement these with informational cards concerning the basic structure and function of cells, as well as explanations as to why some bacteria are good for humans (such as species found on the skin and in the intestines) while others cause disease and debility.
Ecological matters have intensified as a public concern since the latter part of the 20th century. Constructing food webs (or food chains, as they are often called) for a given ecosystem can highlight the elegance and delicate nature of the interplay between plants, different animal species and climate in a particular area or region. High-school biology science fair students can be encouraged to create a basic food web relating to wildlife in the state or general geographic region where they attend school, with information about the overall health of the primary species added to emphasize the strong interdependence among seemingly unrelated creatures and flora in maintaining a given ecosystem's balance. |
Airplane accidents are especially dangerous because jet fuel is highly flammable under crash conditions. On impact, jet fuel is dispersed in the air as a fine mist, which triggers a sequence of events that can lead to a fire engulfing an entire plane.
Researchers at the California Institute of Technology and the Jet Propulsion Laboratory, which is managed by Caltech, have been working on additives that inhibit the formation of this highly flammable mist during collisions. These additives are based on long molecules called polymers.
"This research is about making fuel safer and saving lives," said Project Manager Virendra Sarohia, based at JPL.
A new Caltech-led study in the journal Science describes polymers that could increase the safety of jet fuel and diesel fuel, particularly in the event of a collision or a deliberate attempt to create a fuel explosion as part of a terrorist attack.
"The new polymers could reduce the intensity of post-crash fires, providing time for more passengers to escape," said Julia Kornfield, a professor of chemical engineering at Caltech who mentored Ming-Hsin Wei, Boyu Li and Ameri David. Their doctoral research is presented in the study.
Fuel misting also happens in jet engines under normal operations. The engine repeatedly ignites a combination of a spray of fuel and compressed air, and this process thrusts the plane forward. The problem arises when a fuel mist is created outside the engine. For example, when a plane crashes, the entire volume of fuel could be involved in misting.
"Once we control the mist in a crash, this aviation fuel is hard to ignite," said Sarohia, who collaborated with JPL technologist Simon Jones. "It allows time to fight fires and time to evacuate people from the accident."
Various tests have been conducted in relation to the new study. Impact tests using jet fuel show that the polymers reduce flame propagation in the resulting mist. In other tests, the polymers showed no adverse effects on diesel engine operation, researchers say.
Larger-scale production is needed to provide enough polymer for jet engine tests.
"Years of testing are required to achieve FAA approval for use in jet fuel, so the polymer might be used first to reduce post-crash fires on roadways," Kornfield said.
How the Polymers Work
A polymer is a large molecule that has regularly repeated units. The new technology consists of polymer chains that are able to reversibly link together through chemical groups on their ends that stick together like Velcro. If you link these polymers end-to-end, very large chains form, which the study authors call "mega-supramolecules."
"Our polymers have backbones that, like fuel, have just carbon and hydrogen, but they are much, much longer. Typically our polymers have 50,000 carbon atoms in the backbone," said Kornfield.
"Such long polymers, specially constructed for a fuel additive, are unprecedented. Many years of laboratory effort have gone into the design of their structure and the development of careful methods for their synthesis," said Jones.
Sarohia likens the mechanism of the fuel additive to the clotting of blood. While blood is in the veins, it should flow freely; clotting in the veins could be fatal. But blood is supposed to clot when it gets to the surface of skin, so that a person doesn't bleed out. Similarly, the jet fuel with the polymer added should flow normally during routine operation of the aircraft; it's only during a collision that it should act to control the mist.
Sarohia has been working on this research since the 1970s. The Tenerife Airport disaster in the Canary Islands in 1977, in which 583 passengers aboard two planes were killed in a runway collision, demonstrated the need for safer jet fuel. An international collaboration resulted in successful sled-driven plane crash tests of a fuel additive in the early 1980s.
But the analyses of a 1984 full-scale impact test in California's Mojave Desert were mixed. There was no more activity in the research program for more than a decade.
It looked as though the program had ended for good. But Sarohia remembers that after the Sept. 11, 2001, attacks on the World Trade Center, his daughter asked him, "Where's your fuel?" That got him thinking about the polymer again.
Not long afterwards, Sarohia received the support of JPL to restart the investigation of a polymer to control fuel mist. In 2003, Sarohia and colleagues demonstrated in tests at China Lake, California, that the polymer could be effective even at 500 mph impact speeds. The results provided the impetus for the Caltech-JPL collaboration.
The fuel additive tested in the 1980s consisted of ultralong polymers that interfered with engine operation. Therefore each and every aircraft would need to be retrofitted with a device called a "degrader" to break the polymers into small segments just before injection in the engine. However, the new polymers can release their end associations during fuel-injection and disperse into smaller units that are compatible with engine operation.
"The hope is that it will not require the modification of the engine," Sarohia said.
Long-haul diesel engine tests also show that the polymer has the potential to reduce emissions of particulate matter by controlling the fuel droplet size. These mega-supramolecules may also reduce resistance to flow through pipelines. Ongoing research is establishing methods to produce the larger quantities of the polymer required to explore these opportunities.
The Science study was funded by the U.S. Army Tank Automotive Research Development and Engineering Center, the Federal Aviation Administration, the Schlumberger Foundation, and the Gates Grubstake Fund.
News Media Contact: Elizabeth Landau
NASA's Jet Propulsion Laboratory, Pasadena, Calif. |
Meningococcemia is an acute and potentially life-threatening infection of the bloodstream.
Meningococcal septicemia; Meningococcal blood poisoning; Meningococcal bacteremia
Meningococcemia is caused by bacteria called Neisseria meningitidis. The bacteria often live in a person's upper respiratory tract without causing visible signs of illness. They can be spread from person to person through respiratory droplets. For example, you may become infected if you are around someone with the condition and they sneeze or cough.
Family members and those closely exposed to someone with the condition are at increased risk. The infection occurs more often in winter and early spring.
There may be few symptoms at first. Some may include:
- Muscle pain
- Rash with red or purple spots
Later symptoms may include:
- A decline in your level of consciousness
- Large areas of bleeding under the skin
Exams and Tests
Blood tests will be done to rule out other infections and help confirm meningococcemia. Other tests may also be done.
Meningococcemia is a medical emergency. People with this infection are often admitted to the intensive care unit of the hospital, where they are closely monitored. They may be placed in respiratory isolation for the first 24 hours to help prevent the spread of the infection to others.
Treatments may include:
- Antibiotics given through a vein immediately
- Breathing support
- Clotting factors or platelet replacement, if bleeding disorders develop
- Fluids through a vein
- Medicines to treat low blood pressure
- Wound care for areas of skin with blood clots
Early treatment results in a good outcome. When shock develops, the outcome is less certain.
The condition is most life-threatening in people with certain underlying risk factors.
Possible complications of this infection are:
- Bleeding disorder (DIC)
- Gangrene due to lack of blood supply
- Inflammation of blood vessels in the skin
- Inflammation of the heart muscle
- Inflammation of the heart lining
- Severe damage to adrenal glands that can lead to low blood pressure (Waterhouse-Friderichsen syndrome)
When to Contact a Medical Professional
Go to the emergency room immediately if you have symptoms of meningococcemia. Call your health care provider if you have been around someone with the disease.
Preventive antibiotics for family members and contacts are often recommended. Speak with your health care provider about this option.
A vaccine that covers some, but not all, strains of meningococcus is recommended for children age 11 or 12. A booster is given at age 16. Unvaccinated college students who live in dormitories should also consider receiving this vaccine. It should be given a few weeks before they first move into the dorm. Talk to your provider about the appropriate use of this vaccine.
Stephens DS. Neisseria meningitidis infections. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 24th ed. Philadelphia, PA: Elsevier; 2011:chap 306.
Stephens DS, Apicella MA. Neisseria meningitidis. In: Mandell GL, Bennett JE, Dolin R, eds. Principles and Practice of Infectious Diseases. 8th ed. Philadelphia, PA: Elsevier Churchill Livingstone; 2014:chap 213.
- Last reviewed on 12/7/2014
- Jatin M. Vyas, MD, PhD, Associate Professor in Medicine, Harvard Medical School; Assistant in Medicine, Division of Infectious Disease, Department of Medicine, Massachusetts General Hospital, Boston, MA. Also reviewed by David Zieve, MD, MHA, Isla Ogilvie, PhD, and the A.D.A.M. Editorial team.
Heart palpitations are sensations that feel like your heart is fluttering, skipping a beat, racing or pounding. Palpitations are felt in your neck, chest or throat. They are usually not serious; however, they may create some anxiety--which actually aggravates the problem. It is only when palpitations indicate an abnormal heart rhythm that they are a symptom of a more serious condition, such as heart disease, low potassium or an abnormality in one of the heart valves.
Normal Heart Rate
According to the National Institutes of Health’s MedlinePlus, a normal heart rate is 60 to 100 beats per minute. The heart rate of a person who engages in regular physical activity may drop lower than 55 beats per minute. A heart rate faster than 100 beats per minute is a condition known as tachycardia. An occasional extra heartbeat is known as an extrasystole, or ectopic heartbeat, which is also a characteristic of heart palpitations.
Exercise and Palpitations
Palpitations seldom occur during exercise. They usually occur before and after physical activity, according to the University of Iowa Hospitals and Clinics. As the heart rate increases during exercise, extra heartbeats or palpitations go away. After exercise, adrenaline levels remain high for a period of time as the heart rate slows down. Palpitations or extra heartbeats return during this period, sometimes at an increased rate and frequency.
Exercise Can Help
Exercise can actually help the problem by relieving stress--another contributing factor to palpitations--and keeping the heart in good condition. Changing up the types of activity you do to see if there is a change in frequency and occurrence of your heart palpitations may help you identify “trigger” activities you should probably modify or stop altogether.
Notify Your Doctor
If you have high blood pressure, high cholesterol or diabetes, you should notify your doctor if you begin experiencing heart palpitations. Also tell your doctor if there has been a sudden change in palpitation rhythm or frequency, your pulse is higher than 100 beats per minute with no anxiety, exercise or fever, or you frequently feel extra heartbeats coming at more than six per minute. It may also help to document the time, type and duration of the palpitations and how you felt when they occurred. This will help your doctor determine if there are any underlying causes.
When to Call 911
If your palpitations are accompanied by dizziness, light-headedness, shortness of breath, confusion, chest discomfort, faintness or passing out, you should immediately call 911. These are symptoms of potentially life-threatening conditions such as heart valve disease, heart attack, heart failure and stroke. |
How the Bible was Written
- John Riches
‘How the Bible was Written’ examines some of the features of the Bible's composition. Over what period was it written? How was the text recorded, and what is the relationship between oral and literary traditions? The Old Testament was written over a long period of more than 1,000 years, whilst the New Testament was written in a relatively short period. The literary world and the different means of composition of different types of literature are examined with reference to particular examples of the biblical text. Later writers draw on material from earlier books, and these literary allusions play an important part in the biblical text. |
Powers and roots

Powers are used when we want to multiply a number by itself.

1. Powers

When we wish to multiply a number by itself we use powers, or indices as they are also called. For example, the quantity 7×7×7×7 is usually written as 7^4. The number 4 tells us the number of sevens to be multiplied together. In this example the power, or index, is 4. The number 7 is called the base.

Example: 6^2 = 6×6 = 36. We say that "6 squared is 36", or "6 to the power 2 is 36".

Example: 2^5 = 2×2×2×2×2 = 32. We say that "2 to the power 5 is 32".

Your calculator will be pre-programmed to evaluate powers. Most calculators have a power button, often marked x^y or simply ^. Ensure that you are using your calculator correctly by verifying that 3^11 = 177147.
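If a calculator is not to hand, the same checks can be run in a few lines of Python (an illustrative aside, not part of the original text); the ** operator plays the role of the ^ button.

```python
# Check the powers discussed above using Python's ** operator.
print(7 ** 4)    # 2401: seven multiplied by itself four times
print(6 ** 2)    # 36:  "6 squared is 36"
print(2 ** 5)    # 32:  "2 to the power 5 is 32"
print(3 ** 11)   # 177147: the calculator check suggested in the text
```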
2. Square roots

When 5 is squared we obtain 25. That is, 5^2 = 25. The reverse of this process is called finding a square root. The square root of 25 is 5. This is written as √25 = 5, or 25^(1/2) = 5.

Note also that when -5 is squared we again obtain 25, that is (-5)^2 = 25. This means that 25 has another square root, -5.

In general, a square root of a number is a number which when squared gives the original number. There are always two square roots of any positive number, one positive and one negative. However, negative numbers do not possess any square roots.

Most calculators have a square root button, probably marked √. Check that you can use your calculator correctly by verifying that √79 = 8.8882, to four decimal places. Your calculator will only give the positive square root, but you should be aware that the second, negative square root is -8.8882.

An important result is that the square root of a product of two numbers is equal to the product of the square roots of the two numbers. For example, √(9×4) = √9 × √4 = 3 × 2 = 6.

However, your attention is drawn to a common error which students make. It is not true that √(a + b) equals √a + √b. Substitute some simple values for yourself to see that this cannot be right.
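The product rule for square roots, and the common error that follows it, can both be verified numerically; the short Python sketch below (an added illustration, not from the original text) uses the math module.

```python
import math

a, b = 9.0, 4.0

# The square root of a product equals the product of the square roots.
print(math.sqrt(a * b))             # 6.0
print(math.sqrt(a) * math.sqrt(b))  # 6.0 -> the two agree

# But the square root of a sum is NOT the sum of the square roots.
print(math.sqrt(a + b))             # about 3.6056
print(math.sqrt(a) + math.sqrt(b))  # 5.0 -> the two disagree
```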
Exercises
1. Without using a calculator write down the value of √324.
2. Find the square of the following: a) √2, b) √12.
3. Show that the square of is
Answers
1. 18 (and also -18).
2. a) 2, b) 12.
3. Cube roots and higher roots

The cube root of a number is the number which, when cubed, gives the original number. For example, because 4^3 = 64 we know that the cube root of 64 is 4, written ∛64 = 4. All numbers, both positive and negative, possess a single cube root.

Higher roots are defined in a similar way: because 2^5 = 32, the fifth root of 32 is 2, written ⁵√32 = 2.
1. Without using a calculator find
1. a) 3, b) 5.
4. Surds

Expressions involving roots, for example square roots and cube roots, are also known as surds. Frequently, in engineering calculations it is quite acceptable to leave an answer in surd form rather than calculating its decimal approximation with a calculator.

It is often possible to write surds in equivalent forms by taking any square factors outside the root sign, so that the number left under the root is as small as possible.
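As a further aside (not part of the original text), a computer algebra package can carry out this kind of simplification automatically; the hypothetical SymPy snippet below uses 48 purely as an example value and keeps the answer in exact surd form.

```python
import sympy as sp

# SymPy returns square roots in exact (surd) form and simplifies them.
print(sp.sqrt(48))           # 4*sqrt(3): the square factor 16 is taken outside the root
print(sp.sqrt(9 * 4))        # 6: the root of a perfect square is an integer
print(sp.sqrt(48).evalf())   # 6.92820323...: the decimal approximation, if it is wanted
```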
1. Write the following in their simplest surd form:
2. By multiplying numerator and denominator by , show |
Working with genetically engineered mice -- and especially their whiskers -- Johns Hopkins researchers report they have identified a group of nerve cells in the skin responsible for what they call "active touch," a combination of motion and sensory feeling needed to navigate the external world. The discovery of this basic sensory mechanism, described online April 20 in the journal Neuron, advances the search for better "smart" prosthetics for people, ones that provide more natural sensory feedback to the brain during use.
Study leader Daniel O'Connor, Ph.D., assistant professor of neuroscience at the Johns Hopkins University School of Medicine, explains that over the past several decades, researchers have amassed a wealth of knowledge about the sense of touch. "You can open up textbooks and read all about the different types of sensors or receptor cells in the skin," he says. "However, almost everything we know is from experiments where tactile stimulation was applied to the stationary skin--in other words, passive touch."
Such "passive touch," O'Connor adds, isn't how humans and other animals normally explore their world. For example, he says, people entering a dark room might search for a light switch by actively feeling the wall with their hands. To tell if an object is hard or soft, they'd probably need to press it with their fingers. To see if an object is smooth or rough, they'd scan their fingers back and forth across an object's surface.
Each of these forms of touch combined with motion, he says, is an active way of exploring the world, rather than waiting to have a touch stimulus presented. They each also require the ability to sense a body part's relative position in space, an ability known as proprioception.
While some research has suggested that the same populations of nerve cells, or neurons, might be responsible for sensing both proprioception and touch necessary for this sensory-motor integration, whether this was true and which neurons accomplish this feat have been largely unknown, O'Connor says.
To find out more, O'Connor and his team developed an experimental system with mice that allowed them to record electrical signals from specific neurons located in the skin, during both touch and motion.
The researchers accomplished this, they report, by working with members of a laboratory led by David Ginty, Ph.D., a former Johns Hopkins University faculty member, now at Harvard Medical School, to develop genetically altered mice. In these animals, a type of sensory neuron in the skin called Merkel afferents were mutated so that they responded to touch -- their "native" stimulus, and one long documented in previous research -- but also to blue light, which skin nerve cells don't normally respond to.
The scientists trained the rodents to run on a mouse-sized treadmill that had a small pole attached to the front that was motorized to move to different locations. Before the mice started running, the researchers used their touch-and-light sensitized system to find a single Merkel afferent near each animal's whiskers and used an electrode to measure the electrical signals from this neuron.
Much like humans use their hands to explore the world through touch, mice use their whiskers, explains O'Connor. Consequently, as the animals began running on the treadmill, they moved their whiskers back and forth in a motion that researchers call "exploratory whisking."
Using a high-speed camera focused on the animals' whiskers, the researchers took nearly 55,000,000 frames of video while the mice ran and whisked. They then used computer-learning algorithms to separate the movements into three different categories: when the rodents weren't whisking or in contact with the pole; when they were whisking with no contact; or when they were whisking against the pole.
They then connected each of these movements -- using video snapshots captured 500 times every second -- to the electrical signals coming from the animals' blue-light-sensitive Merkel afferents.
The results show that the Merkel afferents produced action potentials -- the electrical spikes that neurons use to communicate with each other and the brain -- when their associated whiskers contacted the pole. That finding wasn't particularly surprising, O'Connor says, because of these neurons' well-established role in touch.
However, he says, the Merkel afferents also responded robustly when they were moving in the air without touching the pole. By delving into the specific electrical signals, the researchers discovered that the action potentials precisely related to a whisker's position in space. These findings suggest that Merkel afferents play a dual role in touch and proprioception, and in the sensory-motor integration necessary for active touch, O'Connor says.
Although these findings are particular to mouse whiskers, he cautions, he and his colleagues believe that Merkel afferents in humans could serve a similar function, because many anatomical and physiological properties of Merkel afferents appear similar across a range of species, including mice and humans.
Besides shedding light on a basic biological question, O'Connor says, his team's research could also eventually improve artificial limbs and digits. Some prosthetics are now able to interface with the human brain, allowing users to move them using directed brain signals. While this motion is a huge advance beyond traditional static prosthetics, it still doesn't allow the smooth movement of natural limbs. By integrating signals similar to those produced by Merkel afferents, he explains, researchers might eventually be able to create prosthetics that can send signals about touch and proprioception to the brain, allowing movements akin to native limbs.
Other Johns Hopkins researchers who participated in this study include Kyle S. Severson, Duo Xu, Margaret Van de Loo, and Ling Bai.
Funding for this work was provided by the National Institutes for Health under grant numbers R01NS34814 and P30NS050274. O'Connor is supported by the Whitehall Foundation, Klingenstein Fund and the National Institutes of Health under grant number R01NS089652. |
Mars Pathfinder Sojourner Rover
The Mars Pathfinder Sojourner Rover, a lightweight machine on wheels, accomplished a revolutionary feat on the surface of Mars. For the first time, a thinking robot equipped with sophisticated laser eyes and automated programming reacted to unplanned events on the surface of another planet.
After a few days on the Martian surface the NASA controllers turned on Sojourner's hazard avoidance system and asked it to start making some of its own decisions. This hazard avoidance system set the rover apart from all other machines that have explored space. Sojourner made trips between designated points without the benefit of detailed information to warn it of obstacles along the way.
Sojourner moved slowly at one and one half feet per minute and stopped a lot along the way to sense the terrain and process information, but there was no hurry on Mars which is not visited very often.
Sojourner was carried to Mars by Pathfinder which launched on December 4, 1996 and reached Mars on July 4, 1997, directly entering the planet's atmosphere and bouncing on inflated airbags.
Sojourner was designed by a large NASA team lead by Jacob Matijevic and Donna Shirley.
Sojourner traveled a total of about 100 meters (328 feet) in 230 commanded maneuvers, performed more than 16 chemical analyses of rocks and soil, carried out soil mechanics and technology experiments, and explored about 250 square meters (2691 square feet) of the Martian surface. During the mission, the spacecraft relayed an unprecedented 2.3 gigabits of data, including 16,500 images from the lander's camera, 550 images from the rover camera, 16 chemical analyses of rocks and soil, and 8.5 million measurements of atmospheric pressure, temperature and wind.
The flight team lost communication with Sojourner on September 27, after 83 days of daily commanding and data return. In all, the small 10.5 kilogram (23 lb) Sojourner operated for 12 times its expected lifetime of seven days. |
It has long been known that installing white roofs helps reduce heat buildup in cities. But new research indicates that making surfaces more light-reflecting can have a significant impact on lowering extreme temperatures – not just in cities, but in rural areas as well.
Summers in the city can be extremely hot — several degrees hotter than in the surrounding countryside. But recent research indicates that it may not have to be that way. The systematic replacement of dark surfaces with white could lower heat wave maximum temperatures by 2 degrees Celsius or more. And with climate change and continued urbanization set to intensify “urban heat islands,” the case for such aggressive local geoengineering to maintain our cool grows.
The meteorological phenomenon of the urban heat island has been well known since giant cities began to emerge in the 19th century. The materials that comprise most city buildings and roads reflect much less solar radiation – and absorb more – than the vegetation they have replaced. They radiate some of that energy in the form of heat into the surrounding air.
The darker the surface, the more the heating. Fresh asphalt reflects only 4 percent of sunlight compared to as much as 25 percent for natural grassland and up to 90 percent for a white surface such as fresh snow.
Most of the roughly 2 percent of the earth’s land surface covered in urban development suffers from some level of urban heating. New York City averages 1-3 degrees C warmer than the surrounding countryside, according to the U.S. Environmental Protection Agency – and as much as 12 degrees warmer during some evenings. The effect is so pervasive that some climate skeptics have seriously claimed that global warming is merely an illusion created by thousands of once-rural meteorological stations becoming surrounded by urban development.
Climate change researchers adjust for such measurement bias, so that claim does not stand up. Nonetheless, the effect is real and pervasive. So, argues a recent study published in the journal Nature Geoscience, if dark heat-absorbing surfaces are warming our cities, why not negate the effect by installing white roofs and other light-colored surfaces to reflect back the sun’s rays?
Lighter land surfaces “could help to lower extreme temperatures by up to 2 or 3 degrees Celsius,” says one researcher.
During summer heat waves, when the sun beats down from unclouded skies, the creation of lighter land surfaces “could help to lower extreme temperatures… by up to 2 or 3 degrees Celsius” in much of Europe, North America, and Asia, says Sonia Seneviratne, who studies land-climate dynamics at the Swiss Federal Institute of Technology (ETH) in Zurich, and is co-author of the new study. It could save lives, she argues, and the hotter it becomes, the stronger the effect.
Seneviratne is not alone in making the case for boosting reflectivity. There are many small-scale initiatives in cities to make roof surfaces more reflective. New York, for instance, introduced rules on white roofs into its building codes as long ago as 2012. Volunteers have taken white paint to nearly 7 million square feet of tar roofs in the city, though that is still only about 1 percent of the potential roof area.
Chicago is trying something similar, and last year Los Angeles began a program to paint asphalt road surfaces with light gray paint. Outside the United States, cool-roof initiatives in cities such as Melbourne, Australia are largely limited to encouraging owners to cool individual buildings for the benefit of their occupants, rather than trying to cool cities or neighborhoods.
The evidence of such small-scale programs remains anecdotal. But now studies around the world are accumulating evidence that the benefits of turning those 1 percents into 100 percents could be transformative and could save many lives every year.
Keith Oleson of the National Center for Atmospheric Research in Boulder, Colorado looked at what might happen if every roof in large cities around the world were painted white, raising their reflectivity — known to climate scientists as albedo — from a typical 32 percent today to 90 percent. He found that it would decrease the urban heat island effect by a third — enough to reduce the maximum daytime temperatures by an average of 0.6 degrees C, and more in hot sunny regions such as the Arabian Peninsula and Brazil.
Other studies suggest even greater benefits in the U.S. In a 2014 paper, Matei Georgescu of Arizona State University found that “cool roofs” could cut temperatures by up to 1.5 degrees C in California and 1.8 degrees in cities such as Washington, D.C.
But it may not just be urban areas that could benefit from a whitewashing. Seneviratne and her team proposed that farmers could cool rural areas, too, by altering farming methods. Different methods might work in different regions with different farming systems. And while the percentage changes in reflectivity that are possible might be less than in urban settings, if applied over large areas, she argues that they could have significant effects.
In Europe, grain fields are almost always plowed soon after harvesting, leaving a dark surface of soil to absorb the sun’s rays throughout the winter. But if the land remained unplowed, the lightly colored stubble left on the fields after harvesting would reflect about 30 percent of sunlight, compared to only 20 percent from a cleared field. It sounds like a relatively trivial difference, but over large areas of cropland could reduce temperatures in some rural areas on sunny days by as much as 2 degrees C, Seneviratne’s colleague Edouard Davin has calculated.
In North America, early plowing is much less common. But Peter Irvine, a climate and geoengineering researcher at Harvard University, has suggested that crops themselves could be chosen for their ability to reflect sunlight. For instance, in Europe, a grain like barley, which reflects 23 percent of sunlight, could be replaced by sugar beet, an economically comparable crop, which reflects 26 percent. Sometimes, farmers could simply choose more reflective varieties of their preferred crops.
Again, the difference sounds marginal. But since croplands cover more than 10 percent of the earth’s land surface, roughly five times more than urban areas, the potential may be considerable.
Reducing local temperatures would limit evaporation, and so potentially could reduce rainfall downwind.
On the face of it, such initiatives make good sense as countries struggle to cope with the impacts of climate change. But there are concerns that if large parts of the world adopted such policies to relieve local heat waves, there could be noticeable and perhaps disagreeable impacts on temperature and rainfall in adjacent regions. Sometimes the engineers would only be returning reflectivity to the conditions before urbanization, but even so, it could end up looking like back-door geoengineering.
Proponents of local projects such as suppressing urban heat islands say they are only trying to reverse past impacts of inadvertent geoengineering through urbanization and the spread of croplands. Moreover, they argue that local engineering will have only local effects. “If all French farmers were to stop plowing up their fields in summer, the impact on temperatures in Germany would be negligible,” Seneviratne says.
“Local radiative management differs from global geoengineering in that it does not aim at effecting global temperatures [and] global effects would be negligible,” she says. It is “a measure of adaptation.”
But things might not always be quite so simple. Reducing local temperatures would, for instance, limit evaporation, and so potentially could reduce rainfall downwind. A modeling study by Irvine found that messing with the reflectivity of larger areas such as deserts could cause a “large reduction in the intensity of the Indian and African monsoons in particular.” But the same study concluded that changing albedo in cities or on farmland would be unlikely to have significant wider effects.
What is clear is that tackling urban heat islands by increasing reflectivity would not be enough to ward off climate change. Oleson found that even if every city building roof and stretch of urban pavement in the world were painted white, it would only delay global warming by 11 years. But its potential value in ameliorating the most severe consequences of excess heat in cities could be life-saving.
The urban heat island can be a killer. Counter-intuitively, the biggest effects are often at night. Vulnerable people such as the old who are stressed by heat during the day badly need the chance to cool down at night. Without that chance, they can succumb to heat stroke and dehydration. New research published this week underlines that temperature peaks can cause a spike in heart attacks. This appears to be what happened during the great European heat wave of 2003, during which some 70,000 people died, mostly in homes without air conditioning. Doctors said the killer was not so much the 40-degree C daytime temperatures (104 degrees F), but the fact that nights stayed at or above 30 degrees (86 degrees F).
Such urban nightmares are likely to happen ever more frequently in the future, both because of the expansion of urban areas and because of climate change.
Predicted urban expansion in the U.S. this century “can be expected to raise near-surface temperatures 1-2 degrees C… over large regional swathes of the country,” according to Georgescu’s 2014 paper. Similar threats face other fast-urbanizing parts of the world, including China, India, and Africa, which is expected to increase its urban land area six-fold from 1970 to 2030, “potentially exposing highly vulnerable populations to land use-driven climate change.”
Several studies suggest that climate change could itself crank up the urban heat island effect. Richard Betts at Britain’s Met Office Hadley Centre forecasts that it will increase the difference between urban and rural temperatures by up to 30 percent in some places, notable in the Middle East and South Asia, where deaths during heat waves are already widespread.
A combination of rising temperatures and high humidity is already predicted to make parts of the Persian Gulf region the first in the world to become uninhabitable due to climate change. And a study published in February predicted temperatures as much as 10 degrees C hotter in most European cities by century’s end.
No wonder the calls to cool cities are growing.
A city-wide array of solar panels could reduce summer maximum temperatures in some cities by up to 1 degree C.
Another option is not to whitewash roofs, but to green them with foliage. This is already being adopted in many cities. In 2016, San Francisco became the first American city to make green roofs compulsory on some new buildings. New York last year announced a $100-million program for cooling neighborhoods with trees. So which is better, a white roof or a “green” roof?
Evidence here is fragmentary. But Georgescu found a bigger direct cooling effect from white roofs. Vincenzo Costanzo, now of the University of Reading in England, has reached a similar conclusion for Italian cities. But green roofs may have other benefits. A study in Adelaide, Australia, found that besides delivering cooling in summer, they also act as an insulating layer to keep buildings warmer in winter.
There is a third option competing for roof space to take the heat out of cities — covering them in photovoltaic cells. PV cells are dark, and so do not reflect much solar radiation into space. But that is because their business is to capture that energy and convert it into low-carbon electricity.
Solar panels “cool daytime temperatures in a way similar to increasing albedo via white roofs,” according to a study by scientists at the University of New South Wales. The research, published in the journal Scientific Reports last year, found that in a city like Sydney, Australia, a city-wide array of solar panels could reduce summer maximum temperatures by up to 1 degree C.
That is the theory, but there are concerns about whether it will always work in practice. Studies into the impact on local temperatures of large solar farms in deserts have produced some contradictory findings. For while they prevent solar rays from reaching the desert surface, they also act as an insulating blanket at night, preventing the desert sands from losing heat. The net warming effect has been dubbed a “solar heat island.”
The lesson then is that light, reflective surfaces can have a dramatic impact in cooling the surrounding air – in cities, but in the countryside too. Whitewashed walls, arrays of photovoltaic cells, and stubble-filled fields can all provide local relief during the sweltering decades ahead. But policymakers beware. It doesn’t always work like that. There can be unintended consequences, both on temperature and other aspects of climate, like rainfall. Even local geoengineering needs to be handled with care.
Source: Yale Environment
Scientists have discovered a significant number of bugs living in the middle and upper troposphere, the airy layer eight to 15 km above the Earth’s surface. The microbes could have a previously unrecognised impact on cloud formation.
Long distance travel by the airborne organisms may also help spread infections around the world, researchers believe.
The bugs were discovered in air samples scooped up by a DC-8 aircraft flying over both land and sea across the US, Caribbean and western Atlantic.
Scientists are still unsure whether the bacteria and fungi they found routinely inhabit the sky, living off carbon compounds, or are continually borne aloft by winds and air currents.
“We did not expect to find so many micro-organisms in the troposphere, which is considered a very difficult environment for life,” lead researcher Dr Kostas Konstantinidis, from the Georgia Institute of Technology in the US, said. |
Exponential Models in the Sciences
Lesson 8 of 11
Objective: SWBAT Model scientific phenomena with exponential functions in order to solve problems.
To help my students understand the power of a mathematical model, I present them with a bivariate data set that can be modeled with an exponential function. NSpire Exponential Regression Cooling Coffee.docx is an activity that walks students through the steps of performing an exponential regression on cooling data [MP4].
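For teachers who want to show the same fit outside the handheld environment, the short Python sketch below performs a comparable regression; the cooling data and the room temperature are made-up values, and SciPy's curve_fit stands in for the calculator's regression command, fitting the Newton's-law-of-cooling form T(t) = T_room + A·e^(-kt).

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cooling data: time in minutes, coffee temperature in degrees F.
t = np.array([0, 5, 10, 15, 20, 30, 45, 60], dtype=float)
temp = np.array([180, 156, 137, 122, 110, 95, 82, 76], dtype=float)

ROOM = 70.0  # assumed room temperature

def cooling(t, A, k):
    # Newton's law of cooling: the temperature *above room* decays exponentially.
    return ROOM + A * np.exp(-k * t)

(A, k), _ = curve_fit(cooling, t, temp, p0=(110.0, 0.05))
print(f"T(t) = {ROOM:.0f} + {A:.1f} * exp(-{k:.3f} t)")
```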
I ask students to complete this activity independently so that every student gets practice manipulating the graphing calculator. As students work on this activity, I circulate to provide help with the technology as needed [MP6].
The exponential function unit provides many opportunities to connect what students are doing in math class to topics they study in other high school classes. Through conversations with science, social studies, and business teachers at my school I have developed several problem sets that help my students see these connections. WS Exponential Problems in Science.docx is a problem set that focuses on exponential content from biology, chemistry and physics.
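Several of the problems in the set, such as radioactive decay and bacterial growth, reduce to the same continuous model N(t) = N0·e^(kt); the sketch below uses made-up starting values (and the familiar carbon-14 half-life) to show how students move between a half-life and the constant k.

```python
import math

N0 = 100.0          # initial amount, a made-up value
half_life = 5730.0  # years; the carbon-14 half-life

# From N(half_life) = N0/2 = N0 * e^(k * half_life), we get k = ln(1/2) / half_life.
k = math.log(0.5) / half_life
print(f"k = {k:.6e} per year")  # negative, because the quantity decays

# Amount remaining after 10,000 years:
t = 10000.0
print(f"N({t:.0f}) = {N0 * math.exp(k * t):.1f}")  # about 29.8 of the original 100.0
```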
I provide each student with a copy of WS Exponential Problems in Science and make the answer key available through Edmodo. Students spend the remainder of the class working through this packet with my help and the help of their peers.
My students have spent the period delving into some challenging exponential word problems which they will complete for homework. It is likely that only a few of my students will completely understand every problem in the set. However, it is essential that every student can apply continuous growth models of the form N(t) = N0·e^(kt) [MP4]. Exit Ticket Radioactive Decay is a half sheet that I ask each student to complete and turn in to me before they leave class. Evaluating these exit tickets helps me know whether to spend more time in whole-class discussions about these problems, or whether to pull individual students aside for more instruction. |
Galileo Galilei was an Italian astronomer who was admonished for his views on heliocentrism, an astronomical model in which the Earth and planets revolve around the Sun at the center of the Solar System. This was in opposition to geocentrism, which placed the Earth at the center, with all heavenly bodies, including the Sun, revolving around it. In the Christian world prior to Galileo's conflict with the Church, the majority of educated people subscribed to the geocentric view. The Inquisition concluded that his theory could only be supported as a possibility, not as an established fact, and Galileo was found to be "vehemently suspect of heresy". He was forced to recant, his scientific work was placed on the Index Librorum Prohibitorum, and he spent the rest of his life under house arrest. In 1992, Pope John Paul II issued a declaration acknowledging the errors committed by the Catholic Church tribunal that judged his scientific positions.
Operation Spring Awakening was a World War II German operation which began on March 6, 1945, in Hungary. Known as Unternehmen Frühlingserwachen in German, the operation's main objective was to capture the Hungarian oil fields from the Soviets. Spring Awakening was the last important offensive mounted by Germany before the war ended. It was a desperate and brave attempt to keep control of vital oil supplies.
The German plan for the offensive of March 1945 had been kept in air-tight secrecy. Frühlingserwachen was initiated in the early hours of March 6, 1945, with the 6th SS Panzer Army leading the attack, which was concentrated in the Lake Balaton area where the important oil reserves were situated. Although the terrain was muddy and rough, the Germans still managed to carry out their attack effectively, taking the Soviets by surprise. The Germans made fast and sizable territorial gains, on a scale perhaps not seen since Operation Barbarossa.
Nevertheless, the 6th SS Panzer, under the command of Sepp Dietrich, and the 2nd Panzer Army lacked two vital support elements: air superiority and logistics. This made Operation Spring Awakening grind to a halt by March 14. Highly outnumbered by the Soviet forces, the Germans began to fall back to their initial positions by March 16, when the Red Army launched a massive counterattack. |
Year 6: A Diverse And Connected World is part of the Australian Geography Series, which comprises nine books in total. This book has been written specifically for students in Year 6 who are living in Australia and studying Geography. The activity book is arranged into three sections: Connecting Places, A Global Study and Environmental Hazards. Each section is closely linked to the Australian National Curriculum.
The first section, Connecting Places, is designed to raise students’ awareness that places are linked to one another. It explores Australia’s connection to the Asia region through trade, tourism, aid and historic ties. Students are also asked in this section to use geographic tools to locate different parts of Asia on the map.
The second section, A Global Study, examines different regions in the world and their populations. Students will be asked to explore concepts such as: why citizens in some countries have higher living standards than others and how the natural resources in a place generate industries and employment. Students will reflect on the causes of poverty in the world and research programmes in Australia and other parts of the world that aim to bridge the gap between developed and developing countries. This section also considers the similarities and differences in religions between Australia and selected countries of the Asia region.
The third section, Environmental Hazards, focuses on natural disasters that affect people and places and our responses to these hazards. Tasks will require students to assess the risks of various environmental hazards and evaluate action plans for survival. A major component of this section is a case study on the Black Saturday bushfires in Victoria. This environmental disaster will be considered from multiple perspectives by students, who will then synthesise their research findings to suggest prevention and management strategies.
Year 6: A Diverse And Connected World is a teacher-friendly resource for 11-12 year olds studying Geography. An inquiry-based approach is applied in the activities and research tasks. Students are challenged to weigh up the visual and graphic data presented, to form their own understandings about how people and places are connected to one another and the world.
Author: Lisa Craig |
Galileo was born in Pisa, Italy on February 15, 1564. His father, Vincenzo Galilei, was a musician. Galileo's mother was Giulia degli Ammannati. Galileo was the first of six (though some people believe seven) children. His family belonged to the nobility but was not rich. In the early 1570s, he and his family moved to Florence. Galileo was never married. However, he did have a brief relationship with Marina Gamba, a woman he met on one of his many trips to Venice. Marina lived in Galileo's house in Padua where she bore him three children. His two daughters, Virginia and Livia, were both put in convents where they became, respectively, Sister Maria Celeste and Sister Arcangela. In 1610, Galileo moved from Padua to Florence where he took a position at the Court of the Medici family. He left his son, Vincenzio, with Marina Gamba in Padua. In 1613, Marina married Giovanni Bartoluzzi, and Vincenzio joined his father in Florence.
In 1592, Galileo was appointed professor of mathematics at the University of Padua. While teaching there, he frequently visited a place called the Arsenal, where Venetian ships were docked and loaded. Galileo had always been interested in mechanical devices. Naturally, during his visits to the Arsenal, he became fascinated by nautical technologies, such as the sector and shipbuilding. In 1593, he was presented with the problem involving the placement of oars in galleys. He treated the oar as a lever and correctly made the water the fulcrum. A year later, he patented a model for a pump. His pump was a device that raised water by using only one horse. |
What is 7/7 plus 4/3?
It can sometimes be difficult to add fractions, such as 7/7 plus 4/3.
But it's no problem! We have displayed the answer below:
7/7 + 4/3 = 2 1/3
How did we solve the problem above? When we add two fractions, such as 7/7 + 4/3, we make sure that the two denominators are the same and then we simply add the numerators.
In cases where the denominators are not the same, we find the lowest common denominator and adjust the numerators so that each fraction keeps its value.
We also simplify the answers to fraction problems whenever possible.
How To Add Fractions
Learn how to calculate 7/7 + 4/3. Go here for step-by-step instructions on how to add fractions. |
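If you would rather check a sum like this with a short program, Python's built-in fractions module carries out the same steps (find a common denominator, add the numerators, simplify). This is just an illustrative sketch and is not part of the original page.

```python
from fractions import Fraction

a = Fraction(7, 7)   # reduces to 1
b = Fraction(4, 3)
total = a + b        # the common denominator is handled automatically

print(total)                                        # 7/3
whole, remainder = divmod(total.numerator, total.denominator)
print(f"{whole} {remainder}/{total.denominator}")   # 2 1/3
```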
Neurodegenerative diseases damage and destroy neurons, ravaging both mental and physical health. Parkinson’s disease, which affects over 10 million people worldwide, is no exception. The most obvious symptoms of Parkinson’s disease arise after the illness damages a specific class of neuron located in the midbrain. The effect is to rob the brain of dopamine—a key neurotransmitter produced by the affected neurons.
In new research, Jeffrey Kordower and his colleagues describe a process for converting non-neuronal cells into functioning neurons able to take up residence in the brain, send out their fibrous branches across neural tissue, form synapses, dispense dopamine and restore capacities undermined by Parkinson’s destruction of dopaminergic cells.
The current proof-of-concept study reveals that one group of experimentally engineered cells performs optimally in terms of survival, growth, neural connectivity, and dopamine production, when implanted in the brains of rats. The study demonstrates that the result of such neural grafts is to effectively reverse motor symptoms due to Parkinson’s disease.
Stem cell replacement therapy represents a radical new strategy for the treatment of Parkinson’s and other neurodegenerative diseases. The futuristic approach will soon be put to the test in the first of its kind clinical trial, in a specific population of Parkinson’s disease sufferers, bearing a mutation in the gene parkin. The trial will be conducted at various locations, including the Barrow Neurological Institute in Phoenix, with Kordower as principal investigator.
The work is supported through a grant from the Michael J. Fox Foundation.
“We cannot be more excited by the opportunity to help individuals who suffer from this genetic form of Parkinson’s disease, but the lessons learned from this trial will also directly impact patients who suffer from sporadic, or non-genetic forms of this disease,” Kordower says.
Kordower directs the ASU-Banner Neurodegenerative Disease Research Center at Arizona State University and is the Charlene and J. Orin Edson Distinguished Director at the Biodesign Institute. The new study describes in detail the experimental preparation of stem cells suitable for implantation to reverse the effects of Parkinson’s disease.
The research appears in the current issue of the journal npj Regenerative Medicine.
New perspectives on Parkinson’s disease
You don’t have to be a neuroscientist to identify a neuron. Such cells, with their branching arbor of axons and dendrites are instantly recognizable and look like no other cell type in the body. Through their electrical impulses, they exert meticulous control over everything from heart rate to speech. Neurons are also the repository of our hopes and anxieties, the source of our individual identity.
Degeneration and loss of dopaminergic neurons causes the physical symptoms of rigidity, tremor, and postural instability, which characterize Parkinson’s disease. Additional effects of Parkinson’s disease can include depression, anxiety, memory deficit, hallucinations and dementia.
Due to an aging population, humanity is facing a mounting crisis of Parkinson’s disease cases, with numbers expected to swell to more than 14 million globally by 2040. Current therapies, which include use of the drug L-DOPA, are only able to address some of the motor symptoms of the disease and may produce serious, often intolerable side effects after 5-10 years of use.
There is no existing treatment capable of reversing Parkinson’s disease or halting its pitiless advance. Far-sighted innovations to address this pending emergency are desperately needed.
A (pluri) potent weapon against Parkinson’s
Despite the intuitive appeal of simply replacing dead or damaged cells to treat neurodegenerative disease, the challenges for successfully implanting viable neurons to restore function are formidable. Many technical hurdles had to be overcome before researchers, including Kordower, could begin achieving positive results, using a class of cells known as stem cells.
The interest in stem cells as an attractive therapy for a range of diseases rapidly gained momentum after 2012, when John B. Gurdon and Shinya Yamanaka shared the Nobel Prize for their breakthrough in stem cell research. They showed that mature cells can be reprogrammed, making them “pluripotent”—or capable of differentiating into any cell type in the body.
These pluripotent stem cells are functionally equivalent to fetal stem cells, which flourish during embryonic development, migrating to their place of residence and developing into heart, nerve, lung, and other cell types, in one of the most remarkable transformations in nature.
Adult stem cells come in two varieties. One type can be found in fully developed tissues like bone marrow, liver, and skin. These stem cells are few in number and generally develop into the type of cells belonging to the tissue they are derived from.
The second kind of adult stem cells (and the focus of this study) are known as induced pluripotent stem cells (iPSCs). The technique for producing the iPSCs used in the study occurs in two phases. In a way, the cells are induced to time travel, initially, in a backward and then a forward direction.
First, adult blood cells are treated with specific reprogramming factors that cause them to revert to embryonic stem cells. The second phase treats these embryonic stem cells with additional factors, causing them to differentiate into the desired target cells—dopamine-producing neurons.
“The major finding in the present paper is that the timing in which you give the second set of factors is critical,” Kordower says. “If you treat and culture them for 17 days, and then stop their divisions and differentiate them, that works best.”
Pitch perfect neurons
The study’s experiments included iPSCs cultured for 24 and 37 days, but those cultured for 17 days prior to their differentiation into dopaminergic neurons were markedly superior, capable of surviving in greater numbers and sending out their branches over long distances. “That's important,” Kordower says, “because they're going to have to grow long distances in the larger human brain and we now know that these cells are capable of doing that.”
Rats treated with the 17-day iPSCs showed remarkable recovery from the motor symptoms of Parkinson’s disease. The study further demonstrates that this effect is dose dependent. When a small number of iPSCs were grafted into the animal brain, recovery was negligible, but a large complement of cells produced more profuse neural branching, and complete reversal of Parkinson’s symptoms.
The initial clinical trial will apply iPSC therapy to a group of Parkinson’s patients bearing a particular genetic mutation, known as a Parkin mutation. Such patients suffer from the typical symptoms of motor dysfunction found in general or idiopathic Parkinson’s, but do not suffer from cognitive decline or dementia. This cohort of patients provides an ideal testing ground for cell replacement therapy. If the treatment is effective, larger trials will follow, applying the strategy to the version of Parkinson’s affecting most patients stricken with the disease.
Further, the treatment could potentially be combined with existing therapies to treat Parkinson’s disease. Once the brain has been seeded with dopamine-producing replacement cells, lower doses of drugs like L-DOPA could be used, mitigating side effects, and enhancing beneficial results.
The research sets the stage for the replacement of damaged or dead neurons with fresh cells for a broad range of devastating diseases.
“Patients with Huntington's disease or multiple system atrophy or even Alzheimer’s disease could be treated in this way for specific aspects of the disease process,” Kordower says.
Journal: npj Regenerative Medicine
Article: Optimizing maturity and dose of iPSC-derived dopamine progenitor cell therapy for Parkinson’s disease
7.73 MB | 64 pages
Word Problems are a large part of the Common Core and these are great for Interactive Notebooks. Not only do students need to know how to solve them, but they also need to know how to show and share their thinking.
This resource contains 52 different tasks for your students to solve and prove their thinking. Each task is aligned to a Common Core standard and labeled with that standard.
Students cut and glue the word problem into their Interactive Notebook. (They have been created 2 to a page to save paper and make it more manageable for you, the teacher.) Students use the area below the problem to show their thinking and then the answer line to share their final answer.
The set includes 2 problems for each of the Operations and Algebraic Thinking, Numbers in Base Ten, Measurement & Data, and Geometry standards for second grade.
The standard model of particle physics describes the elementary particles (the smallest building blocks of our world) and the forces that act between them: the electromagnetic force, the strong force (which holds together the atomic nucleus), and the weak force, which is responsible for certain forms of radioactive decay. The matter building blocks comprise the quarks (the particles that make up protons and neutrons) and the leptons, which include the electron.
The standard model is the most precise and successful theory ever developed—in some cases, theory and experiment agree out to ten significant digits. The only missing building block of the model is the famous—and, so far, elusive—Higgs particle, which is needed to give mass to all other elementary particles.
Despite its success, most physicists are convinced that the standard model cannot be the ultimate theory. The most apparent problem with the model is that it cannot explain dark matter. Over the last ten years, several experiments established independently that the visible matter (of which humans, the earth, the solar system, and all galaxies are made) constitutes only a small fraction of all matter in the universe. What constitutes the remaining, missing matter—the dark matter—remains one of the great mysteries of modern physics.
|Geographical Range||Northern South America|
|Habitat||Forests, woodlands, plains, savannahs|
|Scientific Name||Epicrates cenchria cenchria|
The Brazilian rainbow boa is the largest of the rainbow boas, reaching six or more feet in length. Rainbow boas get their name from the multicolored sheen of their skin, caused by light reflecting off tiny ridges on their scales.
Rainbow boas prowl for food at night and sleep during the day. Although they usually rest in a tree or bush, they spend most of their waking time on the ground. They feed on birds, their eggs, small mammals, lizards, and frogs. Like all boa constrictors, rainbow boas kill their prey by suffocating it as they squeeze the victim's body with their muscular coils. |
Legionella longbeachae infection - including symptoms, treatment and prevention
Many different species of bacteria called Legionella are commonly found in the environment and some of these are known to cause illness in people. Infection by Legionella causes a disease known as legionellosis.
Legionella longbeachae infection is a notifiable condition1
How Legionella longbeachae is spread
Legionella longbeachae (L. longbeachae) can be found in potting mixes, compost heaps and composted animal manures. How L. longbeachae are spread is uncertain, but it is thought that they are breathed in or spread from hand to mouth. The bacteria can remain on hands contaminated by handling potting mix for periods of up to 1 hour. They can be readily removed from the hands by washing. Legionella infection cannot be caught from other people or animals. The risk of L. longbeachae infection is not limited to gardeners, but the use of potting mixes, composts and other soils puts them at greater risk.
Signs and symptoms
L. longbeachae generally causes infection of the lung (pneumonia), which is a severe illness.
Symptoms of Legionella infection may include:
- chest pain
People of any age may be infected, but the disease is more common in middle aged and older people and people whose immune system is weak. Men are affected more frequently than women.
Risk of infection is increased by:
- chronic heart or lung disease
- kidney failure
- some forms of cancer
- immunosuppression, especially if on steroid medication
- being 50 years or older.
Diagnosis is usually made by a series of blood tests. The bacteria may sometimes be grown from a sample of sputum (phlegm) or lung fluid, or detected using other special tests.
Incubation period (time between becoming infected and developing symptoms)
2 to 10 days, usually 5 to 6 days.
Infectious period (time during which an infected person can infect others)
Person-to-person spread does not occur.
Antibiotic treatment may be prescribed by the treating doctor. Some cases may require admission to hospital.
Exclusion from childcare, preschool, school or work is not necessary.
To minimise the risk of exposure when handling garden mixes (bagged or unbagged) such as potting mix, mulches, composts and garden soils, gardeners should take the following precautions:
- read the warning on bagged mixes and follow the manufacturer’s instructions
- avoid inhaling airborne particles such as dust or mists
- avoid hand-to-mouth contact
- open bagged mixes in a well-ventilated space
- moisten the garden mix, avoiding the inhalation of airborne particles
- always wash hands after using garden mixes, even if gloves have been worn
- store bagged mixes in a cool dry place.
Additional measures that can be taken to reduce risk include wearing a face mask and gloves.
- Legionella regulations, guidelines and fact sheets
- When you have a notifiable condition
- Keeping areas clean
1 – In South Australia the law requires doctors and laboratories to report some infections or diseases to SA Health. These infections or diseases are commonly referred to as 'notifiable conditions'. |
Cognitive constructivism has its roots in cognitive psychology and biology. It is an approach to education that emphasizes the individual learner as a “maker of meanings” and the ways knowledge is created in order to adapt to the world, with the mechanisms of accommodation and assimilation being key to this processing.
The table below compares the traditional classroom to the constructivist one.
|Traditional Classroom||Constructivist Classroom|
|Curriculum begins with the parts of the whole. Emphasizes basic skills.||Curriculum emphasizes big concepts, beginning with the whole and expanding to include the parts.|
|Strict adherence to fixed curriculum is highly valued.||Pursuit of student questions and interests is valued.|
|Materials are primarily textbooks and workbooks.||Materials include primary sources of material and manipulative materials.|
|Learning is based on repetition.||Learning is interactive, building on what the student already knows.|
|Teachers disseminate information to students; students are recipients of knowledge.||Teachers have a dialogue with students, helping students construct their own knowledge.|
|Teacher's role is directive, rooted in authority.||Teacher's role is interactive, rooted in negotiation.|
|Assessment is through testing, correct answers.||Assessment includes student works, observations, and points of view, as well as tests. Process is as important as product.|
|Knowledge is seen as inert.||Knowledge is seen as dynamic, ever changing with our experiences.|
|Students work primarily alone.||Students work primarily in groups.|
Piaget's theory is fundamental to constructivist education. His work expanded our understanding of child development and of learning as a process of construction, and it has underpinned much of the theory relating to constructivism (Sawyer, 2006).
One of Piaget's most prominent contributions is his explanation of how knowledge develops. A key assumption of constructivism is that mental structures are created from earlier structures, not directly from environmental information (Schunk, 2000). From this perspective, knowledge is not passively transmitted from the environment to the individual, but rather is the result of active cognizing from the cumulative experiences of the individual. To further illustrate the internal and individual construction of knowledge, Piaget defined three essential components, namely equilibration, assimilation and accommodation, to describe the growth of knowledge. In Piagetian terms,
- Equilibration is the central learning mechanism and the motivating force behind cognitive development; it refers to the optimal state in which the cognitive structures are consistent with the external environment.
- Assimilation and Accommodation are complementary processes to deal with the cognitive conflict.
In this way, the linked processes are the means by which the state of equilibrium (or adaptation) is sought. The child is either assimilating a new situation into previously acquired skills in order to understand it, or accommodating (adjusting) those skills to better understand the situation.
Cognitive constructivism is linked to instructional approaches and strategies such as: |
Test-driven development (TDD) is a technique of using automated unit tests to drive the design of software and force decoupling of dependencies. The result of using this practice is a comprehensive suite of unit tests that can be run at any time to provide feedback that the software is still working.
The concept is to “get something working now and perfect it later.” After each test, refactoring is done and then the same or a similar test is performed again. The process is iterated as many times as necessary until each unit is functioning according to the desired specifications.
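As a minimal sketch of that cycle, the test below would be written first and would fail (red) until the function is implemented (green), after which the code can be refactored with the test as a safety net. The add function and its module are invented purely for illustration.

```python
import unittest

def add(a, b):
    # Simplest implementation that makes the test pass: "get something working now".
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        # Written before the implementation; it drives the design of add().
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```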
ATDD stands for Acceptance Test Driven Development; it is also, less commonly, designated Storytest Driven Development (STDD). It is a technique used to bring customers into the test design process before coding has begun. It is a collaborative practice where users, testers, and developers define automated acceptance criteria. ATDD helps to ensure that all project members understand precisely what needs to be done and implemented. Failing tests provide quick feedback that the requirements are not being met. The tests are specified in business domain terms. Each feature must deliver real and measurable business value: indeed, if your feature doesn’t trace back to at least one business goal, then you should be wondering why you are implementing it in the first place.
Behavior-Driven Development (BDD) combines the general techniques and principles of TDD with ideas from domain-driven design. BDD is a design activity where you build pieces of functionality incrementally guided by the expected behavior. The focus of BDD is the language and interactions used in the process of software development. Behavior-driven developers use their native language in combination with the language of Domain Driven Design to describe the purpose and benefit of their code.
A team using BDD should be able to provide a significant portion of “functional documentation” in the form of User Stories augmented with executable scenarios or examples. BDD is usually done in a very English-like language, which helps the domain experts understand the implementation rather than exposing code-level tests. Scenarios are usually defined in a GWT format: GIVEN, WHEN and THEN.
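To show the shape of a GWT scenario without introducing a BDD tool, here is a plain Python test whose steps are labelled Given/When/Then in comments; real BDD frameworks such as Cucumber or behave express the same scenario in an English-like feature file. The bank-account example is hypothetical.

```python
import unittest

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount

class WithdrawCashScenario(unittest.TestCase):
    def test_withdrawal_reduces_balance(self):
        # GIVEN an account with a balance of 100
        account = Account(balance=100)
        # WHEN the customer withdraws 30
        account.withdraw(30)
        # THEN the remaining balance is 70
        self.assertEqual(account.balance, 70)

if __name__ == "__main__":
    unittest.main()
```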
TDD is rather a paradigm than a process. It describes the cycle of writing a test first, and application code afterwards – followed by an optional refactoring. But it doesn’t make any statements about: Where do I begin to develop? What exactly should I test? How should tests be structured and named? When your development is Behavior-Driven, you always start with the piece of functionality that’s most important to your user.
TDD and BDD also differ in language: BDD tests are written in an English-like language.
BDD focuses on the behavioral aspect of the system unlike TDD that focuses on the implementation aspect of the system.
ATDD focuses on capturing requirements in acceptance tests and uses them to drive the development. (Does the system do what it is required to do?)
BDD is customer-focused while ATDD leans towards the developer-focused side of things like [Unit]TDD does. This allows much easier collaboration with non-techie stakeholders, than TDD.
TDD tools and techniques are usually much more techie in nature, requiring that you become familiar with the detailed object model (or in fact create the object model in the process, if doing true test-first canonical TDD). The typical non-programming executive stakeholder would be utterly lost trying to follow along with TDD.
BDD gives a clearer understanding as to what the system should do from the perspective of the developer and the customer.
TDD allows a good and robust design; still, your tests can be very far away from the users' requirements. BDD is a way to ensure consistency between requirements and the developer tests.
Scientists in Spain say they have observed a record-breaking impact on the Moon. They spotted a meteorite with a mass of about half a tonne crashing into the lunar surface last September. The collision, they say, would have generated a flash of light so bright that it would have been easily visible from Earth. The findings of the Spanish scientists were reported in the Monthly Notices of the Royal Astronomical Society.
Scientists say that this is the largest, brightest impact ever observed on the Moon. The explosive strike was spotted by the Moon Impacts Detection and Analysis System (MIDAS), using telescopes in southern Spain, on 11 September at 20:07 (British time).
Most commonly, lunar impacts have a very short duration, sometimes just a fraction of a second. This particular collision, however, lasted over eight seconds. The brightness of the impact was roughly the same as that of the Pole Star, which makes this the brightest lunar impact ever recorded from Earth.
The researchers say a lump of rock weighing about 400kg slammed into the surface of the moon at 61,000km/h. They believe that this dense mass hit the lunar surface with energy equivalent to about 15 tonnes of TNT. This is about three times more explosive than another lunar impact recorded by NASA last March. |
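A rough back-of-the-envelope check of those figures can be done with the kinetic energy formula ½mv², converting the result to tonnes of TNT (using the standard 4.184 GJ per tonne). The small gap between this estimate and the quoted 15 tonnes is expected, given rounding in the reported mass and speed.

```python
mass = 400                       # kg
speed = 61000 / 3.6              # 61,000 km/h converted to m/s (~16,900 m/s)

kinetic_energy = 0.5 * mass * speed ** 2    # joules
tnt_equivalent = kinetic_energy / 4.184e9   # 1 tonne of TNT = 4.184e9 J

print(round(tnt_equivalent, 1))  # roughly 14 tonnes of TNT
```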
Currents are commonly measured with sound. There are several different ways to measure currents with sound. An instrument called an Acoustic Doppler Current Profiler or ADCP is often used to measure the current in specific places like shipping channels, rivers and streams, and at buoys. They are also called Acoustic Doppler Profilers (ADP). ADCPs can be placed on the bottom of the ocean, attached to a buoy or mounted on the bottom of ships.
RAFOS floats (SOFAR spelled backwards) also use sound to measure currents. RAFOS Floats are typically used in the open ocean to measure a current like the Gulf Stream.
Acoustic Doppler Current Profiler
An ADCP sends out a sound pulse. The sound pulse is at a very high frequency, from 40kHz to 3,000 kHz. The human ear can hear frequencies up to 20kHz and even dolphins only hear frequencies up to 120kHz. At such high frequencies the wavelength is very small, about 6 mm to 0.5mm.
The sound pulse from the ADCP will reflect off small particles in the water. These small particles may be fine silt or small living creatures like plankton. Even very clear water has many small particles in it. The ADCP listens with a hydrophone for the sound that is bounced off the small particles.
The measurement of currents with sound depends on the Doppler effect. The Doppler effect is a change in frequency of a sound due to the motion of the source of the sound or the motion of the listener. The most common example of the Doppler effect is the change in frequency of a train whistle. As the train comes toward you, the frequency increases. This Doppler effect is because the motion of the train is squeezing the sound waves. As the train moves away, the frequency decreases because the train's movement is stretching out the sound waves. The Doppler shift also occurs in the water.
In the animation below, the sound source is moving toward Observer B and away from observer A. Observer B will hear a higher frequency sound and Observer A will hear a lower frequency sound.
The ADCP sends out a sound that reflects off small particles and returns to the ADCP. If those particles are in a current, then those particles are moving with the current. There will be a Doppler shift in the frequency of the sound that reflects off the small particles and returns to the ADCP. That Doppler shift can be used to calculate the current speed. Most ADCPs have 3 or 4 sound sources that work together. By using several sources, the ADCP can tell the direction of the current as well as its speed. The ADCP can also tell at what depths in the water column the current is moving by how long it takes the sound to return to the ADCP.
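The arithmetic behind that last step is compact enough to sketch. Assuming the standard two-way Doppler relation for sound scattered back from moving particles, shift = 2 · f0 · (v / c), and a sound speed of roughly 1500 m/s in seawater, the current speed along a beam follows directly; the frequencies below are illustrative, not taken from a real instrument.

```python
SOUND_SPEED = 1500.0   # m/s, approximate speed of sound in seawater

def radial_velocity(transmit_freq_hz, doppler_shift_hz):
    """Current speed along the beam, from the two-way relation shift = 2 * f0 * (v / c)."""
    return doppler_shift_hz * SOUND_SPEED / (2.0 * transmit_freq_hz)

# Illustrative example: a 600 kHz ADCP measuring a 400 Hz Doppler shift
print(radial_velocity(600e3, 400.0))   # 0.5 m/s along that beam
```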
RAFOS floats (SOFAR spelled backward) are floating instruments designed to move with the water and track the water's movements. The float contains a hydrophone and signal processing circuits, a microprocessor and a battery. The RAFOS float has an accurate clock with which to determine the arrival times of acoustic transmissions. The RAFOS Float keeps track of its own position by listening for the signal from sound sources in the water near the study area. The RAFOS Float uses the time of travel and the phase of the sound to determine its position. Because the RAFOS Float moves with the current, the float's position tracks the path of the current.
A key element of the RAFOS float is that the instrument does not have to be retrieved for its data to be analyzed. The float is designed to return to the surface and telemeter its data to a satellite system at the end of each mission. The RAFOS float can be weighted to float at any depth. |
Tutoring chemistry, the periodic table is important. The tutor mentions a peculiarity of it.
The transition metals are the elements in the middle of the periodic table, starting the fourth row from the top. Their first element is scandium (Sc).
On some periodic tables, you’ll notice that scandium (Sc, 21) is under group IIIB, then follows titanium (Ti, 22) under IVB, and so on. However, if you continue across, you’ll notice that copper (Cu, 29) is under IB, then zinc (Zn, 30) is under IIB. Why do IB and IIB appear at the right side, while IIIB appears at the left?
Beginning with scandium, the 3d subshell is being filled, but 4s, in the shell above, already is. Scandium is 3d14s2. At nickel (Ni, 28), the 3d subshell has 8 electrons, the 4s, 2. However, at copper (Cu, 29), the 3d subshell gains two electrons to reach 3d10, while 4s drops to 4s1. Zinc has 3d104s2. Perhaps it’s the refilling of the outer s subshell that defines IB and IIB at the right side of the transition metals.
In the next period, silver (Ag, 47) has 5s1, while cadmium (Cd, 48) has 5s2. However, the filling of 4d happened back at palladium (Pd, 46).
One more period down, gold (Au, 79) has 6s1, while mercury (Hg, 80) has 6s2.
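For readers who want the pattern in one place, here is a small summary of the ground-state valence configurations mentioned above, written out as a Python dictionary; the configurations are the standard textbook ones, with the core abbreviated by the preceding noble gas.

```python
# Outer configurations of the elements discussed above (IB/IIB and their neighbours).
configurations = {
    "Ni (28)": "[Ar] 3d8 4s2",
    "Cu (29)": "[Ar] 3d10 4s1",
    "Zn (30)": "[Ar] 3d10 4s2",
    "Pd (46)": "[Kr] 4d10",          # the 5s subshell is empty here
    "Ag (47)": "[Kr] 4d10 5s1",
    "Cd (48)": "[Kr] 4d10 5s2",
    "Au (79)": "[Xe] 4f14 5d10 6s1",
    "Hg (80)": "[Xe] 4f14 5d10 6s2",
}

for element, config in configurations.items():
    print(f"{element}: {config}")
```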
Mortimer, Charles E. Chemistry, 6th ed. Belmont: Wadsworth, 1986.
Jack of Oracle Tutoring by Jack and Diane, Campbell River, BC. |
The recent study of the environmental impact of certain genetically modified crops did not give the full picture (26 March, p 6). It only considered the impact on wildlife within the area under consideration and did not take account of differing crop yields.
As conventional farms are likely to produce less per hectare than GM farms, and organic farms around 20 per cent less again, in order to produce the same amount of food, land elsewhere must be cultivated. A full environmental impact study would take into account the greater amount of natural habitat lost when low-yield production methods are used.
The total global demand for food is rising because of population increase and higher living standards. The biggest threat to biodiversity, and the main cause of the increase in extinctions, is loss of habitat. The greatest reason for loss of habitat is land being converted to farming. Therefore low-yield ...
Chrissie spoke to us about using debate, group discussion and oral interpretation to help learners develop their speaking skills.
She informed us of research results concerning debate. According to the research, it improves formal functional language and develops critical thinking, prioritizing, listening, summarizing, expressing disagreement, extended discourse, expressing opinion and justifying. Group discussion in turn improves interactive communication, extended discourse, listening, expressing polite disagreement, facilitating discussion, summarizing, interrupting, research and critical thinking. Oral interpretation (reading aloud) requires good phonology, body language, manner of expression and interpretation of text.
Then she took us to the children who participated. Before they took part in debates and group discussions, they were given useful language which they were to use. It gave us encouragement to see 11-year-olds debating very politely, using precise language to present and argue their points, presenting their arguments logically and coherently, and then participating in a group discussion, presenting results of research they had undertaken, asking for further defining… and all in excellent English.
By George Raptopoulos |
The most common birth defects seen in infants and newborns are those related to the heart. Such abnormalities are seen in almost one in 100 pregnancies.
The diagnosis may be made during pregnancy or sometime after the birth of the baby. The diagnosis usually involves a paediatric doctor hearing a heart murmur. A heart murmur happens to be an abnormal heart sound. Once this is suspected, a cardiologist performs an echocardiogram, and a confirmation of whether the murmur is from an abnormality in the heart or is an innocent murmur can be made.
An innocent murmur is a murmur, which though present is not associated with a heart abnormality i.e. the heart is innocent and normal though there is a murmur. This is a fairly common situation. If an abnormality is noted, often it is a condition of the heart that does not require immediate treatment or surgery. Occasionally, the abnormality may warrant intervention or surgery.
Types of heart disease
There are two main types of heart disease: ones in which the baby turns blue and the ones in which the baby does not turn blue. Almost all conditions in which the baby turns blue require surgical treatment. For others, surgery may not be required at times, or it may be treatable by balloon angioplasty or device closure. Both these methods are non-surgical methods. All major defects require surgery to be done.
The most common defect of the heart is a ‘hole in the heart’. An isolated hole in the heart (which could be a ventricular or atrial defect) will require treatment. The hole is between chambers carrying red and blue blood (red signifying ‘with oxygen’ and blue signifying ‘without oxygen’). The condition of an isolated hole in the heart should not be confused with conditions where hole is present, associated with many other abnormalities in the heart, which have varying treatment and outlook.
The outlook of children with a hole in the heart is very good, irrespective of whether it is closed by surgery or intervention. Many conditions now can be treated without surgery. The closure of such defects can be done by an angioplasty technique similar to the one in adults to place stents.
Common problems in children include a hole between the lower chambers (called VSD). The wall between upper (ASD) or lower (VSD) chambers separates red from the blue blood. A hole would result in extra blood flow to lungs. This makes the child have more chest infections; the child gains weight with difficulty and feeding also becomes a problem.
On the other hand, the child could be blue when in addition to a hole in the heart there is a blockage of blood flow to the lungs (ToF). This is the most common condition in which the baby becomes blue. Such conditions always require surgery to be done. Other defects in which the child becomes blue include conditions where red blood from lung (with oxygen) drains abnormally into blue blood; or, the tubes coming out of the heart carrying red and blue blood get switched whereby the body receives blue blood wrongly and the lungs get red blood. These conditions usually require a single operation and the child becomes normal. Finally, it is in a condition when one of the valves of the pump is not normal that the child requires more than one operation in his/her lifetime, and may affect the quality of life or the life span.
Symptoms of congenital heart disease
Children may show symptoms in many ways when they have a birth heart defect. The most common are seen in infancy:
- In infancy, children may have several symptoms that parents may recognise as not being normal. For example, a child may take too long to feed. A child may sweat while feeding, or may not gain weight in spite of feeding. The child also may have a faster breathing rate, and rarely parents may complain that the child’s heart rate is faster.
- The child may appear blue. Sometimes they may gradually become bluer and reach a point where they may not be able to walk. The child may have to squat for some time before continuing to walk.
- A newborn baby may require oxygen and without that may not maintain a normal oxygen level.
- Occasionally, the child may show symptoms that require emergency treatment. This may happen more so in the newborn age group when the symptoms may vary between:
- A very blue baby
- A baby with low blood pressure or in shock
- A baby with very rapid breathing or breathing difficulty
- Older children may have fainting episodes, which may be the only indicator of a major underlying heart problem. These conditions can occasionally be fatal. This is referred to as an acute life threatening heart problem. These are electrical problems of the heart. A small child may not be able to verbalise what the problem is.
Sometimes, the heart disease may not be picked up till late in life and such symptoms are called adult symptoms of congenital heart disease. |
Stoichiometry Practice Worksheet Answer Key. Start filling out the blanks according to the instructions: Balancing equations and simple stoichiometry key balance the following equations.
Write and balance the chemical equation. Stoichiometry work 2 answers pdf stoichiometry work 1 answers chemistry as fun and games stoichiometry problem 2 final practice examination answer key. Stoichiometry tutorial easy step by step video review problems explained crash chemistry academy chemistry lessons science chemistry teaching chemistry.
Read All The Field Labels Carefully.
Some of the worksheets for this concept are: stoichiometry unit grade 11 test pdf, stoichiometry practice work, chapter 6 balancing stoich work and key, chemistry 11 stoichiometry work 2 answers pdf, stoichiometry work 1 answers, chemistry as fun and games, stoichiometry. Worksheet answers: student exploration stoichiometry gizmo answer key and explore learning student exploration stoichiometry answers. 2 NaOH + H2SO4 → 2 H2O + Na2SO4.
Mole Practice Worksheet 5 Molar Volume Of A Gas Molar Volume Science Lessons High School Scientific Notation.
Remember to pay careful attention to what you are given, and what you are trying to find. Modern chemistry chapter test a grant key. 2 C6H10 + 17 O2 → 12 CO2 + 10 H2O.
July 6, 2021 On Stoichiometry Worksheet 1 Mole To Mole Answer Key.
With a team of extremely dedicated and quality lecturers stoichiometry practice problems answer key will not only be a. Balancing equations and simple stoichiometry key balance the following equations. Chm 130 stoichiometry worksheet the following flow chart may help you work stoichiometry problems.
The H2 : H2O Ratio Of 2 : 2 Could Have Been Used Also.
05 101 how many grams of hydrogen are necessary to react completely with 500 g of nitrogen in the above reaction. Stoichiometry with gases wksht 3 problem 15. 2 so 4 2 h 2 o na 2 so.
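For the hydrogen-and-nitrogen question quoted above, the mole-ratio arithmetic can be sketched in a few lines, assuming the intended reaction is N2 + 3 H2 → 2 NH3 (the balanced equation is not shown on this page, so that assumption is stated explicitly here).

```python
# Grams of H2 needed to react completely with 500 g of N2, assuming N2 + 3 H2 -> 2 NH3
molar_mass_n2 = 28.02   # g/mol
molar_mass_h2 = 2.016   # g/mol

moles_n2 = 500 / molar_mass_n2      # ~17.8 mol of N2
moles_h2 = 3 * moles_n2             # mole ratio from the balanced equation
grams_h2 = moles_h2 * molar_mass_h2

print(round(grams_h2, 1))           # roughly 107.9 g of H2
```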
Stoichiometry Practice Worksheet Answer Key.
The results for ideal gas law gizmo answer key. H2SO4 + 2 NaOH → Na2SO4 + 2 H2O (ans). 12. Write the balanced equation for the reaction of acetic acid with aluminum hydroxide to form water and aluminum acetate.
What’s the difference between a noun clause, an appositive, and the relative pronoun that cannot be used in nonrestrictive adjective clauses the relative. I think both of your examples are relative clauses which could what is the difference between appositive clauses and some the clause beginners of. Non-restrictive, restrictive, participle, and appositive restrictive, participle, and appositive phrases adjective clauses can also be called relative clauses. In the following sentence identify the appositive or appositive phrase and the noun or pronoun renamed by identify all the relative clauses in the following. I'm kind of confused by relative and appositive clauses firstly, as far as i understand relative and attributive clause is the same thing, isn't it i've been. Other phrases: verbal, appositive, absolute a phrase is a group of words that lacks a subject, a predicate (verb), or both the english language is full of them.
Relative indefinite demonstrative interactive clause quiz #1 b appositive c restrictive clause d non-restrictive clause. We'll look briefly at eight uses of that in this section of the lecture because it is the clause in this sentence a relative clause or an appositive clause. Appositive and parenthetical relative clauses tim stowell ucla 1 appositive versus restrictive relatives appositive relative clauses differ from restrictive relative. Restrictive vs non-restrictive relative pronouns relative clauses are also classified relative clauses vs appositive a relative clause includes in its.
Relative clause this is a clause that generally modifies a noun or a noun phrase and is often introduced by a relative pronoun (which, that, who, whom, whose. Appositives (restrictive and non-restrictive) what is an appositive an appositive is a noun or noun phrase that immediately follows and.
“non-restrictive appositive” vs “non-defining relative clause or a non-defining relative clause (a) does the appositive in this sentence need to be set. At-issue proposals and appositive impositions in discourse itive relative clause 1since we are dealing almost exclusively with appositive relative clauses rather.
An appositive is a word placed after another word to explain or identify it the appositive always appears after the word it explains or identifies.
Global warming, a phenomenon that most scientists agree is caused by humans, will soon make humans pay global warming, which most scientists agree on as. Appositive clauses can be related to particulate verbs test your knowledge by deciding which of these sentences has an appositive clause and which has a relative. Relative clauses need to be distinguished from a second type of finite clause which can postmodify a noun: the appositive clause this looks very similar to a. An appositive is essentially a modifying clause from which a relative pronoun and a linking verb have been removed appositives are commonly used for combining ideas. Using commas with nonrestrictive relative clauses deciding between restrictive and nonrestrictive relative clauses. Restrictive vs non-restrictive elements a non-restrictive appositive is one where the noun that is then the following relative clause will be restrictive. Lot summer school 2005 universiteit leiden the syntax and semantics of nominal modification june 17, 2005 appositive relative clauses 1 some basic properties.
Non-restrictive relative clauses (also known as non-defining relative clauses) provide non-essential information about the antecedent in the main clause. Appositive sentences and the structure(s) of coordination gabriela matos contrasts of appositive vs restrictive relative clauses thus, as it is well known. What is the difference between appositive and adjective clause appositives define, rename or describe the noun or pronoun adjective clauses describe or. – there is no appositive there is a relative clause: whose name is alice smith examples on the apposition vs double subject issue in romanian. The goal of this paper is to compare appositive relative clauses (henceforth arcs) to other structures that convey the same information, in order to determine the. Attributive clause vs appositive clause discussion in 'english only' started by gloriazz now, some people use attributive clause to mean relative clause. |
Autism Spectrum Disorder (ASD) is a large and diverse group of neurodevelopmental disorders that can affect a person’s functioning in several different areas. These can include social interaction, communication, and behavior.
According to the Centers for Disease Control and Prevention (CDC), approximately one out of every 68 children in the United States has an autism spectrum disorder diagnosis. The condition is almost five times more common among boys than girls, although recent evidence suggests that this may be changing.
The symptoms of autism spectrum disorder can be classified into two separate categories: core symptoms and associated symptoms.
Core symptoms include social deficits and communication difficulties, such as problems with nonverbal cues and interactions or trouble transitioning from one activity to the next. For example, a child may be able to engage in a conversation but fail to ascertain when someone else is done speaking.
Associated symptoms include repetitive and restrictive patterns of behavior, such as compulsive eye contact or hand flapping; these behaviors are often referred to as “stimming.”
The specific diagnosis of autism spectrum disorder, in accordance with the DSM-5, is defined by two categories: social communication deficits and restricted and repetitive behaviors.
Treatment plans usually involve therapy and medication to help manage the symptoms of autism spectrum disorder. Although there is no cure for autism, early treatment can help children with this condition make more progress than those left untreated.
Medication has two main uses: to address associated symptoms of autism such as hyperactivity or self-injurious behavior (SIB) or to address specific symptoms of autism-like social deficits.
Atypical antipsychotics are the most common medications used to treat associated symptoms of autism, such as aggression and agitation; they are also prescribed for behavioral problems like SIB.
Commonly prescribed atypical antipsychotics include risperidone, clozapine, and aripiprazole.
Of these drugs, risperidone (trade name: Risperdal) is the only one approved for the treatment of irritability in autistic children by the Food and Drug Administration (FDA).
Children with autism spectrum disorder who take atypical antipsychotics may experience such side effects as weight gain, sleepiness or drowsiness, and increased saliva.
Medications used to target specific symptoms of autism spectrum disorder include selective serotonin reuptake inhibitors (SSRIs), which can be used to address anxiety and depression; they affect serotonin levels in the brain and can reduce repetitive and self-injurious behaviors associated with autism.
Another type of antidepressant, selective serotonin and norepinephrine reuptake inhibitors (SSNRIs), also known as serotonin-norepinephrine reuptake inhibitors (SNRIs), can help children who experience anxiety, depression, and irritability as a result of autism spectrum disorder.
Common types of SSNRIs include duloxetine (Cymbalta), venlafaxine (Effexor), and desvenlafaxine (Pristiq).
The FDA has not approved any medications for the treatment of autism spectrum disorder, although it has approved risperidone for irritability.
Some argue that autism spectrum disorder has become an umbrella term to justify prescribing pharmaceutical drugs, and others argue that there is a growing epidemic of the condition as a result of environmental factors.
This controversy exists because many children who receive an ASD diagnosis do not entirely fit the normal diagnostic criteria; they may only display some of the symptoms of autism, or they may meet most of the DSM-IV criteria but not all.
Groups like Autism Speaks advocate early intervention for children who are diagnosed with ASD, and treatment plans often begin with therapy to help children adapt before prescribing medication.
Some research suggests that behavioral intervention might be just as effective as medication in treating autism spectrum disorder; other studies show that children who receive treatment with medication and therapy use fewer resources (and thus, save money) than those left untreated.
Medication saved my life and thanks to my wonderful Psychiatrist we found that sweet spot where I am on the perfect dosage to function optimally. |
View from B-Deck, one deck up.
Savannah has a pressurized light-water reactor (PWR). First, the reactor core generates heat; second, high-pressure water in the primary coolant loop carries the heat to the steam generator; third, inside the steam generator, heat from the primary coolant loop vaporizes the water in a secondary (low-pressure) loop, producing steam; fourth, the steam line directs the steam to the main turbine in the engine room, causing it to turn the turbines, which turn the shafts and propeller. Another turbine drives a generator which produces electricity. Finally, the unused steam goes to the condenser where it is condensed into water. The resulting water is pumped out of the condenser, reheated, and pumped back to the steam generator.
The reactor core contains fuel assemblies which are cooled by high pressure water (1750 PSIA), which is circulated by electrically powered pumps (the primary cooling system). The amount of heat generated is controlled by the position of the control rods (boron steel rods that can be raised or lowered between the fuel rods) and the temperature of the primary cooling loop water (higher temperature yields lower power).
Inside the reactor containment vessel, forward of the reactor, can be seen the large reactor pressure vessel (painted in orange). Inside the reactor are the fuel and control rods that are used to generate heat and control the nuclear chain reaction. Around the rods circulates the high-pressure, high-temperature (around 500 degrees) water of the primary cooling loop.
On the port side is one of the two steam generators on top connected to its heat exchanger at the bottom by downcomers and risers (all painted in purple). The primary loop water circulates through small tubes in the heat exchanger. Around it is the water that is heated to create steam that powers the turbines in the engine room.
Forward of reactor and steam generators is the base of the pressurizer (in grey.) It is a large tank to allow room for the expansion and contraction of the water in the primary loop. Forward on port is an effluent condensing tank (in white.)
In the illustration above the primary (high pressure, radioactive) water cooling loop is in red. The secondary (low pressure, non-radioactive) water and steam loop is in blue. |
To the general public, the idea that there are tiny organisms living all around (and inside) of us might be a scary concept. Naturally, if all the news you get on a regular basis concerns the totally-terrifying E. coli that can give you food poisoning, or that fiendish foe influenza, it's not surprising that people often have negative reactions to the term "microbe." The reality is that we're mostly made up of microbes, we encounter them every day, and most of the time they're harmless or even beneficial! In fact, we often use microbes to ferment sugars so we can make things like yogurt and bread, and just as we use these microbes for our own benefit, plants can do the same!
Most information on beneficial plant microbes concerns soil microbes–specifically, mycorrhizae, which are fungi that provide the plant with increased nutrient and water absorption in return for some of the carbohydrates it makes during photosynthesis. Mycorrhizae and other soil microbes associated with plant roots are part of what is known as the rhizosphere–the small area that is influenced by and surrounds the roots of the plant (Mendes et al. 2013).
While we possess a wealth of information regarding the rhizosphere, the phyllosphere (the above-ground/aerial parts of the plant that can be colonized by microbes, known as epiphytes) is not as well-understood (Lindow and Brandl 2003). This might surprise you if you think about just how many plants there are in the world, and the magnitude of their combined surface areas!
But hark–do not despair! Phyllospheric studies (specifically those which are culture-independent, revealing previously hidden diversity due to the fact that we can only culture around 1% of microbes) have been progressing (Smalla 2004), (Muller and Ruppel 2013)! We know that the composition of the phyllosphere is different from that of the rhizosphere and the vast majority of the phyllosphere is made up of bacteria (Lindow and Brandl 2003). We also know that more pigmented bacteria colonize the phyllosphere than the rhizosphere–presumably due to the influence of solar radiation–and that common rhizosphere colonizers can fail to become established on the phyllosphere (Lindow and Brandl 2003).
It is also known that epiphytic bacterial populations differ in size among and within plants of the same species, as well as in close proximity, over short time scales, and over the growing season (Lindow and Brandl 2003). The species of plant itself also plays a role in determining epiphyte community composition, as different plants have different size leaves (ie: different carrying capacities), different levels of leaf-waxiness, and a whole range of other factors that can influence epiphyte community composition (Whipps et al. 2008).
Recently, researchers in Norway conducted an experiment to elucidate how the bacterial community composition on leafy greens develops over time (Dees et al. 2014). They believed there would be a change in the epiphyte composition of the phyllosphere due to a decline in nutrient supply following leaf maturation, as well as weather effects and/or the selection of specific epiphytes by different types of leafy greens (lettuce mainly). Specifically, they wanted to determine the change in epiphyte community structure throughout the growing season (April-September) for two kinds of leafy greens.
The researchers concluded that bacterial richness in lettuce was significantly greater 3 weeks after planting rather than at harvest (Dees et al. 2014). They came to this conclusion due to a significantly (P = 0.002) higher number of operational taxonomic units (OTUs) from lettuce samples collected at three weeks after planting than from samples collected at harvest (as can be seen in Fig 1; where the bars represent the 95% confidence intervals).
You might be wondering: ‘why the heck does more OTUs equal more bacterial richness?’
Well, in this study, the authors were using genetic sequencing as a way of studying the bacterial epiphytes associated with the phyllosphere of these leafy greens. As you may (or may not) know, all living organisms possess deoxyribonucleic acid (DNA), and by comparing DNA sequences we are able to determine how closely related species are to one another–the science of doing so is referred to as phylogenetics. In phylogenetics, OTUs are made by grouping sequences or groups of sequences that are most similar to each other (Van de peer 2009). So, having a lot of OTUs means you have many sequences that aren’t similar to one another–which gives us a decent measure of diversity because it means there were many DNA segments from organisms that weren’t closely related. Other studies have also shown that younger leaves are composed of a greater number of taxa than those of more mature leaves, so this finding isn’t entirely surprising (Lindow and Brandl 2003); however, the authors did also note that the finding wasn’t consistent in all of the samples.
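To make that OTU grouping a little more concrete before returning to the results, here is a toy sketch of similarity-based clustering: each read is greedily assigned to the first cluster whose representative it matches above a chosen identity threshold. Real pipelines (QIIME, mothur, and the like) are far more sophisticated; the sequences and the 97% cutoff below are purely illustrative.

```python
def identity(seq_a, seq_b):
    """Fraction of positions that match (toy measure; assumes aligned sequences)."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / max(len(seq_a), len(seq_b))

def cluster_otus(sequences, threshold=0.97):
    """Greedy clustering: join the first OTU whose representative is similar enough."""
    otus = []  # each OTU is a list of sequences; the first entry is its representative
    for seq in sequences:
        for otu in otus:
            if identity(seq, otu[0]) >= threshold:
                otu.append(seq)
                break
        else:
            otus.append([seq])
    return otus

reads = [
    "ACGT" * 10,              # representative of the first OTU
    "ACGT" * 9 + "ACGA",      # one mismatch in 40 positions -> 97.5% identity
    "TTTT" * 10,              # unrelated read -> forms its own OTU
]
print(len(cluster_otus(reads)))   # 2 OTUs
```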
Specifically, they mention that the second planting (at both locations) possessed similar richness at 3 weeks and at harvest, as well as similar microbial profiles. They suggest that this may partly be due to the fact that they observed a higher temperature (at both locations) during harvesting of the second planting, than they observed for the harvesting of planting 1 or 3 (as can be seen in Fig. 2).
The authors statistically evaluated the OTUs of the lettuce samples to determine the main bacterial distribution at 3 weeks after planting and at harvest, and found that the most abundant phyla at 3 weeks were different from the most abundant phylum found at harvest. If you take a second here and think back to biology classes and how living things are classified by taxonomic rank (kingdom, phylum, class, order, family, genus, species), this is a striking finding, because it essentially means that the bacterial communities present on the young, 3-week-old lettuce were markedly different from those on the lettuce at harvest.
Most of the questions I have after reading this paper mainly regard the methods that were employed in the study–had I been conducting this study I would have restructured some of the methodologies employed. For example, the two farms used different water sources, had different methods of planting (bare-soil vs. plastic covered beds), different planting dates, etc. I would have liked to have seen more time points in this study, as well as synchronized planting, sampling, and harvesting days. In this regard, I suppose my questions concern how these different variables affect the phyllosphere communities, and by what degree.
Why didn’t you synchronize the planting (and sampling) dates between the farms? What was the reasoning behind the time points chosen, and why not more?
I believe taking a look into the different possible sources of inoculation (water, soil, sampling protocol, sanitization protocols, etc.) could provide an interesting glimpse into phyllosphere ecology. Additionally, the authors of this paper stated that one of the reasons for conducting this study was to determine the potential for human pathogens to be spread through leafy greens; however, the only mention of pathogens made in the paper was to state that they did not find any trace of E. coli or Salmonella. I would ask them why they didn’t do any further investigation into the matter. I would also be interested in investigating what effect, if any, can be seen on plant development after plants have been inoculated with certain human pathogens.
For a short yet excellent overview of research that has been conducted on phyllosphere microbiology, I’ll refer you to The Microbiology of the Phyllosphere by Steven E. Lindow, and Maria T. Brandl (2003).
If you’re interested in reading further about culture-independent advancements that have been made in phyllosphere microbiology, I recommend reading Progress in cultivation-independent phyllosphere microbiology, by Thomas Muller and Silke Ruppel (2014).
A superb article concerning the diversity of phyllosphere microbiology and its relation to plant genotype is Phyllosphere microbiology with special reference to diversity and plant genotype, by J.M. Whipps, P. Hand, D. Pink and G.D. Bending (2008).
- About the Human Microbiome. (n.d.). Retrieved from https://hmpdacc.org/hmp/overview/
- Dees, M. W., Lysøe, E., Nordskog, B., & Brurberg, M. B. (2014). Bacterial Communities Associated with Surfaces of Leafy Greens: Shift in Composition and Decrease in Richness over Time. Applied and Environmental Microbiology,81(4), 1530-1539. doi:10.1128/aem.03470-14
- Lindow, S., Brandl, M. (2003). Phyllosphere microbiology: A perspective. Microbial Ecology of Aerial Plant Surfaces,1-20. doi:10.1079/9781845930615.0001
- Microbes and food — Producers. (n.d.). Retrieved from https://microbiologyonline.org/about-microbiology/microbes-and-food/producers
- Mendes, R., Garbeva, P., & Raaijmakers, J. M. (2013). The rhizosphere microbiome: Significance of plant beneficial, plant pathogenic, and human pathogenic microorganisms. FEMS Microbiology Reviews,37(5), 634-663. doi:10.1111/1574-6976.12028
- Müller, T., & Ruppel, S. (2013). Progress in cultivation-independent phyllosphere microbiology. FEMS Microbiology Ecology,87(1), 2-17. doi:10.1111/1574-6941.12198
- Plants and Mycorrhizae. (n.d.). Retrieved from https://sweetgum.nybg.org/science/glossary/glossary_details.php?irn=1074
- Smalla K. 2004. Culture-Independent Microbiology, p 88-99. In Bull A (ed), Microbial Diversity and Bioprospecting. ASM Press, Washington, DC. doi: 10.1128/9781555817770.ch9
- Successful soil biological management with beneficial microorganisms. (n.d.). Retrieved from https://www.fao.org/agriculture/crops/thematic-sitemap/theme/spi/soil-biodiversity/case-studies/soil-biological-management-with-beneficial-microorganisms/en/
- Taxonomy. (n.d.). Retrieved from https://basicbiology.net/biology-101/taxonomy
- Van de Peer, Y. (2009). Phylogenetic inference based on distance methods: theory. In P. Lemey, M. Salemi, & A.-M. Vandamme (Eds.), The phylogenetic handbook: A practical approach to phylogenetic analysis and hypothesis testing (pp. 142–180). Cambridge, UK: Cambridge University Press.
- What is DNA? – Genetics Home Reference – NIH. (n.d.). Retrieved from https://ghr.nlm.nih.gov/primer/basics/dna
- Whipps, J., Hand, P., Pink, D., & Bending, G. (2008). Phyllosphere microbiology with special reference to diversity and plant genotype. Journal of Applied Microbiology,105(6), 1744-1755. doi:10.1111/j.1365-2672.2008.03906.x |
Governments, school boards and Indigenous communities have identified Indigenous education as a key priority, identifying specific educational goals to ensure the educational success of Indigenous learners. Growing educational policy at local, provincial, territorial, and national levels tells us there needs to be change in the way we design, deliver and assess learning opportunities for Indigenous learners. Educators need to be able to respond to education reform that prioritizes improved educational outcomes for Indigenous children and youth. Reconciliation is a focal point for building and sustaining respectful relationships among Indigenous and non-Indigenous peoples in countries such as Canada and Australia, with relevance to the US and New Zealand. Through the lens of reconciliation, participants of this course will engage with educational leaders and resources that provide direction for how education programs and teaching practices can be modified in order to meaningfully integrate Indigenous knowledge, worldviews and pedagogies in classrooms, schools and communities. Teachers, administrators, staff, community educators, researchers and young people will come to see that changing our education practices requires changing our ideas, with benefits for all learners.
Seeking to rally Americans to the war effort in 1917, President Woodrow Wilson promised a “war to end all wars,” and pledged to “make the world safe for democracy.” Making the world safe for democracy seemed a noble and just pursuit to Americans who watched as Europeans and Russians struggled with increasingly limited freedoms and leaders who acted out of vengeance, creating economic turmoil.
President Wilson believed that the United States should serve as a moral compass to the rest of the world. He differentiated the United States’ goals in the war from the goals of the other warring powers. To Wilson, the United States had not entered the war with the hope of gaining wealth or territory; instead, Americans entered the war to shape a new international climate and to ensure the well-being and continued growth of democracy. Wilson’s campaign succeeded with the American public. On the home front, Americans responded to Wilson’s idealistic aims and rallied behind him and the war effort.
During the summer and fall of 1917, large numbers of U.S. troops arrived in Europe to support the Allied Powers. About two million Americans served overseas and about 75 percent of those saw combat action during the next 18 months.
America’s troops arrived as the peril in Europe increased. Russia, reeling from a revolution, established a separate peace with Germany in 1918 and pulled out of the war. With Russia no longer a threat to the Central Powers, Germany began moving troops to the war’s western front for a major offensive move into Allied territory. Fresh American troops arrived in France just in time to be catapulted against the German advance and hold off the German armies.
American troops also played an important role in the last Allied assault that took place in France in the fall of 1918—one major objective of this offensive was to cut off the German railroad lines feeding the western front. The Meuse-Argonne offensive, which at the time was the largest battle in American history, lasted 47 days and engaged 1.2 million American troops. Although more than 120,000 American troops were wounded or killed, this triumph paved the way for Allied victory.
As the war drew toward its conclusion, many began to consider what would be the outcome. Recognizing the need for a plan, Wilson devised an outline for peace that would become known as his Fourteen Points.
On January 8, 1918, Wilson delivered his Fourteen Points Address to Congress to encourage the Allies to victory. In it, he hoped to keep a reeling Russia in the war and to appeal to the Central Powers’ disenfranchised minority members. The points, which represented Wilson’s lofty goals for the future of the world, included five general principles for a peace settlement: (1) “Open covenants of peace, openly arrived at” should replace secretive diplomacy; (2) a guaranteed freedom of the seas should exist during wartime and peace time; (3) nations should be able to trade freely without fear of retribution; (4) armaments should be drastically reduced; and (5) colonial claims should be adjusted to reflect the needs of native peoples.
Most of the additional points involved specific territorial adjustments: lost territory should be returned to Russia, Belgium should be a free and independent state, France should regain Alsace-Lorraine, and Italian borders should be adjusted along easily recognizable lines of nationality. Under the Fourteen Points, oppressed minority groups, such as the millions of Poles who lived under the rule of Germany and Austria-Hungary, would benefit from an era of self-determination. The final point called for the creation of a “general association of nations” that would work to guarantee political independence and sovereignty for all countries. This general association, an early version of Wilson’s League of Nations, would provide international order in the post-war era.
Although the reaction to the Fourteen Points was largely positive, some leaders of the Allied Powers, hoping for territorial gain, grumbled at Wilson’s idealistic aims. Republicans at home who favored isolationism openly criticized Wilson’s world vision and mocked what they referred to as the “fourteen commandments.”
Nearly one year after President Woodrow Wilson addressed Congress and laid out his Fourteen Points, fighting in Europe had reached its end. In the last weeks of the war, Wilson used the promise of his Fourteen Points to persuade the German people to overthrow Kaiser Wilhelm II and establish an armistice. Under the armistice, Germany had to withdraw behind the Rhine River and surrender its submarines and munitions.
To establish the conditions of surrender for the defeated Central Powers, members of the Allied Powers came together in Paris. Representatives of the Big Four powers—the United States, France, Britain and Italy—attended the conference. Fearing his Fourteen Points would not be well received by European leaders with their own agendas, Wilson attended the conference as the leader of the American delegation. Wilson’s aim was to create a world parliament to be known as the League of Nations, an agency that would ensure international stability.
Wilson’s fear over the reception of his Fourteen Points proved to be well founded. Although Wilson was a popular figure, many European leaders felt his plans would interfere with their imperialistic ambitions. The English were mostly interested in the expansion of the British Empire, and the French wanted solid assurances that France would never be invaded by Germany again. Millions in Europe rejected the idea that there could be peace without retribution against Germany—the cry of vengeance resounded throughout the Allied European nations, and they demanded that Germany pay for its actions. Wilson, temporarily disheartened, left Paris without solidifying any specific agreement to help aid the Democratic Congressional campaign.
During the Congressional election of 1918, Wilson faced a new battle on the home front. Republicans and Democrats had minimized open partisan politicking during the war. Wilson broke the bi-partisan truce to plead for a Democratic victory in the Congressional elections of 1918. Wilson’s move backfired when Republicans won majorities in both houses. Wilson, who had staked his prestige on a Democratic victory, returned to Europe as a less influential leader.
From January to May of 1919, the Allied Powers hammered out the treaty. To preserve his prized League of Nations, Wilson made sacrifices on many of the other 13 points. Although the Allied victors would not take control of the conquered areas outright, they would be allowed to oversee the territories under the guise of the League of Nations.
Under Wilson’s plans, the League of Nations was to consist of 42 Allied and neutral countries, with five permanent members: the U.S., France, Britain, Italy, and Japan. Wilson’s concessions led to the establishment of the League Covenant, a constitution for the League of Nations. Under the Covenant, the League’s chief goal was collective security among all nations. The Covenant required all League members to protect the “territorial integrity” and “political independence” of all other members.
Signed on June 28, 1919, the Treaty of Versailles outlined several provisions for peace. A “war-guilt” clause, Article 231, placed sole blame for the war on Germany and required Germany to pay reparations to the Allies, which totaled about $33 billion. The treaty required Germany to accept military restrictions and a loss of territory and barred Germany from joining the League of Nations. The treaty also granted national sovereignty to Poland, Czechoslovakia, Finland, the Baltic states of Latvia, Lithuania, and Estonia, and Yugoslavia.
Germany, which had capitulated based on assurances that it would be granted a peace based on the Fourteen Points, felt betrayed by a treaty that only included about four of Wilson’s original points.
The treaty, however, did little to advance Wilson’s quest to establish freedom of the seas, free trade between nations, and military disarmament. Always the optimist, Wilson believed that such oversights could be easily addressed through the powers of the League of Nations. He believed that once convened, the League would have the authority to solve these problems through arbitration and negotiation.
When Wilson went to Europe to fight for his Fourteen Points and negotiate the Treaty of Versailles, he was largely viewed as a worldwide hero. Once the treaty was signed and he returned to America, he was greeted with a cold reception. American isolationists feared greater international entanglement through participation in the League of Nations. Anti-German critics believed the treaty did not go far enough to punish Germany, while many liberals found the treaty too harsh and heavy-handed toward the German people. With opposition in America, the treaty faced a difficult road toward ratification in the U.S. Senate.
President Woodrow Wilson felt optimistic about returning to America with the completed Treaty of Versailles. His return, however, was marked with a mixed reaction from the public and the Congress. Initially, Republican Senator Henry Cabot Lodge, who had ardently opposed the treaty, had little hope of defeating it in the Senate. Instead, Lodge and other Republican Senators hoped to amend the treaty so that they could take credit for the changes. These individuals were known as “reservationists,” since they were willing to accept the treaty with modifications. Those opposed to the idea of the U.S. moving toward internationalism altogether were called “irreconcilables.” Lodge’s delay tactics, which included reading the 264-page treaty aloud in a committee meeting, helped to muddy the once-favorable public opinion.
Wilson was concerned that any modification to the treaty by the Senate would encourage the European allies to make modifications of their own, and he was afraid that too many amendments would lead to the elimination of his League of Nations altogether. To galvanize public support for the treaty, Wilson began a speechmaking tour in spite of the urging of his wife and physicians to stay home. Republican “irreconcilables” such as Hiram Johnson of California and William Borah of Idaho followed behind Wilson and made speeches against the treaty at every stop Wilson made. Although the Midwest reacted coldly to Wilson's pleas, he experienced tremendous support in the Rocky Mountain region and on the Pacific Coast, two areas where he also had a solid political base. During a speaking stop in Colorado, Wilson collapsed from physical and nervous exhaustion. Taken by train back to Washington, Wilson had a stroke a few days later that paralyzed one side of his body. Wilson recovered in the privacy of the White House for the next seven months.
With Wilson removed from the political spotlight, Lodge took control of the treaty debate. Although Lodge was unable to amend the treaty outright, he mockingly created Fourteen formal reservations, known as the “Lodge Reservations,” to it—a reference to Wilson’s Fourteen Points—and attached the reservations to the treaty for all to review before they voted whether or not to pass it. Lodge and other critics had particular disdain for Article X, which morally bound the United States to aid any League member who was victimized by external aggression. Rather than morally bind the government to act, Congress wanted to reserve the power to declare war for itself.
Wilson, who had little respect for Lodge, rejected the Fourteen Reservations outright. Although Wilson was willing to accept some compromises, he believed that Lodge’s reservations contradicted the pact’s spirit. Wilson sent word to all Democrats to vote against the treaty, which now included Lodge’s reservations. In November of 1919, loyal Democrats, who had once strongly supported the treaty, voted against ratification.
In March of 1920, strong public support of the treaty required the Senate to once again vote on the treaty. Again, Wilson asked Democrats to vote down the treaty with Lodge’s reservations attached. For a second time, the Senate voted against ratification, thereby ending any chance for the treaty’s ratification in America and creating a deadlock in Washington.
Wilson believed that the Election of 1920 would serve as a “solemn referendum” on the Treaty of Versailles and the League of Nations and eliminate the political impasse the country faced. Since Wilson did not run for another term, Democrats nominated James M. Cox from Ohio, a strong supporter of the League. Republicans, hoping to bring a sense of “normalcy” back to the country, nominated Warren G. Harding, a Senator from Ohio who remained intentionally ambiguous about the League.
Although Democrats attempted to make the election a referendum on the League, the public had grown tired of high-browed idealism and turned to Harding’s message of normalcy. In the end, a Republican landslide elected Harding as President. Republican isolationists turned the election’s results into a mandate against the League of Nations. American participation in the League, Wilson’s long-held dream, would not be a reality. Because of the U.S.’s refusal to enter the League, it never had the power that Wilson had envisioned. In July of 1921, Congress officially ended the war with the Central Powers by passing a joint resolution. Separate peace treaties with Germany, Austria, and Hungary were ratified on October 18, 1921.
Online activities and videos for supporting students' understanding of math concepts such as ratios, scale factors, fractions, number lines, and so on. Some activities may need additional language learning supports (i.e. for beginning English learners). Videos are also available in Spanish and in a printable "comic book" form.
What is an instructional progression that leads to conceptual understanding of counting and numbers? This 8-minute video lays out a progression for children learning these concepts, but ABE teachers of students with beginning numeracy will find useful ideas here.
College & Career Readiness (CCR) Math Standards
Looking for more specific information about the College & Career Readiness (CCR) Math Standards? Check out the CCRS Math Resources section of the CCR Standards resource library! |
FREE Vocabulary Flip Book for Marzano’s Six Steps: Keep this flip book handy when planning vocabulary instruction! A few ideas for implementation are listed under each step. There is also space on each page for you to add your own notes and ideas. The flip book lists Marzano’s six steps in the order in which they should be taught.
Students need repeated exposures to master content vocabulary in science, social studies and math, but there is not enough time in the day. Make your language arts block work overtime and get more bang for your buck with these academic vocabulary word work activities for science, social studies, and math. Perfect for stations, the no-prep format makes it easy to implement. Help your students work smarter, not harder!
In addition to the most important man-made greenhouse gas, carbon dioxide (CO₂), there are other greenhouse gases such as methane or nitrous oxide. The various gases do not contribute to the greenhouse effect to the same extent and remain in the atmosphere for different periods of time.
In order to make the effects of different greenhouse gases comparable, the Intergovernmental Panel on Climate Change (IPCC) of the United Nations has defined the so-called "Global Warming Potential". This index expresses the warming effect of a certain amount of a greenhouse gas over a set period of time (usually 100 years) in comparison to CO₂. For example, methane’s effect on the climate is 28 times more severe than that of CO₂, but it doesn’t stay in the atmosphere as long. The environmental impact of nitrous oxide exceeds that of CO₂ by almost 300 times. The anthropogenic sources of these greenhouse gases include agriculture, through the use of nitrogen fertilizers and livestock farming. In this way, greenhouse gases can be expressed as CO₂ equivalents, abbreviated "CO₂e".
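To make the conversion concrete, here is a small, hypothetical Python sketch that adds up a mix of emissions as CO₂ equivalents using 100-year GWP factors. The factors follow the figures quoted above (methane about 28, nitrous oxide just under 300 times CO₂); exact values differ between IPCC assessment reports, and the example quantities are invented.

```python
# Sketch: convert greenhouse gas emissions into tonnes of CO2 equivalents (CO2e)
# using 100-year Global Warming Potential (GWP) factors. Factor values are
# approximate and vary between IPCC assessment reports.
GWP_100 = {
    "CO2": 1,     # reference gas
    "CH4": 28,    # methane
    "N2O": 298,   # nitrous oxide
}

def co2_equivalents(emissions_tonnes: dict) -> float:
    """Return total emissions in tonnes of CO2e.

    emissions_tonnes maps a gas name to the emitted mass in tonnes.
    """
    return sum(mass * GWP_100[gas] for gas, mass in emissions_tonnes.items())

# Example: a hypothetical farm emitting 100 t CO2, 10 t CH4 and 1 t N2O per year.
farm = {"CO2": 100, "CH4": 10, "N2O": 1}
print(f"Total footprint: {co2_equivalents(farm):,.0f} t CO2e")
# 100*1 + 10*28 + 1*298 = 678 t CO2e
```

The same pattern scales to any list of gases, as long as a GWP value is available for each one.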
You can find further exciting information on the subject of climate change and climate protection in our climate booklet |
Discover how different wavelengths allow scientists to measure the distance to the stars.
- Light waves vary in length and in color.
- The shortest visible light wavelength is violet, and the longest is red.
- Redshift occurs when light from distant galaxies travels through expanding space.
- As stars move away from the Earth, the light waves are stretched and appear redder, allowing astronomers to calculate the distance.
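As a rough illustration of that last point, here is a short Python sketch that turns a measured wavelength shift into a redshift and then into an approximate distance using Hubble's law. It relies on the simple low-redshift approximation, an assumed Hubble constant of about 70 km/s per megaparsec, and made-up example wavelengths.

```python
# Simplified sketch: wavelength shift -> redshift -> approximate distance.
# Uses the low-redshift approximation (v ~ c*z) and Hubble's law (v = H0*d);
# real cosmological distance calculations are more involved.
C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def redshift(observed_nm: float, rest_nm: float) -> float:
    """z = (lambda_observed - lambda_rest) / lambda_rest"""
    return (observed_nm - rest_nm) / rest_nm

def distance_mpc(z: float) -> float:
    """Approximate distance in megaparsecs for small z."""
    velocity = C_KM_S * z    # recession velocity, km/s
    return velocity / H0     # Hubble's law: d = v / H0

# Example: a hydrogen-alpha line emitted at 656.3 nm is observed at 663.0 nm.
z = redshift(observed_nm=663.0, rest_nm=656.3)
print(f"Redshift z = {z:.4f}")
print(f"Estimated distance ~ {distance_mpc(z):.0f} Mpc")
```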
Light comes in different wavelengths
We see different wavelengths as different colors
The shortest visible wavelength is violet
The longest visible wavelength is red
Redshift is when light from distant galaxies or stars appears redder
As light travels through expanding space it gets stretched or as galaxies or stars ...
Subitizing is a critical (and often overlooked) skill in early math development. With this article, we’ll teach you what it means to subitize and why and how to teach subitizing, as well as provide subitizing activities for preschoolers. It’s one step toward offering your child a strong foundation in number sense!
All About Teaching Subitizing to Preschoolers
Have you ever looked at your preschoolers and felt like they were just missing some pieces to the number puzzle?
Subitizing is one way that can help them with this.
You might be wondering what subitizing is or what it means to subitize. Or you might be wondering why it’s important for kids to learn this early math skills! We’ll explain all about it in our article today – come check out how easy mastering these smaller numbers really feels when everything fits together perfectly…
What is Subitizing?
Subitizing is the ability to identify small quantities of items without having to count them. It is instant recognition.
Subitizing is like the sight words of the mathematical world.
Here are some examples of subitizing — where we instantly see the quantity without having to count first.
- Pips on a die
- Shoes come in pairs
- Cupcakes in a box at the market
- Recognizing tally points
- Wheels on a tricycle
- Animal feet on quadrupeds
Most children do not need to count “one, two, three” to know how many are in a set. Subitizing greater numbers requires seeing clusters within sets. Even practiced adults seldom subitize sets greater than four or five.
Why is Subitizing Important?
Subitizing is a way for children to recognize numbers and their patterns. When we teach kids how to subitize, it helps them become more efficient thinkers when doing math!
Strategic and efficient thinking is something that can be developed early on with toddlers, preschoolers and kindergarten students.
There are a number of advantages for students who develop skills in subitizing, such as saving time, developing more complex number and counting skills, and improving their ability to deal with more complex number problems in the future (source).
What Research Tells Us About Subitizing and Mathematics Success
Subitizing is sometimes referred to as quantification.
Quantification is the ability to recognize that all numbers are associated with an exact quantity, and the ability to recognize sets of objects, such as pips on a die.
Beginning near the turn of the 19th century, educators began arguing that simply counting did not demonstrate a true understanding of number and its related quantity, but that subitizing did. This was supported by experts who later stated that subitizing focused on the whole of the number as well as its parts in its most basic units. (Remember those “clusters” referred to earlier in this post?)
By the 1970s, it was expected that most children could naturally subitize because it happened so readily in their natural environment; however, that was not, and is not, the case for all children.
Some babies as young as six months, and even birds, have been found to have an ability to subitize. Doug Clements cites the example of a six-month-old baby who is shown two images at the same time, one of two dots and the other of three dots.
Then, the baby hears three beats on the drum and his eyes move to the image in front of him with the three dots. Obviously, this baby is not literally counting 1-2-3, but discriminating between two quantity sets.
But a counterargument exists that children use subitizing more as a shortcut for counting (Beckwith and Restle 1966; Brownell 1928; Silverman and Rose 1980). After repetitive practice, students may no longer need to count the pips on a die, but can automatically recognize their value, thus demonstrating more of a form of rapid counting.
Researchers are still at odds as to whether or not subitizing is a skill that comes before counting; however, it is understood among the education community that subitizing is something that can be taught and that it has a positive impact on number sense skills, which are the strongest indicator of mathematical success.
Students can use pattern recognition to discover essential properties of numbers, such as conservation and compensation. They can develop such capabilities as unitizing, counting on, and composing and decomposing numbers, as well as their understanding of arithmetic and place value—all valuable components of number sense. (Source)
Why Teach Subitizing?
Subitizing can be a great way to improve your student’s math skills in number sense. Here are some of the advantages frequent and repeated practice will give children.
It Saves Time
Subitizing is a skill that saves time: rather than counting each individual member of a group, the child simply perceives the number immediately. This comes in useful later on when students begin dealing with more complex mathematical operations (Reys et al., 2012).
It’s a Precursor for More Complex Number Sense Skills
Early number order relations link directly to subitizing skills. A child who is able to competently name small groups will have an easier time understanding number facts such as that 3 > 2 and that 4 is one more than 3. This understanding of numbers facilitates the learning of other mathematical processes as children go through their schooling.
It Helps Develop More Complex Counting Skills
Students who can subitize small groups are able to develop their counting skills by beginning their count after the subitized group, for example by counting on from the subitized total. Children can then use subitizing to count forwards or backwards by twos, threes, or even larger groups later, when they are exposed to more complex multiplication tables (Reys et al., 2012).
This type of subitizing falls into the category of conceptual subitizing which occurs with larger number sets, and involves breaking the group into smaller parts (Clements, 1999).
It Makes Addition and Subtraction Easier
When children are able to subitize, it means that they are better equipped to handle addition and subtraction concepts. Children with solid number sense learn that grouping numbers together helps us determine the total, and it’s an introduction to addition!
When children can subitize small sets, they do not have to count each small group to be added or removed when learning operations with manipulatives.
It’s a Life Skill
Much like the importance of being able to calculate estimates, subitizing is something that comes up in the everyday lives of children. The easiest example of this is counting pips on a die: when you roll a six, chances are you don’t actually count the pips. Rather, you have come to recognize the pattern of three rows of two as being equal to six.
Activities to Teach Preschoolers to Subitize
More and more research is supporting teaching subitizing in preschool and kindergarten. Here are some easy preschool subitizing activities, as well as some kindergarten activities too.
Quick Dot Cards
- Use colored dot stickers to make a set of cards with a set number of dots on each card.
- For beginners, dots should follow the most easily recognized pattern, like the pattern of pips on dice.
- Select a card and flash it before the children for no more than three seconds.
- The goal is to recognize the set as quickly as possible without having to actually count. (If you want a quick digital variation, see the optional sketch below.)
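For teachers who like a digital twist, below is a tiny, optional Python sketch that prints dice-style dot patterns in random order, which could stand in for physical cards during a quick flashing session. The layouts and names here are just an illustration; real cards work every bit as well.

```python
# Optional helper: print dice-style dot patterns (1-5) in random order
# for a quick subitizing flash session on a screen.
import random

# Dice-style layouts on a 3x3 grid; '*' marks a dot.
PATTERNS = {
    1: ["   ", " * ", "   "],
    2: ["*  ", "   ", "  *"],
    3: ["*  ", " * ", "  *"],
    4: ["* *", "   ", "* *"],
    5: ["* *", " * ", "* *"],
}

def flash_sequence(rounds: int = 5, max_quantity: int = 5):
    """Print a shuffled series of dot patterns, one per round."""
    for n in random.choices(range(1, max_quantity + 1), k=rounds):
        print("\n".join(PATTERNS[n]))
        print(f"(answer: {n})\n")

flash_sequence()
```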
Quick Dot Images Look Alike
- Play the same activity as above, but instead of children calling out the numbers, have them use manipulatives to create the same set on a tray or table.
Quick Dot Concentration
- Make a double set of cards and have students play the matching game concentration.
- If playing with young children, such as preschoolers through first grade, consider using number sets up to three, but in multiple color sets.
- This way, the students get lots of practice with the smaller numbers without getting too frustrated with trying to differentiate the larger sets.
- Once near mastery is evident, larger numbers can be added, as well as less familiar configurations.
Dot Cards Missing Number
- Line up a set of three dot cards with quantities on a tray, all facing down.
- Flash each card to the children for three seconds each, one at a time.
- Turn over two of the cards and name the quantities as quickly as possible.
- Flash the last card for no more than three seconds and have the children identify the quantity.
Printables for Teaching Subitizing
Apple Drop Counting teaches preschoolers about composing ten, as well as subitizing.
Valentine’s Subitizing and Graphing Game is always a crowd pleaser. It targets subitizing, but also includes skill work in number identification, counting, and graphing.
Tips for Teaching Subitizing
Make teaching kindergarten subitizing and preschool number sense easier with these helpful tips.
- Start with small numbers, nothing higher than five or six.
- Use subitizing cards and practice them for a few minutes daily. Just add it to your morning preschool routine.
- Play math games that require traditional dice. Add dice games to your preschool math centers.
- Practice subitizing on five frames and ten frames.
- Use a variety of materials including dot cards, playing cards, tally marks, dice, five and ten frames, etc.
- Use number talks.
Fun Videos for Teaching Preschoolers to Subitize
Sometimes, adding video to our number sense lesson plans is a fun way to keep preschoolers engaged. Here are some of our favorite YouTube videos that teach preschoolers about quantification.
I’m Sarah, an educator turned stay-at-home-mama of five! I’m the owner and creator of Stay At Home Educator, a website about intentional teaching and purposeful learning in the early childhood years. I’ve taught a range of levels, from preschool to college and a little bit of everything in between. Right now my focus is teaching my children and running a preschool from my home. Credentials include: Bachelors in Art, Masters in Curriculum and Instruction. |