Dthief writes: Splitting water is a two-step process, and in a new study, researchers have performed one of these steps (reduction) with 100% efficiency. The results shatter the previous record of 60% for hydrogen production with visible light, and emphasize that future research should focus on the other step (oxidation) in order to realize practical overall water splitting. The main application of splitting water into its components of oxygen and hydrogen is that the hydrogen can then be used to deliver energy to fuel cells for powering vehicles and electronic devices. The process involves exposing the water to a mass of platinum-tipped nanorods, with visible light driving the reaction. The 100% efficiency refers to the photon-to-hydrogen conversion efficiency, and it means that virtually all of the photons that reach the photocatalyst generate an electron, and every two electrons produce one H2 molecule. At 100% yield, the half-reaction produces about 100 H2 molecules per second (or one every 10 milliseconds) on each nanorod, and a typical sample contains about 600 trillion nanorods.
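As a rough sanity check on those figures, the quoted per-nanorod rate and sample size imply a measurable overall production rate. A minimal back-of-the-envelope sketch in Python; the Avogadro constant and H2 molar mass are standard values, and the inputs are simply the numbers quoted above:

```python
# Back-of-the-envelope check of the figures quoted above.
AVOGADRO = 6.022e23           # molecules per mole
h2_per_rod_per_s = 100        # H2 molecules produced per nanorod per second
rods_per_sample = 600e12      # nanorods in a typical sample

h2_per_s = h2_per_rod_per_s * rods_per_sample   # total molecules per second
mol_per_s = h2_per_s / AVOGADRO                 # convert to moles per second
grams_per_hour = mol_per_s * 2.016 * 3600       # H2 molar mass ~2.016 g/mol

print(f"{h2_per_s:.1e} molecules/s = {mol_per_s:.1e} mol/s = {grams_per_hour:.1e} g/h")
# ~6.0e16 molecules/s, ~1.0e-7 mol/s, ~7e-4 grams of H2 per hour per sample
```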
|
In zoology, the epidermis is an epithelium (sheet of cells) that covers the body of a eumetazoan (animal more complex than a sponge). Eumetazoa have a cavity lined with a similar epithelium, the gastrodermis, which forms a boundary with the epidermis at the mouth.
Sponges have no epithelium, and therefore no epidermis or gastrodermis. The epidermis of a more complex invertebrate is just one layer deep, and may be protected by a non-cellular cuticle. The epidermis of a higher vertebrate has many layers, and the outer layers are reinforced with keratin and then die. |
What are Spring Allergies?
Pollen season is typically in full swing during the spring. For this reason, the primary cause of spring allergies is pollen. When you're allergic to pollen, your body overreacts to pollen entering your respiratory system, which leads to uncomfortable symptoms.
Causes of Spring Allergies
The most common spring allergen is, you guessed it, pollen. Pollen is a fine powdery substance that is produced by plants during their reproductive cycle. The male part of a flower or cone produces the microscopic pollen that is then transported to the female ovule normally by the wind, insects and other animals, or simply gravity.
The type and source of pollen depends on your location. However, pollen is also known to travel hundreds of miles by hitching a free ride on the wind. The most common allergy-causing pollen sources in the spring include:
If you have allergies, your body will mistakenly recognize the pollen as harmful and release antibodies in an attempt to attack it. It releases histamines into your bloodstream, which can cause the common allergy symptoms of a runny nose and sneezing.
Pollen-related allergy symptoms are typically worse on windy days, when pollen is stirred up into the air. Rainy days will usually dampen the pollen, causing it to stick to the ground and other surfaces and reducing the chance of it entering your respiratory system.
Symptoms and Diagnosis of Spring Allergies
Common symptoms of springtime allergies can include the following:
- Runny, stuffy, or itchy nose
- Watery or itchy eyes
- Coughing or sneezing
- Dark circles under your eyes
- Itchy ears or mouth
Visit your ENT specialist if you experience any allergy symptoms for more than one week. Your ENT specialist will typically analyze your symptoms, allergy triggers, and allergy history. They may also perform a test that exposes your skin to different allergens to figure out which substance is causing your allergies. A blood test may also be performed for further analysis.
Treatments for Spring Allergies
Your ENT specialist will recommend treatment based on the specific allergen, the type of reaction you have, and the severity of that reaction. Common treatments for springtime allergies usually include:
- Decongestants or nasal sprays
- Antihistamines or antihistamine/decongestant combinations
- Steroid nasal sprays
- Eye drops |
How Moon was Formed
Keywords:Moon, Accretion, Fission, Capture
Today's article discusses in detail how the Moon was formed. Three or four theories have tried to explain the process, but the generally accepted one today is the giant impact theory. The Moon is the closest celestial body to us and can be seen nearly every night, yet the debate over the correct theory of its formation went on for years. It was
quite tricky. The Moon is also unusually large for a satellite. Let's look at the theories one by one.
Image 1: Natural History Museum (source: BBC) |
Science for Sustainable Future
Creating knowledge and understanding through science equips us to find solutions to today’s acute economic, social and environmental challenges and to achieve sustainable development and greener societies. As no one country can achieve sustainable development alone, international scientific cooperation contributes not only to scientific knowledge but also to building peace.
UNESCO works to assist countries to invest in science, technology and innovation (STI), to develop national science policies, to reform their science systems and to build capacity to monitor and evaluate performance through STI indicators and statistics, taking into account the broad range of country-specific contexts.
Science policies are not enough. Science and engineering education at all levels and research capacity need to be built to allow countries to develop their own solutions to their specific problems and to play their part in the international scientific and technological arena.
Linking science to society, public understanding of science and the participation of citizens in science are essential to creating societies where people have the necessary knowledge to make professional, personal and political choices, and to participate in the stimulating world of discovery. Indigenous knowledge systems, developed through long and close interaction with nature, complement knowledge systems based on modern science.
Science and technology empower societies and citizens but also involve ethical choices. UNESCO works with its member States to foster informed decisions about the use of science and technology, in particular in the field of bioethics.
Water is fundamental for life, and ensuring water security for communities worldwide is essential to peace and sustainable development. Scientific understanding of the water cycle, of the distribution and characteristics of surface water and groundwater, and of urban water all contributes to the wise management of freshwater for a healthy environment and to responding to human needs.
Scientific knowledge of the Earth’s history and mineral resources, knowledge of ecosystems and biodiversity, and the interaction of humans with ecosystems are important to help us understand how to manage our planet for a peaceful and sustainable future.
Read more on what Science could do for a better and Sustainable future on the UNESCO Website |
What do fish dorsal fins do?
The dorsal fins increase the lateral surface of the body during swimming, and thereby provide stability but at the expense of increasing drag (see also BUOYANCY, LOCOMOTION, AND MOVEMENT IN FISHES | Maneuverability).
Which is dorsal fin?
Fins: Help a fish move. The top fins are called dorsal fins. If there are two dorsal fins, the one nearest the head is called the first dorsal fin and the one behind it is the second dorsal fin. The belly or lower part of the fish is the ventral region.
What is a fish dorsal?
Dorsal fins are located on the back or on the top of the fish, and aid the fish in sharp turns or stops, and assist the fish in rolling.
What is the purpose of dorsal?
Functions. The main purpose of the dorsal fin is to stabilize the animal against rolling and to assist in sudden turns. Some species have further adapted their dorsal fins to other uses.
Do all fish have dorsal fins?
All normal fish have a dorsal fin. This fin functions to provide stability in the water and to prevent rolling. Some strains of goldfish have been developed in which one or more fins are absent or deformed.
Do fish have 2 dorsal fins?
Bony fishes have different kinds of fins for different purposes. The dorsal fins are on the fish’s back. Some fish may have only one dorsal fin while others may have two or even three. In many bony fishes the dorsal fin has stout spines in the front to help give the fin support.
Why is it called dorsal fin?
A dorsal fin is a fin located on the back of most marine and freshwater vertebrates within various taxa of the animal kingdom.
What are the fins on fish called?
Most fish have a pair of pectoral fins, one on each side of the body just behind the head. These are often used for steering, quickly changing direction and braking. The fins that are observed on the dorsal side (top) of the fish are called the dorsal fins. The anal fin is located on the ventral side (belly), and the caudal fin forms the tail.
Why do sharks expose their dorsal fins?
Sharks can be lured to the surface with floating bait and, in investigating such hand-outs, sometimes their dorsal fins break the surface of the water. Sometimes sharks enter water so shallow that they can barely swim, and — as a result — their dorsal fins sometimes poke through the surface.
What do ventral fins do?
The ventral fin and anal fin are located on the bottom or belly of fish and help with steering as well as balance. The tail fin, also called the caudal fin, helps propel fish forward. Nares: all fish can smell.
Why do sharks knife the water?
Perhaps the most parsimonious explanation for increased knifing post-dawn is that blue sharks are feeding on increased prey densities at the surface around dawn, or are taking advantage of the changing light conditions to surprise attack prey silhouetted at the surface.
Which fin is between dorsal and caudal fin?
Adipose fin: these are soft fins located between the dorsal and caudal fins, usually very near to the caudal fin. |
Imperial decline and collapse during the period of 600 BCE to 600 CE was caused by the inability of empires to collect taxes efficiently, by over-expansion, and by a government’s inability to assert its power over a large group of people. One of the major causes of imperial decline and collapse was the inability of an empire to collect enough taxes. Take, for example, the Mongol Empire, which was known for its strong military pursuit of expansion under its leader, Kublai Khan.
But the Mongol government could not collect enough taxes to pay for its massive army. This led to its inability to reach its main goal, uniting India, and resulted in its fall in the 6th century. This was damaging because they failed to achieve their goal. In the Roman Empire, taxes failed to be effective because of the Catholic Church’s refusal to pay. Taxes were also so extreme that the common person was not able to meet the given standards, which led many individuals to starvation and bankruptcy.
In extreme cases, if tax collectors could not collect the money from citizens, they could face a death sentence. This extreme taxation led many citizens of the Roman Empire to flee and find new homes with the barbarians. The barbarians were the ones who caused the final fall of the Roman Empire years later, which is ironic since many of the people of the Roman Empire had joined them. Another important factor in imperial decline and collapse was the over-expansion of empires. Although expansion might seem highly beneficial, it actually led to the fall of many empires. This is because as an empire continued to gain more land, it eventually lost control of its fast-growing territory, since it was difficult to maintain power in several different locations at once.
Examples of this occurring during the period of 600 BCE to 600 CE include the Roman Empire and Han China. During the period of expansion in the Roman Empire, Roman leaders extended their rule over areas as far as North Africa and the Middle East. Although the Roman Empire successfully conquered these areas, the ideologies and religions followed there were resoundingly different from Christian ideologies. In Han China, a close replica of the Roman expansion occurred: it came to rule over groups of peoples who were not indigenously Chinese, which created a massive cultural wall. Lastly, an important factor that caused imperial decline and collapse was the government’s inability to control large populations.
This ties to the previous reason for imperial decline, which was that over-expansion caused collapse in empires. When empires expanded their territories, areas of power moved farther and farther away from the central government. For example, the Han dynasty was incapable of asserting its power over its large amount of territory because of revolts by the common people against taxes. Therefore, the central government had to rely on local leaders to take care of individual areas, which led these individuals to gain more power within their states. This led to the ultimate decline of the Han, because the central government was unable to control the now powerful local leaders, and one powerful leader ended up overthrowing the Han emperor, which ended the Han dynasty.
The reason why a government’s inability to control large populations ultimately caused decline is that it had to deal with conflicts from the outside and within. There are many reasons why imperial decline and collapse occurred between 600 BCE and 600 CE, but a few of the most significant are the inability of some empires to collect the necessary taxes, the over-expansion of territory, and a government’s inability to assert its power over a large population. |
(Natural News) Researchers from the University of Southampton have found ancient evidence suggesting that carbon dioxide levels in the atmosphere affected climate conditions approximately a million years ago, but these are not the results that modern day climatologists want to hear. Modern climate change studies desperately want to correlate rising carbon dioxide levels with “climate change.” However, it was a lack of carbon dioxide in the atmosphere that led to sweeping temperature changes about a million years ago.
The international research team used an “Earth system” model along with geochemical measurements to pinpoint changes in continental ice sheets. This narrowed in on a timeframe when the Earth experienced extreme dips in atmospheric carbon dioxide levels. These drops in CO2 coincided with glacial intervals that brought about extremely cold climatic conditions. The conditions lasted around 400,000 years in what is known as the Mid-Pleistocene Transition (MPT) period.
The researchers came across the findings when they discovered abnormal disruptions in the Earth’s Milankovitch cycles, which naturally repeat every 40,000 years. During naturally-occurring Milankovitch cycles, ice ages are part of a normal cycle caused by regular changes in the way the Earth orbits the sun. These natural cycles are also influenced by the way Earth spins on its axis in relation to the gravitational pull of other planets. Normally these cycles are predictable. The celestial changes cause climate to cycle from frigid glacial intervals, where continental ice covers much of North America and Europe, to warm interglacial climates that free up ice in Europe and North America.
The disruption in this natural cycle occurred about a million years ago, altering the Milankovitch cycle to a pattern of freeze and thaw over a longer time period of 100,000 years. This disruption was observed during a time when carbon dioxide levels were at their lowest. Dr. Tom Chalk of the University of Southampton explained that the Antarctic ice cores showed changes in atmospheric CO2 during this disruption in climate.
“CO2 was low when it was cold during the glacials and it was higher during the warm interglacials – in this way it acted as a key amplifier of the relatively minor climate forcing from the orbital cycles.”
Dr. Chalk noted that the ice core records are only measurable up to 800,000 years ago. In order to study carbon dioxide levels during the transition periods, the team had to devise a technique that examined the boron isotopic composition of the shells of ancient marine fossils. Their best bet was to study tiny marine plankton called “foraminifera.” These plankton once lived near the sea surface and harbored the chemical makeup of their environmental conditions when they swam the seas over a million years ago.
Professor Gavin Foster explained that the research team was able to use boron isotope measurements to obtain variable measurements in atmospheric CO2 from up to 1.1 million years ago. He explained that there were two main differences:
“[F]irstly, during the glacials before the MPT, CO2 did not drop as low as it did in the ice core record after the MPT, remaining about 20-40 parts per million (ppm) higher. Secondly, the climate system was also more sensitive to changing CO2 after the MPT than before.”
Looking further, NERC Independent Research Fellow Mathis Hain used a biogeochemical model to determine why glacial-age CO2 declined by 20-40 ppm. Their models determined that the lack of CO2 during the MPT coincided with a drop-off in dust to the Southern Ocean. During the normal glacial periods in the Milankovitch cycle, higher concentrations of dust brought necessary levels of iron to the Southern Ocean, encouraging the growth of phytoplankton. During the time-frame when there was less CO2 in the atmosphere, there wasn’t enough iron or phytoplankton and this locked more CO2 away in the deep ocean.
In other words, climate change during this era was caused by a complex series of factors including ocean currents, iron content of dust returning to the Southern Ocean, the subsequent loss of phytoplankton growth, the locking away of CO2 in the ocean bottoms, and the lack of CO2 returning to Earth’s atmosphere. The researchers commented that the less dusty climate conditions after this altered MPT could be caused by ice sheet formation and atmospheric circulation, too.
The complexity of the Earth’s climate, its natural cycles, and its relationship with celestial cycles should not be trivialized just to advance a sensationalist climate change agenda that blames human activity for the demise of the planet. For the most part, the Earth’s climate is beyond man’s control. (Related: Major climate change study just confirmed the climate was changing dramatically in the 1800s, long before the invention of the combustion engine.) |
Dreams have fascinated people since the beginning of time. They were first seen as messages sent by the Gods. Advances in medicine and the emergence of scientific thinking have shown that dreams are perfectly identifiable and explainable phenomena. The progress of science has allowed the creation of machines to measure neuronal interactions with precision during sleep. Scientists have conducted numerous studies that have shown that sleep and dreams are essential to the proper development of our brain. They play an essential role in the regulation of our emotions. But then how is a dream made? How do we dream? How does the brain produce a dream?
Dreams: the scientific explanation
To understand how a dream is made, you have to know that a dream is made by our brain when we sleep. Modern medicine has allowed scientists to approach the question of dreams in a more factual way. New medical imaging techniques, such as the electroencephalogram (EEG), make it possible to record brain activity during sleep. This device allows us to see the electrical activity between the neurons of the brain.
Numerous studies have been conducted with hundreds of patients to study the different phases of their sleep. From these experiments, scientists have been able to demonstrate that there are 2 phases in a sleep cycle in all individuals: slow-wave sleep or deep sleep and REM sleep.
The sleep cycle
A sleep cycle consists of a deep sleep phase and a REM sleep phase. Each night, we have about 4 to 6 sleep cycles. Each sleep cycle lasts about 90 minutes.
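A quick check of what those figures imply for a full night; a trivial sketch using only the numbers above:

```python
# Total sleep implied by 4 to 6 cycles of about 90 minutes each
cycle_minutes = 90
for cycles in (4, 5, 6):
    print(cycles, "cycles ->", cycles * cycle_minutes / 60, "hours")
# 4 cycles -> 6.0 hours, 5 -> 7.5 hours, 6 -> 9.0 hours
```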
Slow wave sleep or deep sleep : the memory
The slow-wave sleep phase, also called deep sleep, corresponds to a slowing down of brain activity. The electrical waves emitted by the brain are long and of low frequency. This slowing down is progressive. The neurons interact slowly with each other, but they do not stop interacting. The brain is not completely inactive, and certain areas of the brain are activated at times.
Scientists have been able to demonstrate that it is during the deep sleep phase that we record our memories. It is a learning phase where we consolidate what we have learned during the day. This is when memory is created.
In the first phase, the brain gradually “slows down”, the electroencephalographic trace revealing large, low-frequency waves. This is a sign that the neurons are sending fewer nerve impulses and that, overall, the brain is less active – even though a number of things happen during this phase, including the consolidation of memories.
REM sleep : the creativity
REM sleep is also called rapid eye movement sleep. During REM sleep the eyes make very rapid, jerky movements and roll in their sockets. During this phase of sleep, only the eyes move; the rest of the body is immobile, as if paralyzed.
On an electroencephalogram, the waves emitted by the brain are much faster than during slow-wave sleep (about 10 times faster). The neuronal activity is the same as for an awake brain.
During this phase, the brain forms elaborate images and visual constructions. It does not need any external stimuli. Moreover, during this phase of REM sleep, the primary senses do not function and do not send any information to the brain. The brain no longer makes the difference between dream and reality. There is no more notion of coherence because the prefrontal lobe is at rest.
What happens when we dream?
To understand how a dream is made, we must know how the brain produces the dream. When we dream, during our sleep, the brain is not at rest, quite the contrary. It functions almost normally. The neurons exchange electrical waves as they do in the waking state. During the slow-wave sleep phase, the brain slows down and records memories. During REM sleep, the brain is in full swing and activates several areas depending on what we experience in our dreams. The core of dream production is located at the back of the brain, in the posterior area. It is this area of the brain that creates the dream; it is also called the “hot zone”.
Why do we dream?
In addition to understanding and knowing how we dream, we must ask why we dream. What is the purpose of a dream?
Dreams are a means of expression for the brain. It is a moment that allows the brain to “relax” and to assimilate the information acquired during the day. Dreaming is an essential element that participates in the proper functioning of psychic activity. It is a moment of decompression for the brain which continues to function but without using the frontal lobe and the sensory stimuli transmitted by the body in the waking state.
To know more about the reasons why we dream you may consult our article on the subject.
What exactly is a dream?
A dream is a psychic creation that occurs during sleep. It can be analysed in a philosophical way. It can also be used in psychoanalysis where it is seen as the means of expression of our subconscious.
The interpretation of dreams is a science of its own which allows us to find the hidden meaning of our dreams. The scenarios set up by our unconscious can be surprising, fairy tale, dreamlike, fantastic and sometimes unreal. The precise analysis of the dream allows us to decipher the messages of our unconscious. The study of dreams can help solve everyday problems.
There are several types of dreams:
– Lucid dreams: these are dreams of full consciousness that we remember when we wake up.
– Current events dreams: these are dreams that deal with what is happening in your life at the moment.
– Recurring dreams: these are dreams that are repeated and that you will have several times in your life.
– Creative dreams: dreams of pure creation, they come out of your imagination and do not relate to reality.
– Premonitory dreams: these are dreams that are supposed to predict the future (they are very rare).
We explain in detail in this article what a dream is.
We have explained in this article with as much precision as possible how a dream is made. Science evolves every day, and new answers are brought to give a more precise explanation of this phenomenon. The dream is not just dreamlike, it has a real and scientific existence. The study of dreams, the interpretation of dreams, and the search for the meaning of dreams is a good way to learn to understand oneself and to solve one’s problems. This science is very much used in psychotherapy and psychoanalysis to help patients move forward and solve their problems. |
From past the orbit of Mars, the solar system looks a bit like the above. Earth is just a speck, not even visible here (at the end of the blue line). The sun is much larger than the earth, yet still seems small and far away. Its light shines equally in all directions.
Can we actually verify this? Can we prove the sun is not actually a spotlight shining on only half of a flat earth, like some people claim to believe?
Yes, we can do this in various ways but I like practical science experiments that are (relatively) easy to perform, so I'm going to focus on a few here, things that you can do yourself. We shall demonstrate:
- The size of the sun in the sky remains almost exactly the same all day long, meaning the relative change in distance is very small, so it's very far away.
- The Sun looks circular from every direction, so the Sun is a sphere
- Over a single day sunspots remain in about the same position as the sun moves from horizon to horizon, so the Sun is very far away.
- Over 26 days, sunspots move over the surface of the sun along the paths expected on a rotating sphere, so the sun is a sphere.
Now the sun is incredibly bright so you can't just take a normal photo. You need to use a filter in front of the lens so you can block out most of the light. Still though you need a good zoom to be able to measure the size well (and to see sunspots, more on which later). But you can certainly do all of this with a camera if you have the right equipment.
A practical alternative is to use something most people will have access to - a pair of binoculars (or a telescope). Now it goes without saying that you should never look at the sun through binoculars. Permanent eye damage could result. So instead of projecting an image of the sun permanently onto your retina, you can instead project it onto a piece of paper.
This is a 2x4 piece of wood, about three feet long, attached to a tripod with cable ties (you don't need the tripod, it's just convenient). Each end of the 2x4 has some cardboard stapled to it. The top piece has a hole cut in it, aligned with one side of the binoculars, which are firmly fixed to the 2x4 with tape (and cable ties). You point it at the sun, and an image forms on a piece of white paper on the bottom card. You focus this image using the binoculars' focus knob.
This gives you a very safe and pretty high quality image of the sun. You can measure how big it is, and see if it changes through the day (it doesn't). You can also see if it implausibly changes shape (it remains circular)
For measuring the size, you can use some squared paper, as it gives a nice visual record.
Because the distance from the binoculars to the paper is fixed, any change in the apparent size of the sun would be reflected in the size of the projected image. No such changes were observed, and the sun stayed the same size and shape from horizon to horizon.
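To see why a constant apparent size matters, here is a minimal sketch comparing the expected change in angular size for a distant sun against a hypothetical nearby "spotlight" sun. The spotlight numbers (50 km across, 5,000 km up) are assumed purely for illustration; the solar diameter and Earth-sun distance are standard values, not taken from the text above.

```python
import math

def angular_size_deg(diameter_km, distance_km):
    """Apparent angular diameter of a disc seen from a given distance."""
    return math.degrees(2 * math.atan((diameter_km / 2) / distance_km))

# Distant sun: ~1.39 million km across at ~1 AU. Between overhead and the
# horizon the distance changes by at most one Earth radius (~6371 km).
AU = 1.496e8
print(angular_size_deg(1.39e6, AU - 6371))   # ~0.532 degrees
print(angular_size_deg(1.39e6, AU + 6371))   # ~0.532 degrees (difference ~0.01%)

# Hypothetical nearby "spotlight" sun: 50 km wide, 5000 km up (assumed numbers).
# Overhead it is 5000 km away; 10,000 km of map distance later it is ~11,180 km away.
print(angular_size_deg(50, 5000))                     # ~0.57 degrees
print(angular_size_deg(50, math.hypot(5000, 10000)))  # ~0.26 degrees, less than half
```

A nearby sun would visibly shrink toward sunset; the projected image shows no such change.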
We normally think of the sun as just a very bright light, but if you look closely you can see some detail in the surface of the sun - sunspots. We can observe how these move through the day, and see if they indicate the Sun is a sphere.
Now for reference, I recommend you get a recent image of the sun from the SDO, here's the one corresponding with today's images.
Just a couple of small spots, but enough to work with. I've got the SDO app on my phone, and used that as a reference. You can just make out the two spots in each image.
Now at first glance it might look like the spots are in the wrong place, but because we are projecting with the binoculars rather than looking through them, the image is inverted: it's upside down. Also, the orientation of the SDO image isn't going to match the orientation of this image from Earth, as your latitude and the time of day will alter your viewing angle. So if we flip my image vertically and rotate it a bit you'll see that it lines up exactly. Drag the slider below to see this.
So what happens to sunspots over the course of the day? Very little. They rotate based on the angle of the viewer, like the moon and the stars do (this is known as field rotation - you are rotating, so your field of view seems to rotate). They also move slightly to the right due to the rotation of the Sun. But basically the same side of the Sun is facing you all day long; and since the same side is facing every point on Earth at once, the Sun must be a very large and very distant sphere - many times further away than the size of the Earth.
One thing we should be able to do after taking a few observations is confirm that the SDO images match our images. This is what the sun looks like from everywhere in the world. It has been verified countless times, because taking photos of sunspots is something people do hundreds of times a day, and something you can do yourself. But if you really want to, you can verify the following just by taking lots of photos over a couple of days.
Here's two days of photos of the sun, made into a time-lapse movie. It covers the 48 hours prior to the above SDO image, which we verified:
You'll notice that the sun is rotating from west to east, to the right. People often don't realize that the sun rotates. Viewed from Earth the Sun completes a full revolution every 26.24 days. So this two day period is about 1/13th of a full rotation (although with no fixed surface this does not have quite the same meaning as the rotation of a rock planet).
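The fraction of a rotation covered in those two days is easy to check; a one-off arithmetic sketch using only the 26.24-day figure above:

```python
rotation_period_days = 26.24   # apparent rotation period of the Sun seen from Earth
elapsed_days = 2

fraction = elapsed_days / rotation_period_days
print(fraction, 1 / fraction, 360 * fraction)
# ~0.076 of a rotation, i.e. about 1/13th, or roughly 27 degrees of solar longitude
```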
What is significant here is that we can see the rotation in the SDO images (and verify it with our own images). We can also see that this is the rotation of a sphere by observing sunspots (particularly larger ones in groups) and seeing that they move exactly as we would expect on a rotating sphere. You can see this somewhat with the sunspot on the right as it approaches the edge. However to really get a sense of the Sun's rotation (and roundness) have a look at a longer period, with more sunspots.
Now you might argue this is "just an animation", but it's just a series of SDO images, images that are quite easily independently verified as I did above. This is verifiably what the sun looks like from everywhere on the globe, and verifiably how sunspots move around it.
One more thing we can do if we take photos of the sun with the same settings is compare them, and see they don't change though the day.
When there's some nice visible sunspots we can see how much the sun's image appears to rotate from my perspective (because of the rotation of the earth). Again, note the absolute lack of any change in size.
Hence we have demonstrated that the sun is a large distant rotating sphere, and by extension we have demonstrated yet again that the Earth is not flat. |
If positioned exactly on the top, the marble will stay in place, but the slightest push will make it move further and still further from equilibrium, until it falls off. By contrast, the equilibrium at L4 or L5 is like that of a marble at the bottom of a spherical bowl: given a slight push, it rolls back again. Thus spacecraft at L4 or L5 do not tend to wander off, unlike those at L1 and L2, which require small onboard rockets to nudge them back into place from time to time. |
Here we will show that L4 and L5 of the Earth-Moon system are positions of equilibrium in a frame of reference rotating with the Moon, assuming that the Moon's orbit is circular. Non-circular orbits and the question of stability are beyond the scope of this discussion.
Tools of the Calculation
We will need Newton's law of gravitation and the fact that the center of the Moon's orbit is only approximately the center of the Earth. The actual center of the orbit is the center of mass (or "center of gravity") of the Earth-Moon system (see end of section 11).
As shown in section 25, if m is the mass of the Moon and M that of the Earth, the center of mass is the point that divides the Earth-Moon line by a ratio m:M. Say A is the center of the Earth, B the one of the Moon, and c is the distance between the two (drawing). Then if D is the center of mass,
AD = cm/(m+M)
DB = cM/(m+M)
and it is easy to check that the sum of these distances is c and their ratio is m/M. An alternative form of DB (which will be useful) is obtained by dividing numerator and denominator by M:
DB = c/(1 + m/M)
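As a numerical illustration (the masses and distance below are typical published values for the Earth-Moon system, assumed here rather than taken from the text), these formulas place the center of mass D inside the Earth:

```python
# Rough Earth-Moon numbers (assumed values for illustration)
M = 5.972e24     # mass of the Earth, kg
m = 7.342e22     # mass of the Moon, kg
c = 384_400      # Earth-Moon distance, km

AD = c * m / (m + M)    # Earth's center to the center of mass
DB = c / (1 + m / M)    # Moon to the center of mass

print(round(AD), round(DB), round(AD + DB))
# ~4670 km, ~379700 km, and their sum is c = 384400 km.
# Since Earth's radius is ~6371 km, the point D lies below the Earth's surface.
```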
From trigonometry we will need the "law of sines". Suppose we are given a triangle ABC of arbitrary size and shape (drawing). The angles at the three corners will be named A, B and C as well, while the lengths of the sides facing them are denoted a, b and c. Then the law of sines says
sinA/a = sinB/b = sinC/c
Let us prove it for the angles A and B, by drawing from the third corner C a line perpendicular to the opposite side of the triangle. Let h be the length of that line. Then
sinA = h/b     so that     b sinA = h
sinB = h/a     so that     a sinB = h
Therefore b sinA = a sinB, and dividing both sides by ab gives the required result. To prove that the angle C also fulfils the condition, we repeat the calculation with a perpendicular line drawn from A or from B.
We also need a trigonometrical identity for the sine of the sum of two angles. If those angles are denoted by the Greek letters α and β
sin(α + β) = sinα cosβ+ cosα sinβ
The proof of this identity is given separately.
Finally, we will need the resolution of vectors (see sec. 14). Suppose a force F acts on an object at some point C, making an angle α with the direction of a given line, marked here with R (drawing).
Suppose also that we need to resolve F into components parallel and perpendicular to R. In the triangle CPQ, if CP represents the force F, then CQ and QP represent its parallel and perpendicular components. Then since
sin α = QC/CP
cos α = QP/CP
parallel force = CQ = F sinα
perpen. force = QP = F cosα
Conditions of Equilibrium
To the diagram drawn earlier to illustrate the center of mass of the Earth-Moon system, we add a spacecraft at some point C, with distances b from the Earth, a from the Moon and R from the center of mass D. As in the derivation of the law of sines, we name (A,B,C) the angles at the corner points marked with those letter, and (a,b,c) will be the lengths of the sides facing the corners (A,B,C).
We furthermore label as (α,β)
the two parts into which R divides the angle C.
Check all these out before continuing.
The question to be answered is: Under what conditions does the satellite at C maintain a fixed position relative to the Earth and Moon?
The calculation is best handled in the frame rotating with the Moon. In that frame, if a satellite at point C is in equilibrium, it will always keep the same distance from the Moon and from Earth. The center of rotation is the point D--even the Earth rotates around it--and if the spacecraft at C is in equilibrium, all three bodies have the same orbital period T. If C is motionless in the rotating frame, there exists no Coriolis force (it only acts on objects moving in that frame), but the spacecraft will sense a centrifugal force, as will the Moon and the Earth.
Let us collect equations
--the ones which the distances and angles must obey.
(1) Note first that the radius of rotation R of the spacecraft will usually differ from that of the Moon, which is c/(1 + m/M).
Denoting the rotational velocity of the Moon by V and that of the spacecraft by v, since distance = velocity x time
2π R = vT          and therefore          2π/T = v/R
2π c/(1 + m/M) = VT          and therefore          2π/T = (V/c)(1 + m/M)
The two expressions equal to 2π/T must also be equal to each other, hence
(1) v/R = (V/c)(1 + m/M)
This merely expresses the well-known observation that if two objects share a rotation, the one more distant from the axis rotates faster, and their velocities are proportional to their distances from the axis.
(2) The centrifugal force on the Moon is
mV2/[c/(1 + m/M)] = m(V2/c)(1 + m/M)
and it is balanced by the pull of the Earth
GmM/c2
where G is Newton's constant of gravitation, first measured by Henry Cavendish. In a circular orbit, the two must be equal, balancing each other (as in the calculation in section 20):
GmM/c2 = m(V2/c)(1 + m/M)
Dividing both sides by (m/c) gives our second equation:
(2) GM/c = V2 (1 + m/M)
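Equation (2) can be checked numerically. A small sketch with typical Earth-Moon values (the masses, distance and 27.322-day sidereal month below are assumed standard figures, not given in the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
m = 7.342e22         # mass of the Moon, kg
c = 3.844e8          # mean Earth-Moon distance, m
T = 27.322 * 86400   # sidereal month, s

# Moon's orbital speed about the center of mass D (orbit radius c/(1 + m/M))
V = 2 * math.pi * (c / (1 + m / M)) / T

lhs = G * M / c
rhs = V**2 * (1 + m / M)
print(lhs, rhs)      # the two sides agree to within about 1 percent
```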
(3) Let m' be the mass of the spacecraft. The centrifugal force on it is
m'v2/R
and that must be balanced by the attracting forces Fe of the Earth and Fm of the Moon. However, only the components of those forces along the line R are effective in opposing the centrifugal force. Hence
m'v2/R = Fm cosβ + Fe cosα
Now by Newton's theory of gravitation
Fm = G m'm/a2
Fe = G m'M/b2
Inserting these in the upper equation and dividing both sides by m' gives the 3rd equation:
(3) v2/R = (Gm/a2) cosβ + (GM/b2) cosα
(4) Finally, the forces pulling the spacecraft in directions perpendicular to R must cancel. Otherwise, the spacecraft would be pulled by the stronger of the two and would not stay at C, that is, would no longer be in equilibrium. That requires
Fm sinβ = Fe sinα
Substituting and dividing both sides by Gm' leaves
(4) (m/a2) sinβ = (M/b2) sinα
Collecting all equations once more:
- v/R = (V/c)(1 + m/M)
- GM/c = V2 (1 + m/M)
- v2/R = (Gm/a2) cosβ + (GM/b2) cosα
- (m/a2) sinβ = (M/b2) sinα
The quantities appearing here are of 3 types:
- Some are known constants--G, m and M. They have given values and we do not expect them to change.
- Some are distances--R, a, b and c--having to do with the positions of the Earth, Moon and spacecraft in space. The angles (α,β) depend on those distances too, but we won't need the exact relationships for this.
- And some are velocities, namely v and V.
Let us eliminate the velocities, so that the conditions we are left with are purely geometrical, involving only distances and angles.
We already carried out an elimination earlier. We had two equations which involved the orbital period T, each was used to express 2π/T, and by setting those two expressions equal to each other, we obtained a single expression which did not contain T (we always "give up one equation" in an elimination process--start with two, end with one).
The plan then is as follows. We will eliminate V between (1) and (2), leaving an equation involving only v. Then we will eliminate v between it and (3), winding up with an equation not involving velocities--plus (4), which also contains neither v nor V.
From (1), squaring both sides
v2/R2 = (V2/c2) (1 + m/M)2
Multiply both sides by c2 and divide them by (1 + m/M):
v2 (c2/R2) / [1 + m/M] = V2 (1 + m/M)
But by (2) GM/c = V2 (1 + m/M)
(5) v2 (c2/R2) /[1 + m/M] = GM/c
and V has just been eliminated. Now multiply both sides by (1 + m/M), divide them by c2 and multiply them by R
v2/R = (GM/ c3)R (1 + m/M)
But by (3)
v2/R = (Gm/a2) cosβ + (GM/b2) cosα
Therefore (moving a factor of 1/c to R)
(GM/c2) (R/c) (1 + m/M) = (Gm/a2) cosβ + (GM/b2) cosα
Dividing everything by GM gives one of the equations we are left with, while the other one is (4):
(6) (1/c2) (R/c) (1 + m/M) = (1/a2)(m/M) cosβ + (1/b2) cosα
(4) (m/a2) sinβ = (M/b2) sinα
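The elimination of V and v carried out above can also be checked symbolically. A minimal sketch, assuming the sympy library is available; the symbols mirror those in the text:

```python
import sympy as sp

G, M, m, c, R = sp.symbols('G M m c R', positive=True)

V = sp.sqrt(G * M / (c * (1 + m / M)))   # V from equation (2)
v = R * (V / c) * (1 + m / M)            # v from equation (1)

# The eliminated form used just before (6): v^2/R = (GM/c^3) R (1 + m/M)
diff = v**2 / R - (G * M / c**3) * R * (1 + m / M)
print(sp.simplify(diff))                 # prints 0, confirming the algebra
```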
Let us go back to the last drawing, reproduced here again for convenience. We will denote by (A,B,C) not only the corners of the triangle but also the angles formed there. Obviously
C = α + β
Let R1 = BD be the distance from the Moon to the center of gravity (or center of mass) point D, which stays at rest in the Earth-Moon system (see section #25); it is just a little less than the Earth-Moon distance AB = c. As noted in the drawing
R1 = c [M / (M+m)] = c / [1 + (m/M)]
So (6) becomes
(1/c2) (R/R1) = (1/a2)(m/M) cosβ + (1/b2) cosα
Substituting from (4)
(m / M) = (a2 sinα / b2 sinβ)
and cancelling a factor a2 along the way
(1/c2) (R/R1) = (1/b2) [(sinα cosβ / sinβ) + cosα]
= (1/b2 sinβ) [sinα cosβ + cosα sinβ]
= (1/b2 sinβ) sin(α + β)
= (1/b2 sinβ) sin C          (9)
By the law of sines in the triangle BCD
sinβ / R1 = sin B / R
so that
sinβ (R/R1) = sin B
Multiplying both sides of (9) by sinβ and using this result gives
sin B/c2 = sin C/b2
or
sin B / sin C = c2/b2
But from the law of sines in the triangle ABC
sin B / sin C = b / c
Setting the two expressions equal gives b/c = c2/b2, and therefore
b3 = c3
b = c
An important fact of this calculation is that neither m nor M appear in the final result. We can therefore revise our notation, making M the mass of the Moon and m the mass of the Earth. (The point D in the diagram would be shifted, but it is inaccurate anyway, actually located below the Earth's surface). In the revised scheme, b stands for the distance from the Moon to the spacecraft, originally designated "a".
The calculation now shows that the spacecraft-Moon distance also equals the Earth-Moon distance c. It follows that ABC is an equilateral triangle.
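This conclusion can also be verified numerically: placing a test mass at the equilateral point and summing the gravitational and centrifugal accelerations in the rotating frame gives essentially zero. A sketch with assumed Earth-Moon values (the specific numbers are not part of the derivation above):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the Earth, kg
m = 7.342e22     # mass of the Moon, kg
c = 3.844e8      # Earth-Moon distance, m

d = c * m / (m + M)                     # center of mass D; Earth at (0,0), Moon at (c,0)
omega = math.sqrt(G * (M + m) / c**3)   # rotation rate of the frame

L4 = (c / 2, c * math.sqrt(3) / 2)      # equilateral-triangle point

def net_accel(x, y):
    """Gravity of Earth and Moon plus the centrifugal acceleration about D."""
    ax = ay = 0.0
    for (px, py), mass in (((0.0, 0.0), M), ((c, 0.0), m)):
        dx, dy = px - x, py - y
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += G * mass * dx / r3
        ay += G * mass * dy / r3
    ax += omega**2 * (x - d)            # centrifugal term, directed away from D
    ay += omega**2 * y
    return ax, ay

print(net_accel(*L4))   # both components are essentially zero (round-off only)
```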
(Thanks to a French message by Penn Gwenn with a simpler version of the equations from (7) on, and to Dr. Guy Batteur for communicating it to me -- DPS)
As already noted, because L4 and L5 are stable points of equilibrium, they have been proposed as sites of large self-contained "space colonies", an idea developed and advocated by the late Gerard O'Neill. In 1978 Bill Higgins and Barry Gehm even wrote for would-be colonists "The L5 Song" to the tune of "Home on the Range." Here is its beginning:
Home on Lagrange
Oh give me a locus
FYI, the "three body problem" is the solution of the motion of three bodies under their mutual attraction. It is famous for having stymied astronomers for many years, and the king of Sweden even offered a prize to whoever solved it: the prize was claimed by the French mathematician Henri Poincare, who proved that in general it was insoluble--that no explicit formula existed that predicted the motion for the indefinite future. In today's terminology one would say that the general three-body motion has chaotic properties. Even the general "restricted three body problem" where one of the bodies is very small--e.g. Earth, Moon and spacecraft--is insoluble, although specific solutions exist, like the ones in which the spacecraft is positioned at one of the Lagrangian points.
Where the gravitons focus
Where the three-body problem is solved
Where the microwaves play
Down at 3 degrees K
And the cold virus never evolved
Home, home on Lagrange
Where the space debris always collects...
About space colonies at Lagrangian points:
- Gerard K. O'Neill, "The Colonization of Space", Physics Today, September 1974, p. 32.
- Gerard K. O'Neill, "The High Frontier", William Morrow and Co., NY, 1977; Anchor Books (Doubleday) 1982.
About the L4 and L5 points and about asteroids locked into the neighborhoods of L4 and L5 of the Sun-Jupiter system: "When Trojans and Greeks Collide" by I. Vorobyov, Quantum, September-October 1999, p. 16-19. That article contains an alternative proof of the equilibrium of motion at L4 and L5, more general (no limitation on the masses) but using a rotating frame of reference and two-dimensional vectors. The calculation can be found in section (34-c) of this web site. |
by Allison Vincent
When you visit the CREW lands you’ll come across invasive plant species, and whether you’re aware of them or not, they’re there! Some invasive species are beautiful, like the caesar weed, and you might find yourself wondering why the land managers have it out for them. What could a few plants possibly do to impact the broader ecosystem?
Let’s begin with a few definitions, because invasive species are best understood by discussing the meaning of native and non-native and their interactions with humans. The scientific community agrees that native species wandered into an area naturally and long ago – some time during or after the mid 16th century at the time of European contact with the unexplored land across the Atlantic – either by wind, sea, birds, animals or other natural factors. As species expand or contract their native territory, they go through a process called “range change”. The native species then go about the process of adapting to the changed ecosystem, which in geographic terms was a feat considering that much of Florida was once the bottom of the ocean.
Non-native species, on the other hand, are introduced to a new environment either intentionally or by accident. The distinction in defining invasive takes non-native issues one step further because invasive species, in addition to being introduced by humans, often pose an environmental or economic threat and may cause harm to humans.
Still, what exactly is so negative about the impact of these invasive plant species, given they all make oxygen and absorb carbon dioxide? These tough questions are outlined by land management professionals who rely on current science to categorize the level of impact invasive species have on lands they manage, like the CREW lands. For instance, some invasive species, like the Melaleuca tree, will overtake wetlands and absorb an inordinate amount of water if not treated, which is exactly why they were brought to southwest Florida – to drain the swamps.
In turn, land managers use mechanical, chemical and biological control efforts to manage the spread of invasives, because without naturally occurring factors that limit their impact – like weather, diseases or insect pests – invasive species can disrupt the balance of the ecosystem they’ve supplanted, often out-competing and displacing the native species. The reduction in biodiversity can adversely impact wildlife and alter natural processes such as fire and water flow, all of which directly affect the human population which relies on those same resources.
Let’s talk more about the unintended impact that invasive species have on human populations, specifically in south Florida. Primarily, invasives threaten remaining wetland environments that provide a freshwater recharge of our drinking water sources in the underground aquifers. Native species have had time to adjust to the particular conditions of the Florida environment, so when the wetland composition goes from a natural state to a place overrun with counterproductive species, some of our basic needs – like water and safe shelter – are drastically affected.
Protecting these wetland areas provides habitat for wildlife that in turn generates billions of dollars a year in expenditures by wildlife enthusiasts, hunters and anglers. The financial benefits of preserving the complicated ecosystems of south Florida are well documented and worthwhile. Without the wetland environment to slow the flow of rainwater so that it can be absorbed into the ground and replenish the drinking water supply in the aquifers, Florida would not be able to sustain its current significant population, much less what we expect to see in the future.
The CREW Project began watershed preservation in the late 80s and the CREW lands will continue to be preserved in perpetuity. Several large-scale projects, like the hydrologic restoration project completed at CREW Flint Pen Strand and the ongoing CREW Marsh trails restoration focused on Carolina willow, provide visible examples of the land management process you can witness in person on the trails over time. The 60,000 acres of CREW preserve land for water retention, for wildlife, and for all the other ways these important resources overlap. To protect this land and the water it stores for the next generation, we all must partner to fund this preservation, to protect it, and to educate everyone about it. |
Most people develop tinnitus as a symptom of hearing loss. When you lose hearing, your brain undergoes changes in the way it processes sound frequencies. A hearing aid is a small electronic device that uses a microphone, amplifier, and speaker to increase the volume of external noises. This can mollify neuroplastic changes in the brain’s ability to process sound.
Tinnitus is the perception of sound when no actual external noise is present. While it is commonly referred to as “ringing in the ears,” tinnitus can manifest many different perceptions of sound, including buzzing, hissing, whistling, swooshing, and clicking. In some rare cases, tinnitus patients report hearing music. Tinnitus can be both an acute (temporary) condition or a chronic (ongoing) health malady.
Research regarding using cognitive behavioral therapy for tinnitus shows that tolerance to tinnitus can be facilitated by “reducing levels of autonomic nervous system arousal, changing the emotional meaning of the tinnitus, and reducing other stresses.” (6) It’s been found that there’s some overlap in anxiety and tinnitus due to an association between subcortical brain networks involved in hearing sounds, attention, distress and memory functions.
In many cases, tinnitus is caused by hyperactivity (or too much activity) in the brain’s auditory cortex. “When there’s damage or a loss of input in the ear [such as hearing loss, head trauma, or a blood vessel problem], the brain tries to turn up certain channels in order to compensate,” Dr. Kilgard explains. When the brain doesn’t get that tuning quite right, the result is tinnitus.
Often people bring in very long lists of medications that have been reported, once or twice, to be associated with tinnitus. This unfortunate behavior makes it very hard to care for these patients -- as it puts one into an impossible situation where the patient is in great distress but is also unwilling to attempt any treatment. Specialists who care for patients with ear disease, usually know very well which drugs are problems (such as those noted above), and which ones are nearly always safe.
Even with all of these associated conditions and causes, some people develop tinnitus for no obvious reason. Most of the time, tinnitus isn’t a sign of a serious health problem, although if it’s loud or doesn’t go away, it can cause fatigue, depression, anxiety, and problems with memory and concentration. For some, tinnitus can be a source of real mental and emotional anguish.
It’s the same mechanism that’s happening in people who feel a phantom limb sensation after losing a limb, explains Susan Shore, PhD, a professor of otolaryngology, molecular physiology, and biomedical engineering at the University of Michigan in Ann Arbor. With tinnitus the loss of hearing causes specific brain neurons to increase their activity as a way of compensating, she explains. “These neurons also synchronize their activity as they would if there were a sound there, but there is no external sound,” she adds.
Ocean wave sounds are designed to create a soothing environment, like that of serene ocean waves. Miracle-Ear hearing aids offer four different ocean wave signals to choose from, so that you can find the one you find most relaxing. Ocean waves are an alternative to static noise and can be a stress-free type of tinnitus treatment. Your hearing care specialist will work with you to find the signal that offers the most relief.
For some people, the jarring motion of brisk walking can produce what is called a seismic effect which causes movement in the small bones or contractions in the muscles of the middle ear space. You can experiment to find out if this is the cause by walking slowly and smoothly to see if the clicking is present. Then, try walking quickly and with a lot of motion to see if you hear the clicking. You can also test for the seismic effect by moving your head up and down quickly. |
Chapter 1.0 The fundamentals of sound control
1.1 What is sound?
WHAT IS SOUND?
When designing a shared space, it is important to construct an acoustic environment that serves its intended function while meeting the collective needs of its inhabitants. But before getting started, it’s important to understand some fundamentals — like, what is sound?
Sound is produced when something vibrates. The vibrating sound source sets particles in the air or other surrounding medium into vibrational motion. According to physics, these audible vibrations are transmitted as sound waves which consist of areas of both high and low air pressure.
When sound waves reach the human ear, they travel down the ear canal and set the eardrum vibrating at the same frequencies. The small bones of the middle ear pass these vibrations to the inner ear, which converts them into nerve impulses that are then carried to the brain for interpretation. The two most influential factors in how humans experience exposure to sound are frequency and sound pressure.
Sound travels as waves of compressed air. A single wavelength is calculated by measuring the distance between one crest and the next. The wavelength determines the frequency of the sound. A sound’s frequency, measured in hertz (Hz), represents the rate at which the sound vibrates. It’s this vibrational rate that determines the pitch of the sound. Sound that vibrates quickly has shorter wavelengths and a higher frequency, while sound vibrating more slowly has longer wavelengths and a lower frequency.
The generally accepted standard hearing range for humans is 20 to 20,000 Hz, and most human speech occurs at frequencies between 500 and 2000 Hz. Frequencies below 20 Hz are felt rather than heard. Low frequency sounds include bass notes, while high frequency sounds include bells and cymbals. When humans experience hearing loss, typically due to ageing, high frequency sounds especially become harder to hear.
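The relation between frequency and wavelength can be made concrete with a short sketch; the 343 m/s speed of sound in room-temperature air is an assumed standard value, not stated above:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C (assumed)

for freq_hz in (20, 500, 2000, 20000):
    wavelength_m = SPEED_OF_SOUND / freq_hz   # wavelength = speed / frequency
    print(f"{freq_hz:>6} Hz  ->  {wavelength_m:.3f} m")
# 20 Hz -> ~17 m, 500 Hz -> ~0.69 m, 2000 Hz -> ~0.17 m, 20000 Hz -> ~0.017 m
```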
”Acoustical design is about material quality and material position. Treatment should be selected first for the acoustics, and second for the aesthetics.”
— Dr Naglaa Sami Adbel Aziz Mahmoud
The other key aspect of sound is sound pressure. The sound pressure level is commonly measured in decibels (dB), which represent the effective pressure of a sound relative to a reference value. Most human speech occurs at around 60 dB. Regular and prolonged exposure to sounds above 85 dB is considered hazardous to human health and wellbeing. Decibels are expressed on a non-linear logarithmic scale: a tenfold increase in sound intensity corresponds to an increase of 10 dB (a tenfold increase in sound pressure corresponds to 20 dB). Do not confuse sound pressure with loudness, which is a subjective measure of sound.
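A small sketch of the decibel arithmetic; the 20 micropascal reference pressure is the standard value assumed here, not given in the text:

```python
import math

P_REF = 20e-6    # reference sound pressure, 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB relative to 20 uPa."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(0.02))   # ~60 dB, roughly conversational speech
print(spl_db(0.2))    # ~80 dB: ten times the pressure adds 20 dB
# A tenfold increase in intensity (power) adds 10 dB, since
# intensity level = 10 * log10(I / I_ref).
```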
When sound becomes noise
The main difference between a sound and a noise is how the vibration is perceived by the individual. Whether the source is a piece of music, a conversation between coworkers or an active construction site outside the window—what one person considers to be tolerable may be considered disruptive or annoying to the next. |
Some Guidelines for using the Slave Trade pages
At first glance these documents can seem unreadable, and look very unfamiliar.
This pack includes transcripts of the handwritten documents, to provide some help at the beginning. Before 1800, the majority of documents are written very neatly - but not always using familiar letter shapes. For example, a "long s", which looks very like an f, is used in early printed as well as handwritten sources. Even after the "short s" we use today became common, many people still wrote double s as fs rather than ss.
The consistency of the handwriting helps in understanding a document. Once a shape is correctly identified it can be matched in other parts of the document - rather like cracking a code!
However, particularly after 1800, difficulty in reading a document can have as much to do with bad handwriting as unfamiliar letters.
Standard spelling is a relatively new idea, and some of the words found in older documents can look bizarre to modern eyes. If the word is read out loud, it is often transformed!
Some words have either changed their meaning altogether, or fallen out of use. However, large dictionaries contain many redundant words, and there are other helpful reference books in the library, such as guides to particular trades or dialects.
Understanding why the document was created and how it relates to other documents in the collection can transform your understanding of the information in the document.
The lists of the documents often have helpful introductions written by the archivist who listed them and will show you what else the collection contains. The staff on duty in the searchroom are happy to answer questions. You may even need to look at similar examples to help identify the background to the document. You may discover something new! |
A waveguide is a special form of transmission line consisting of a hollow, metal tube. The tube wall provides distributed inductance, while the empty space between the tube walls provides distributed capacitance: Figure below
Waveguides conduct microwave energy at lower loss than coaxial cables.
Waveguides are practical only for signals of extremely high frequency, where the wavelength approaches the cross-sectional dimensions of the waveguide. Below such frequencies, waveguides are useless as electrical transmission lines.
When functioning as transmission lines, though, waveguides are considerably simpler than two-conductor cables—especially coaxial cables—in their manufacture and maintenance. With only a single conductor (the waveguide’s “shell”), there are no concerns with proper conductor-to-conductor spacing, or of the consistency of the dielectric material, since the only dielectric in a waveguide is air. Moisture is not as severe a problem in waveguides as it is within coaxial cables, either, and so waveguides are often spared the necessity of gas “filling.”
Waveguides may be thought of as conduits for electromagnetic energy, the waveguide itself acting as nothing more than a “director” of the energy rather than as a signal conductor in the normal sense of the word. In a sense, all transmission lines function as conduits of electromagnetic energy when transporting pulses or high-frequency waves, directing the waves as the banks of a river direct a tidal wave. However, because waveguides are single-conductor elements, the propagation of electrical energy down a waveguide is of a very different nature than the propagation of electrical energy down a two-conductor transmission line.
All electromagnetic waves consist of electric and magnetic fields propagating in the same direction of travel, but perpendicular to each other. Along the length of a normal transmission line, both electric and magnetic fields are perpendicular (transverse) to the direction of wave travel. This is known as the principal mode, or TEM (Transverse Electric and Magnetic) mode. This mode of wave propagation can exist only where there are two conductors, and it is the dominant mode of wave propagation where the cross-sectional dimensions of the transmission line are small compared to the wavelength of the signal. (Figure below)
Twin lead transmission line propagation: TEM mode.
At microwave signal frequencies (between 100 MHz and 300 GHz), two-conductor transmission lines of any substantial length operating in standard TEM mode become impractical. Lines small enough in cross-sectional dimension to maintain TEM mode signal propagation for microwave signals tend to have low voltage ratings, and suffer from large, parasitic power losses due to conductor “skin” and dielectric effects. Fortunately, though, at these short wavelengths there exist other modes of propagation that are not as “lossy,” if a conductive tube is used rather than two parallel conductors. It is at these high frequencies that waveguides become practical.
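The rule of thumb that the wavelength must approach the waveguide's cross-sectional dimensions can be made concrete with the standard cutoff-frequency formula for the dominant TE10 mode of a rectangular guide, fc = c / (2a), where a is the guide's wide inside dimension. A minimal sketch; the WR-90 width used below is simply an illustrative example:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def te10_cutoff_hz(width_m: float) -> float:
    """Cutoff frequency of the dominant TE10 mode of a rectangular waveguide.
    Signals below this frequency will not propagate down the guide."""
    return C / (2 * width_m)

# A WR-90 guide (22.86 mm wide, a common X-band size) cuts off near 6.6 GHz,
# so it is useful for signals from roughly 8 GHz upward.
print(f"{te10_cutoff_hz(0.02286) / 1e9:.2f} GHz")
```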
When an electromagnetic wave propagates down a hollow tube, only one of the fields—either electric or magnetic—will actually be transverse to the wave’s direction of travel. The other field will “loop” longitudinally to the direction of travel, but still be perpendicular to the other field. Whichever field remains transverse to the direction of travel determines whether the wave propagates in TE mode (Transverse Electric) or TM (Transverse Magnetic) mode. (Figure below)
Waveguide (TE) transverse electric and (TM) transverse magnetic modes.
Many variations of each mode exist for a given waveguide, and a full discussion of this subject is well beyond the scope of this book.
Signals are typically introduced to and extracted from waveguides by means of small antenna-like coupling devices inserted into the waveguide. Sometimes these coupling elements take the form of a dipole, which is nothing more than two open-ended stub wires of appropriate length. Other times, the coupler is a single stub (a half-dipole, similar in principle to a “whip” antenna, 1/4λ in physical length), or a short loop of wire terminated on the inside surface of the waveguide: (Figure below)
Stub and loop coupling to waveguide.
In some cases, such as a class of vacuum tube devices called inductive output tubes (the so-called klystron tube falls into this category), a “cavity” formed of conductive material may intercept electromagnetic energy from a modulated beam of electrons, having no contact with the beam itself: (Figure below)
Klystron inductive output tube.
Just as transmission lines are able to function as resonant elements in a circuit, especially when terminated by a short-circuit or an open-circuit, a dead-ended waveguide may also resonate at particular frequencies. When used as such, the device is called a cavity resonator. Inductive output tubes use toroid-shaped cavity resonators to maximize the power transfer efficiency between the electron beam and the output cable.
A cavity’s resonant frequency may be altered by changing its physical dimensions. To this end, cavities with movable plates, screws, and other mechanical elements for tuning are manufactured to provide coarse resonant frequency adjustment.
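For a simple rectangular cavity, the resonant frequency follows directly from its dimensions. The sketch below evaluates the standard formula for an air-filled rectangular cavity; the 5 cm × 2.5 cm × 5 cm dimensions are chosen purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def rect_cavity_resonance_hz(a_m: float, b_m: float, d_m: float,
                             m: int = 1, n: int = 0, p: int = 1) -> float:
    """Resonant frequency of the (m, n, p) mode of an air-filled
    rectangular cavity with width a, height b, and length d."""
    return (C / 2) * math.sqrt((m / a_m) ** 2 + (n / b_m) ** 2 + (p / d_m) ** 2)

# A 5 cm x 2.5 cm x 5 cm cavity resonates near 4.2 GHz in its dominant TE101 mode;
# shrinking any dimension pushes the resonance higher.
print(f"{rect_cavity_resonance_hz(0.05, 0.025, 0.05) / 1e9:.2f} GHz")
```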
If a resonant cavity is made open on one end, it functions as a unidirectional antenna. The following photograph shows a home-made waveguide formed from a tin can, used as an antenna for a 2.4 GHz signal in an “802.11b” computer communication network. The coupling element is a quarter-wave stub: nothing more than a piece of solid copper wire about 1-1/4 inches in length extending from the center of a coaxial cable connector penetrating the side of the can: (Figure below)
Can-tenna illustrates stub coupling to waveguide.
A few more tin-can antennae may be seen in the background, one of them a “Pringles” potato chip can. Although this can is of cardboard (paper) construction, its metallic inner lining provides the necessary conductivity to function as a waveguide. Some of the cans in the background still have their plastic lids in place. The plastic, being nonconductive, does not interfere with the RF signal, but functions as a physical barrier to prevent rain, snow, dust, and other physical contaminants from entering the waveguide. “Real” waveguide antennae use similar barriers to physically enclose the tube, yet allow electromagnetic energy to pass unimpeded.
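The quarter-wave stub length quoted for the can-tenna can be checked with a one-line calculation. A minimal sketch, assuming free-space propagation:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_stub_m(frequency_hz: float) -> float:
    """Free-space quarter-wavelength for a given frequency."""
    return C / frequency_hz / 4

stub = quarter_wave_stub_m(2.4e9)
# ~3.12 cm, or about 1.23 inches -- consistent with the "about 1-1/4 inch" stub above.
print(f"{stub * 100:.2f} cm (~{stub / 0.0254:.2f} inches)")
```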
- Waveguides are metal tubes functioning as “conduits” for carrying electromagnetic waves. They are practical only for signals of extremely high frequency, where the signal wavelength approaches the cross-sectional dimensions of the waveguide.
- Wave propagation through a waveguide may be classified into two broad categories: TE (Transverse Electric), or TM (Transverse Magnetic), depending on which field (electric or magnetic) is perpendicular (transverse) to the direction of wave travel. Wave travel along a standard, two-conductor transmission line is of the TEM (Transverse Electric and Magnetic) mode, where both fields are oriented perpendicular to the direction of travel. TEM mode is only possible with two conductors and cannot exist in a waveguide.
- A dead-ended waveguide serving as a resonant element in a microwave circuit is called a cavity resonator.
- A cavity resonator with an open end functions as a unidirectional antenna, sending or receiving RF energy to/from the direction of the open end. |
This resource incorporates several skills to practice. Children are able to practice and reinforce their recall of the Christmas vocabulary words (Santa, Christmas, tree, reindeer, sack, wreath, stockings, fireplace, toys, elves, born, stable, presents, turkey, sleigh, list, snowflakes, bells, carols, children, Jesus, celebrate, birth) by using the words in context.
The activities are differentiated to meet the differing needs within your classroom. Each activity has 3 levels of difficulty, which can be used to cater for the differing needs in the room or as a natural progression of fluency.
The activities include:
• Flashcards to introduce the vocabulary
• Jumbled sentences to order and match to a picture. Children can show their comprehension of the sentence through the choice of colours they use. (4 pages with 4 sentences on each)
• Reordering a simple sentence according to comprehension of a picture and sentence clue. (16 pages)
• Reordering a simple sentence according to comprehension of a picture clue. (16 pages)
• Reordering a sentence independently without a picture clue. Children are forced to rely on the capital letter and full stop. (16 pages)
By completing these activities, the children are demonstrating their ability to recognise familiar sight vocabulary, comprehend what they have read, and sequence a simple sentence using correct punctuation.
If you would like the same resource with different vocabulary, please leave me a message in the feedback, comments section of the resource and I will make one up. |
PHOTOSYNTHESIS (PROCESS OF FOOD PRODUCTION BY PLANTS)
WHAT IS PHOTOSYNTHESIS?
The process that occurs in green plants, whereby solar energy is converted into chemical energy and stored as organic molecules, making use of carbon dioxide, sunlight, and water. Water and oxygen are formed as byproducts.
Photosynthesis can be summarized in the following equation:
6 CO2 + 12 H2O + light energy → C6H12O6 + 6 O2 + 6 H2O
WHY DO PLANTS PHOTOSYNTHESIZE?
Autotrophs (plants) provide nutrients and oxygen for heterotrophs.
Heterotrophs are dependent on autotrophs, because they cannot produce their own food.
WHO OR WHAT CAN PHOTOSYNTHESIZE?
Green plants, algae, cyanobacteria and green protists.
WHAT PART OF THE PLANT IS RESPONSIBLE FOR PHOTOSYNTHESIS?
Photosynthesis occurs in the chloroplasts of plant cells.
Chloroplasts are mainly concentrated in the mesophyll cells of leaves.
Chloroplasts contain chlorophyll, the green pigment that absorbs sunlight.
These pigments are embedded in the thylakoid membrane.
RAW MATERIALS OF PHOTOSYNTHESIS
The raw materials of photosynthesis are carbon dioxide and water.
HOW RAW MATERIALS REACH THE CHLOROPLASTS
Water is absorbed through the root hairs into the xylem of the roots, into the xylem of the stem, through the xylem of the leaves into the mesophyll cells, and finally into the chloroplasts.
Carbon dioxide diffuses from the atmosphere through the stomata, into the intercellular airspaces in the leaves, and finally into the chloroplasts of the mesophyll cells.
The chlorophyll and other pigments in the thylakoid membrane absorb the solar energy to drive photosynthesis.
PHOTOSYNTHESIS CONSISTS OF 2 PHASES
1. LIGHT REACTION PHASE (dependent on light)
2. DARK PHASE / CALVIN CYCLE
LIGHT REACTION PHASE
Takes place in the thylakoids of the chloroplasts.
Chlorophyll absorbs solar energy from the sun. When a chlorophyll pigment absorbs light (a photon of energy), it excites the electrons, which go from the ground state to an excited state; this excited state is unstable, but can be used as potential energy. When unused excited electrons fall back to the ground state, photons and heat are released. The electrons are excited in the photosystems found in the thylakoid membrane.
This potential energy is then used firstly to split water into hydrogen and oxygen:
2 H2O → 2 H2 + O2
Oxygen is released as a byproduct and diffuses through the stomata into the atmosphere.
The hydrogen reduces NADP+ to NADPH.
Some energy is then used to photophosphorylate ADP to generate ATP:
ADP + P → ATP
DARK PHASE / CALVIN CYCLE
Carbon dioxide diffuses through the stomata of the leaf and finally into the stroma of the chloroplast.
The carbon dioxide is accepted by a 5C molecule called ribulose bisphosphate (RuBP), which then forms an unstable 6C compound.
The 6C compound dissociates into 2 x 3C compounds called phosphoglyceric acid (PGA).
PGA is then reduced to phosphoglyceraldehyde (PGAL/G3P) by accepting a phosphate from ATP and a hydrogen electron from NADPH, thus changing ATP back to ADP and NADPH to NADP+.
PGAL is now used for the following reactions:
• Some PGAL is used to make RuBP again, so that the cycle can start over.
• Some PGAL is used to form hexose sugars like glucose and fructose, which combine to form disaccharides and polysaccharides.
• The carbohydrates can then be converted to other biological compounds like proteins or fats by adding mineral salts like nitrates and phosphates.
THE NATURE OF SUNLIGHT
Sunlight is a form of energy = electromagnetic energy (radiation). Electromagnetic energy travels in waves. The distance between crests of electromagnetic waves = the wavelength. Wavelengths range from ≤ 1 nm (gamma rays) to ≥ 1 km (radio waves). The entire range of radiation wavelengths = the electromagnetic spectrum.
The most important part for life is the visible light (380 nm – 750 nm). We can see this light as various colours.
Light consists of particles = photons. Photons have energy: the shorter the wavelength, the greater the energy of the photon. Therefore violet light has more energy than red light. Photosynthesis is driven by the visible light of the electromagnetic spectrum.
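The wavelength–energy relationship can be made quantitative with the Planck relation E = hc/λ. A minimal sketch, using standard physical constants and example wavelengths for violet and red light:

```python
PLANCK_H = 6.626e-34   # Planck's constant, J·s
LIGHT_SPEED = 3.0e8    # speed of light, m/s

def photon_energy_joules(wavelength_nm: float) -> float:
    """E = h * c / wavelength  (shorter wavelength -> more energetic photon)."""
    return PLANCK_H * LIGHT_SPEED / (wavelength_nm * 1e-9)

# Violet (~400 nm) photons carry roughly 1.75x the energy of red (~700 nm) photons.
violet = photon_energy_joules(400)
red = photon_energy_joules(700)
print(f"violet: {violet:.2e} J, red: {red:.2e} J, ratio: {violet / red:.2f}")
```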
MAIN PIGMENTS USED DURING PHOTOSYNTHESIS
Chlorophyll a – absorbs violet, blue and red light. Reflects and transmits green light (that is why plant leaves appear green).
Chlorophyll b – absorbs violet, blue and red light. Reflects and transmits green light (that is why plant leaves appear green).
Carotenoids – play an accessory role in photosynthesis. They are shades of yellow and orange and are able to absorb light in the violet-blue-green range. These pigments become noticeable in the fall when chlorophyll breaks down.
HOW A PHOTOSYSTEM HARVESTS LIGHT
The thylakoid membrane of a chloroplast contains several photosystems. A photosystem consists of a protein complex called a reaction-centre complex surrounded by several light-harvesting complexes. Study the diagram to understand the process of light harvesting.
THE IMPORTANCE OF PHOTOSYNTHESIS: A REVIEW
Energy entering chloroplasts as sunlight gets stored as chemical energy in organic compounds.
Sugar made in the chloroplasts supplies chemical energy and carbon skeletons to synthesize the organic molecules of cells.
Plants store excess sugar as starch in structures such as roots, tubers, seeds, and fruits.
In addition to food production, photosynthesis produces the O2 in our atmosphere.
Beyond Social Promotion and Retention—Five Strategies to Help Students Succeed
This article takes the approach that if we avoid school failure in the first place, there might be less of a reason to consider retention. Specific “strategies” are described, including: intensifying learning, providing professional development to assure skilled teachers, expanding learning options, assessing students in a manner to assist teachers, and intervening in time to arrest poor performance.
Extensive research indicates that neither holding students back a grade nor promoting them unprepared fosters achievement. Studies indicate that retention negatively impacts students' behavior, attitude, and attendance. Social promotion undermines students' futures when they fail to develop critical study and job-related skills (Denton, 2001; U.S. Department of Education, 1999). In contrast, recent research and practice indicate that alternative strategies, which strike at the root causes of poor performance, offer genuine hope for helping all students succeed. These strategies are: intensify learning, provide professional development to assure skilled teachers, expand learning options, assess students in a manner to assist teachers, and intervene in time to arrest poor performance.
High-stakes testing and the accountability movement have catalyzed many states to end the practice of social promotion. Furthermore, opponents of retention point to years of research documenting its ineffectiveness. Because of the ineffectiveness of social promotion and retention, a search is on for better ways to help students improve their school performance. A review of current literature and practice suggests promising alternatives to both practices. These alternatives focus on preventing the failure cycle that results in poor performance so that social promotion and retention can segue into an effective, high-performance pentagon composed of intensified learning, skilled teachers, expanded learning options, assessment that informs teaching, and intervention — early and often.
Joan Forman and Mary Ellen Sanders, project coordinators of Naperville, Il District 203's early intervention program, "Project Leap," see the positive results early intervention can have.
The rate of retention in the U.S. is estimated at about 15 percent each year (National Association of School Psychologists [NASP], 1998). Overall retention rates have increased by 40 percent over the past 20 years, meaning that 30 to 50 percent of children have been retained at least once before the ninth grade (NASP, 1998; Owings & Magliaro, 1998; Shepard & Smith, 1989; Thompson & Cunningham, 2000).
The highest retention rates are found among poor, minority, inner-city youth (NASP, 1998; Owings & Magliaro, 1998). Statistics also indicate that boys are retained more often than are girls (Thompson & Cunningham, 2000). English language learners; minority students; and children who have "late" birth dates, have attention problems, come from single-parent households, or experience frequent school changes are also most likely to be retained (Hartke, 1999; NASP, 1998). Such widespread practice might appear to indicate that grade retention results in increased achievement and is beneficial for most of the retained students. The preponderance of current data from a number of studies, however, indicates just how ineffective the practice of retention actually is.
Several studies have established the relationship between retention and later drop-out rates. Studies in both New York and Chicago showed that retained students were more likely to drop out than those promoted (Roderick, 1995). These results are echoed in other studies as well. The consensus is that retention, regardless of the grade in which it occurs, drastically increases the likelihood that children will drop out of school (Hauser, 1999; Holmes, 1989; NASP, 1998; Thompson & Cunningham, 2000). The effect of retention on dropout rates is no surprise considering that "retention is generally associated with poorer academic achievement when groups of retained children are compared to groups of similar children who are promoted" (NASP, 1998).
Some studies have shown gains in student achievement the first year after retention. Unfortunately the gains are small and diminish within three years (Hauser, 1999; Holmes, 1989; Karweit, 1991; NASP, 1998; Roderick, 1995; Thompson & Cunningham, 2000). Karweit (1991) notes "the consensus of several extensive reviews of grade retention is that there is not a positive effect for grade retention on academic achievement or on student personal adjustment" (p. 4).
How do children respond to retention? At the very least, it generates anxiety. One study of young children found that they "so feared retention they ranked it third in a list of worst anxieties, topped only by blindness and death of a parent" (Hartke, 1999).
The National Association of School Psychologists (1998) notes that retention is linked to increased behavior problems that become more pronounced as children reach adolescence. Other work in this area has found an impact on attendance and attitude as well (Holmes, 1989).
Social promotion research
Social promotion is the practice of advancing students to the next grade even when they have not mastered the material in their current grade (Denton, 2001; U.S. Department of Education, 1999). Research confirms that social promotion — similar to retention — increases drop-out rates, does nothing to increase student achievement, and creates graduates who lack the necessary skills for employment (Denton, 2001; U.S. Department of Education, 1999). "Both being promoted without regard to effort or achievement or retained without extra assistance sends a message to students that little is expected from them, that they have little worth, and they do not warrant the time and effort it would take to help them be successful in school" (U.S. Department of Education, 1999).
Early intervention offers long-term self-esteem benefits, according to Mary Ellen Sanders, project co-coordinator of Naperville, Il District 203's early intervention program, "Project Leap."
Grade retention and social promotion are inadequate responses to low student achievement because they are not preventive. "Social promotion and retention both try to remedy problems after they've occurred, rather than preventing them or nipping them in the bud," says Wheelock (1998). A study by Karweit (1991) concluded, "Neither retention nor social promotion are satisfactory responses to the need to provide appropriate instruction for low-performing students" (p. iii). There are no positive outcomes for students when using either practice. "The results of both policies are unacceptable high dropout rates, especially for poor and minority students, and inadequate knowledge and skills for students," notes the U.S. Department of Education (1999). "Neither practice closes the learning gap for low-achieving students, and neither is an appropriate response to the academic needs of students experiencing difficulty mastering required coursework."
Instead of relegating low-performing students to social promotion or grade retention, the American Federation of Teachers (1997), Darling-Hammond (1998), McCollum, Cortez, Maroney, and Montes (1999), and Wheelock (1998) support the development of alternative approaches so that all students can succeed in school. Such alternative approaches — which provide high-achieving environments as well as support and assistance for students — involve school policies and procedures built on the five interrelated strategies that follow.
Strategy one: Intensify learning
Research indicates what educators know from experience: Making assignments easier is no solution to poor performance. Simpler lessons offer no assurance that students will achieve better test scores. Intensified learning, on the other hand, affords better results. A recent study conducted by the Consortium on Chicago School Research underscores the assertion that students who are given more challenging, critical-thinking, higher-quality, tougher assignments outperform less-challenged students on standardized tests (Newmann, Bryk & Nagaoka, 2001, January).
The Consortium studies examples of urban school improvement and assesses the progress of school reform. One of the Consortium's studies, supported by the Chicago Annenberg Challenge, examined students in 19 Chicago elementary schools who were given intellectually stimulating assignments in mathematics and writing. Over a three-year period, the progress of more than 5,000 students in Grades 3, 6, and 8 was followed. Students who received more challenging, intellectual assignments showed greater than average gains on the Iowa Tests of Basic Skills in reading and mathematics and demonstrated higher performance in reading, mathematics, and writing on the Illinois Goals Assessment Program (Newmann, Bryk & Nagaoka, 2001, January). Students in some very disadvantaged Chicago classrooms were given intellectually challenging assignments, and contrary to some expectations, these children benefited from exposure to such instruction. The study suggests that if teachers, administrators, policymakers, and the public at-large place more emphasis on authentic intellectual work in classrooms, yearly gains on standardized tests in Chicago could surpass national norms.
Intensifying learning helps build high-achieving schools, which in turn are most likely to produce successful, high-achieving students. High-achieving schools are rigorous schools. They develop rigorous standards, a rich curriculum, knowledgeable and skilled teachers, and meaningful learning experiences as essential elements (Wheelock, 1998).
Having a clearly defined set of standards helps teachers concentrate on instruction, makes clear to students and parents grade level expectations, and ensures that students are prepared for the next grade. Most states currently have standards in place for students in grades K-12. Studies of high-achieving schools with disadvantaged student populations revealed that integrating learning standards with demanding coursework and high expectations led to a marked improvement in student performance (U.S. Department of Education, 1999). Integrating standards into the curriculum is the first step for schools that are working to create high-achieving learning environments for their students (Pattison & Berkas, 2000; U.S. Department of Education, 1999).
Students in Chicago classrooms where challenging assignments were the norm showed a one-year learning gain over those in Chicago classrooms where the intellectual quality of assignments was low. Additionally, their test results were higher than the national norms. These children, who received intellectually stimulating assignments, posted learning gains 20 percent greater than the national average. In Chicago classrooms where assignments were less challenging, students gained 25 percent less than the national average in reading and 22 percent less in mathematics (Newmann, Bryk & Nagaoka, 2001, January).
Hiring effective and well-trained teachers is one of the most important measures schools can take to intensify learning for all students. Outside of the home environment, teachers are the number-one resource in helping students succeed. According to the National Commission on Teaching and America's Future (Darling-Hammond, 1997), teacher expertise has a direct correlation to high student achievement. "Students who have highly effective teachers three years in a row score as much as 50 percentile points higher on achievement tests than those who have ineffective teachers for three years in a row," states Darling-Hammond (1998). Effective teachers "know the content they are teaching, engage students in learning, and challenge them to greater accomplishments" (U.S. Department of Education, 1999).
Skilled teachers intensify learning by providing authentic instruction and meaningful assignments while holding high expectations for all students. Such assignments deal with the significant concepts of a discipline, incorporate higher-order thinking skills, are connected to the "real world," and allow substantial time for discussion and idea sharing among students (Peterson, 1995). Furthermore, teachers can employ several learning models to create active learning environments that reflect a shift in the relationships among teachers, students, and knowledge. In these environments, students work together to frame their own questions and investigate them. Active environments require collaboration and communication, and encourage more analysis, synthesis, and evaluation of information than do traditional classrooms (North Central Regional Educational Laboratory, 2000). Active learning environments require students to take responsibility for their own learning and develop strategies for learning (Costello, 1996). Instruction in active environments emphasizes depth of learning rather than breadth of learning (Peterson, 1995).
Teachers and researchers participating in a longitudinal research study conducted by Apple Computer, Inc. found that high levels of student involvement in learning occurred most often in classrooms that encouraged active learning. In the Apple Classrooms of Tomorrow, students were encouraged to frame their own questions and were urged to follow up on them. The students frequently worked in groups, and the atmosphere was a collaborative one — among students as well as between students and teachers (North Central Regional Educational Laboratory, 2000).
Strategy two: Provide professional development to ensure skilled teachers
High-quality professional development is intricately linked to improved teaching and learning. Studies conducted by Ronald Ferguson revealed that "every dollar spent on more highly qualified teachers netted greater increases in student achievement than did less instructionally focused uses of school resources" (Darling-Hammond, 1997, p. 8). In addition, reviews of more than 200 studies make it clear that teacher education is critical and that more appears to be better than less (Darling-Hammond, 1997). "In fields ranging from mathematics and science to early childhood, elementary, vocational, and gifted education, teachers who are fully prepared and certified in both their discipline and in education are more highly rated and are more successful with students than are teachers without preparation, and those with greater training in learning, child development, teaching methods, and curriculum are found to be more effective than those with less" (Darling-Hammond, 1997, p. 10).
Current information gathered from numerous recent studies also indicates that professional development proved more effective when it involved teachers working with colleagues on integrating standards and revising curriculum, working with diverse populations, and changing forms of student assessment (Cook & Fine, 1997; Darling-Hammond, 1997). Darling-Hammond (1997) recommends organizing teacher professional development around standards for students and teachers, creating and funding mentoring programs for beginning teachers, allocating state and local spending to support high-quality professional development, and embedding professional development in the daily work of teachers through joint planning, study groups, peer coaching, and research.
Teaching is a complex activity that requires substantial time to implement, assess, and refine instructional techniques. Finding time for such activities as study groups, action research, coaching, and collaboration must be a priority for all schools (Cook & Fine, 1997; Darling-Hammond, 1997).
Professional development must become a part of teachers' daily lives. By ensuring that their teachers are exposed to professional development opportunities, schools can realize new learning for all students and ensure teaching that is responsive to a wide range of student needs.
Strategy three: Expand learning options
With the diverse population of students in schools today, educators must strive to create a system that reflects and celebrates diversity and allows children to reach high standards. Educators can create new paths to learning standards by providing more learning options for students. Not all children learn in the same way, or in the same time. By offering more routes to the standards, teachers enable more children to reach them.
One way schools can create expanded learning paths is through flexible scheduling. By reorganizing the school day or school year, educators can more effectively use time to support all learners and participate in ongoing professional development. Block scheduling offers flexibility for schools to meet their unique needs, and many models exist. There are a number of advantages in using block scheduling: Students can be exposed to a variety of instruction techniques; they may experience improved grades, improved test scores, and improved attendance; students given longer lunch periods can get extra help with their schoolwork; and teachers can have longer prep times, which increases the opportunity for teamwork and integrated professional development activities.
Reorganizing the school year is a strategy gaining popularity across the country. There are several models for year-round schooling, all of which involve modifying the school calendar so that learning occurs in more consistent chunks throughout the year. The basic premise behind year-round calendars is to shorten the lengthy summer break and schedule more frequent breaks throughout the year. The main advantages of this tactic are reducing the amount of summer learning loss, which requires substantial time and review each fall to recover (Ballinger, 1995), and increasing student achievement (Ballinger, 1995; Center for Applied Research, 1999; Staff Development for Educators, 2000). In addition, year-round schooling provides support to diverse populations of students and offers the following benefits: improved student and teacher attendance, fewer discipline problems, reduced teacher stress, increased student and teacher motivation, and increased opportunities for enrichment and remediation during breaks (Ballinger, 1995; Center for Applied Research, 1999; Staff Development for Educators, 2000).
Teachers can expand learning options by reorganizing or differentiating instruction. "At its most basic level, differentiation consists of the efforts of teachers to respond to variance among learners in the classroom. Whenever a teacher reaches out to an individual or small group to vary his or her teaching in order to create the best learning experience possible, that teacher is differentiating instruction" (Tomlinson, 2000, p. 2). Teachers can differentiate at least four classroom elements: content, process, products, and the learning environment. How and what the teacher chooses to differentiate is based on student readiness and interest (Tomlinson, 1999; 2000). Several research-based practices support differentiating instruction: flexible grouping, cooperative learning, multiple intelligences, and brain-based learning. The success of differentiation rests on several key principles:
- Differentiation must occur with high-quality curriculum and instruction.
- Assessment and instruction are inseparable.
- All students participate in respectful work.
- The teacher understands, appreciates, and builds upon student differences (Tomlinson, 1999).
Data collection and analysis empowers effective program management, says Mary Ellen Sanders, project co-coordinator of Naperville, Il District 203's early intervention program, "Project Leap."
To expand learning options, two methods of reorganizing class groupings are effective: multiage grouping (in which children of different ages are grouped in a single classroom and remain with the same teacher for more than one year) and looping (in which a teacher stays with a class of children for two or more grade levels). When taught by skilled teachers who are trained to work with mixed age and ability groupings, multiage classrooms can accommodate variations in learning style, performance, and pace of learning, and they can foster sustained, caring relationships (Darling-Hammond, 1998; NASP, 1998). "Studies show that children in multiage classrooms show academic progress over time that equals or exceeds that of their peers in same-age classrooms" (Darling-Hammond, 1998, p. 20). Teachers working in multiage classrooms can maximize learning time, because they know their students' learning and social needs at the beginning of each year. No time is wasted on long review periods.
Looping allows teachers and children to stay together for longer periods of time and reaps many of the same benefits seen in multiage grouping. In addition, studies indicate a positive impact on achievement. One looping study was conducted in East Cleveland, Ohio, in a school district with 99.4 percent African American students, most from single-family homes, and one-half living at or below the poverty line. The researchers compared achievement scores in reading and math between children in looping classes and those in traditional classes at the end of the first looping cycle. They found significant differences between the two groups — in some cases as much as a 40-point difference in favor of the looping students (Reynolds, Barnhart, & Martin, 1999).
An option more practical for upper levels of public education is to organize students and teachers into teams that stay together for a few years, as is done at Central East Middle School in Philadelphia (Wheelock, 1998). This approach allows teachers to become more familiar with their students' strengths, learning styles, and problem areas. It also gives teachers enough time to help their students meet learning goals.
Smaller class size is another reorganizational learning strategy that lets teachers work more efficiently with students who need extra assistance. By having smaller classes, teachers are better able to get to know their students, to share information, and to develop strategies for helping them succeed. Research has shown that classes with fewer than 20 children can improve students' academic achievements and are particularly beneficial for disadvantaged students (U.S. Department of Education, 1999). Project STAR, an extensive study conducted in Tennessee, provides valuable insight for educators about the effects of class size. Project STAR demonstrated that students in smaller classes outperform students in larger classes on both standardized and curriculum-based measures. These results held true for students regardless of race, socioeconomic status, or school type (urban, rural, big, small, etc). Follow-up research indicates the results continue through the eighth grade (U.S. Department of Education, 1999). Before creating smaller classes, educators should consider the following research-based guidelines: Smaller class size works best in the primary grades and with disadvantaged and minority students; professional development is key to the success of smaller classes; and smaller classes must be accompanied by other prevention and intervention strategies to end social promotion (American Federation of Teachers, 1997; U.S. Department of Education, 1999).
Strategy four: Assess to inform teachers
The role of assessment in instruction must not be overlooked. The primary aim of assessment is to foster worthwhile learning for all students (Porter, 1995) by guiding classroom instruction. Assessments that provide detailed information about students' academic progress, including what they know, what they can do, how they learn, and where they are having problems, can ensure that children's instructional needs are met. McCollum et al., (1999) recommend the use of performance assessments and informal assessment tools (such as rubrics, checklists, and anecdotal records) to guide instruction and better inform teaching. Such assessments provide information about the way children think, what they understand, and the strategies they use in their learning (Darling-Hammond, 1998). Many educators feel that performance-based assessments best reflect new educational standards and methods of instruction (Porter, 1995) and are promising for ensuring equity with assessment. To be truly effective, alternative, performance-based assessments should be continuous throughout the school year. Student assessments must be ongoing and feed into daily decisions that teachers make regarding appropriate instruction and student assistance (American Federation of Teachers, 1997).
Strategy five: Intervene early and often
"If students are to be held more accountable for their academic performance and held to high educational standards, schools must provide adequate opportunities for students to meet expectations on time" (U.S. Department of Education, 1999). Ongoing and diagnostic assessment help schools develop intervention strategies that stop the cycle of failure and that accelerate learning.
The keys to such intervention strategies are identifying children early on who need extra help and providing a number of ways for students to receive support. For example, early reading intervention programs can provide intensive support at the onset of a child's school career. Such programs are of particular importance since most children in the early grades are retained based on their reading achievement. There is growing evidence that such programs can prevent problems from occurring in later grades (Illinois State Board of Education, 2000; Pikulski, 1998). Successful early intervention programs share the following hallmarks:
- Offered early.
- Tied to the work students are doing as a normal part of the school routine.
- Offered on a regular and frequent basis.
- Supplement classroom instruction — not just repeat it.
- Multifaceted and based on individual needs.
- Provided by someone who understands the content and the students' problems.
- Paced so as to accelerate learning.
- Set up with strong quality controls and monitoring to ensure that the extra help and time are working (American Federation of Teachers, 1997; Darling-Hammond, 1998; Denton, 2001; Illinois State Board of Education, 2000; Pikulski, 1998; Wheelock, 1998).
Tangible results of early intervention are evident in student demeanor and behavior according to Joan Forman, project co-coordinator of Naperville, Il District 203's early intervention program, "Project Leap."
In addition to early intervention, schools need to give children different ways to achieve success. Offering an array of intensive interventions throughout the grades ensures that support is available to children who were not identified early, who recently moved into the system, or who need extended opportunities to succeed. "According to research, one of the most effective, standards-aligned intervention methods is to increase the instructional time for struggling students, especially intensive instruction delivered by a trained adult" (American Federation of Teachers, 1997). Extending learning time for students can happen in several ways. Schools can use flexible and creative scheduling during school hours or extra time outside of the regular school day (Denton, 2001), such as before or after school programs, Saturday school, or summer school.
Regardless of how schools extend time, numerous options exist for using it effectively:
- Offering classes on study skills and corresponding programs to help parents encourage study skills in the home.
- Providing one-on-one tutoring with a teacher or cross-age tutoring with an older student.
- Adding an extra period in the problem subject area (double-dosing).
- Providing consultation by school teams.
- Offering individualized education plans.
- Giving special assistance and targeted services for students with learning disabilities and other special needs.
- Improving service delivery models for students and families who would benefit from school-linked integrated services.
(For more information on educating children with special needs or children who are at risk of school failure, refer to the Critical Issues Meeting the Diverse Needs of Young Children and Providing Effective Schooling for Students at Risk.)
Educators who "raise the bar" with mandatory educational standards must take care to provide nurturing educational environments that support all learners. Neither social promotion nor grade retention is an effective remedy for low student achievement. Instead, schools must ensure that all students have opportunities for learning as well as support and assistance. Through the use of school structures and policies that support intensive learning, professional development for teachers, expanded learning options, assessments that inform teaching, and intervention strategies, schools can play a critical role in breaking the cycle of failure while helping children reach their full academic potential. This not only helps students enjoy success during their school years but also instills confidence in their personal lives.
- The teaching staff is given the chance to participate in professional development opportunities.
- Challenging coursework is offered to develop high-achieving students.
- Assessments identify areas where learning problems exist.
- Learning is supported by expanded learning programs, such as lower class size at the primary level, structures that group children and teachers together for longer periods of time, and year-round schools.
- Students have multiple opportunities to learn through extended learning time, differentiated instruction, early intervention, and ongoing assessment.
- Early intervention programs stop the cycle of failure and accelerate learning.
Administrators can take the following steps to produce high-achieving schools:
- Create professional development plans to ensure that teachers receive best practices training.
- Provide time for teachers to work together and coach each other in applying effective instructional techniques.
- Group teachers and children for longer periods through looping, multiage grouping, and team grouping.
- Hire reading specialists to address the needs of struggling readers — especially in the early grades.
- Hire highly trained teachers to provide intervention for at-risk populations.
- Provide high-quality summer school programs with follow-up intervention during the school year.
Teachers can do the following to bring about successful learning environments:
- Use creative and flexible scheduling to extend learning time for students who need it.
- Create classrooms that accommodate different learning styles.
- Use ongoing, performance-based assessment to guide daily teaching decisions.
- Create intervention programs that accelerate learning and extend learning time for students.
School districts encounter many challenges in their efforts to support all learners. Two of the biggest challenges identified by educators are: finding time for professional development and building support for higher standards (Eisner, 2000). In addition, ensuring that there are adequate space and resources for extended time programs and intensive interventions can be overwhelming. As it is, many schools find themselves hard pressed to provide instructional space for regular programs.
Schools are frequently forced to use hallways, storage areas, and other areas that are not suitable learning environments. Space is perhaps one of the biggest obstacles in reducing class size at the lower elementary levels as well. Cutting class sizes means that students removed from one classroom have to go into another. But buildings that are already full simply don't have the space to create additional classrooms.
School districts in this country face the growing challenge of hiring highly trained teachers. The number of graduates from teacher training programs has been steadily declining over the last several years. As a result, schools in many parts of the country — particularly in California — have no choice but to hire under-prepared teachers with emergency credentials (Eisner, 2000). This is particularly important when considering the need for specially trained teachers to work in intervention programs.
Space and teacher availability can have a far-reaching impact on a school's overall ability to employ prevention and intervention strategies. Schools must also not become too reliant on a single prevention or intervention strategy. Although it is wise to start small and build on success, comprehensive development of a range of strategies should be the overriding goal of a district.
Different points of view
"Historically educators have viewed retention as a means of reducing skill variance in the classroom in an attempt to better meet student needs" (Owings & Magliaro, 1998). Many educators look at retention as an opportunity for students to mature, to be successful with material they've struggled with, and to be better prepared to move on through the school system.
Social promotion evolved as a response to the "ills" of retention and traces its roots to the 1930s. It is still in practice today. Well-meaning educators concerned with protecting students from the harmful effects of retention and school districts overwhelmed by under-achieving students regard social promotion as necessary and unavoidable (Di Maria, 1999).
In light of research that exists, however, educators can no longer afford to hang on to these views. "One indicator of a profession is that a body of research guides its practice. A body of research exists on the subject of retention (and social promotion) and it should guide our practice" (Owings & Magliaro, 1998).
For alternative school structures
Congress extended year-round school in Milwaukee, Wisconsin uses a trimester system with longer breaks during the year and adds 16 days to the school calendar. Congress has a well-developed parent involvement plan, diverse opportunities for professional development, and an after-school program that offers academic and recreational opportunities until 6:00 p.m. daily. Congress has a voluntary uniform policy and a School to Career focus.
Lincoln School in Mundelein, Illinois is a K-5 multiage school that uses technology, multiple intelligences, problem-based learning, and a year-round calendar.
Lincoln Prairie School in Hoffman Estates, Illinois is a pre-K-8 school. It follows a traditional school calendar, and students are taught in multiage groups taking part in authentic curriculum studies. Personalized learning plans foster students taking responsibility for their own learning. Teachers, as facilitators, engage students in critical and analytical thinking, encourage collaborative work, and provide opportunities for students to demonstrate their learning in a variety of projects and products. The school facility supports interactive learning.
Gordon Middle School in Coatesville, Pennsylvania offers the Sparks After-School program, which gives students extra academic support. Most of the students in the program come from either dual-income or single-parent families who struggle to balance work, school, and home lives. Students are given the opportunity to obtain help with their schoolwork in a safe and structured environment. Funded by a Federal grant administered by an independent agency, the after-school program is staffed by teachers, along with volunteers. The Sparks program has been instrumental in building students' interest in school. When it started in 1999, thirty academically or socially at-risk students participated. The enrollment in the program has more than doubled, and students once at risk are now doing well academically and joining extracurricular after-school activities in unprecedented numbers.
Click the "References" link above to hide these references.
American Federation of Teachers. (1997). Passing on failure: District promotion policies and practices. Washington, DC: Author.
Ballinger, C. (1995, November). Prisoners no more. Educational Leadership, 53(3), 28-31.
Center for Applied Research and Educational Improvement, University of Minnesota. (1999). Alternative calendars: Extended learning and year-round programs. Available online: http://education.umn.edu/CAREI/Reports/AltCalendars.pdf
Center for Policy Research in Education. (1990, January). Repeating grades in school: Current practice and research evidence. Available online: http://www.cpre.org/Publications/rb04.pdf
Cook, C. & Fine, C. (1997). Finding time for professional development. Pathways to School Improvement. Available online: http://www.ncrel.org/sdrs/areas/issues/educatrs/profdevl/pd300.htm
Costello, M. (1996). Providing effective schooling for students at risk. Pathways to School Improvement. Available online: http://www.ncrel.org/sdrs/areas/issues/students/atrisk/at600.htm
Darling-Hammond, L. (1997, November). Doing what matters most: Investing in quality teaching. New York: National Commission on Teaching & America's Future. Available online: http://documents.nctaf.achieve3000.com/WhatMattersMost.pdf
Darling-Hammond, L. (1998, August). Alternatives to grade retention. The School Administrator, 55(7), 18-21. Available online: http://www.aasa.org/publications/sa/1998_08/Darling-Hammond.htm
Denton, D. (2001, January). Finding alternatives to failure: Can states end social promotion and reduce retention rates? Available online: http://www.sreb.org/programs/srr/pubs/alternatives/AlternativesToFailure...
Di Maria, M. (1999). Issues of social promotion. New York: Educational Resources Information Center. (ERIC Document Reproduction Service No. ED 437 208)
Dounay, J. (1999, August). State student promotion/retention policies. Available online: http://www.ecs.org/clearinghouse/18/27/1827.pdf
Eisner, C. (Ed.). (2000, October). Ending social promotion: Early lessons learned. Washington, DC: U.S. Department of Education & Council of the Great City Schools.
Hartke, K. (1999, January/February). The misuse of tests for retention. Thrust For Educational Leadership, 28(3), 22-24.
Harrington-Lueker, D. (2000, March). Summer learners. American School Board Journal, 187(3), 20-25.
Hauser, R. (1999). Should we end social promotion? Truth and consequences (CDE Working Paper No. 99-06). Madison, WI: Center for Demography and Ecology, University of Wisconsin-Madison.
Holmes, C. T. (1989). Grade-level retention effects: A meta-analysis of research studies. In L.A. Shepard & M.L. Smith (Eds.), Flunking grades: Research and policies on retention (pp. 16-33). Philadelphia: Falmer Press.
Illinois State Board of Education. (2000). Early reading intervention: A primer for school administrators and education policy makers [Pamphlet]. Springfield, IL: Author. Available online: http://www.illinoisreads.net/htmls/kit_resources/early_intervention.pdf
Karweit, N. L. (1991, May). Repeating a grade: Time to grow or denial of opportunity? (Report No. 16). Baltimore: Center for Research on Effective Schooling for Disadvantaged Students.
Kelly, K. (1999, January/February). Retention vs. social promotion: Schools search for alternatives. Available online: http://www.edletter.org/past/issues/1999-jf/retention.shtml
Behavioural and neuroscientific methods are used to get a better understanding of how our brain influences the way we think, feel, and act. There are many different methods which help us to analyze the brain and give us an overview of the relationship between brain and behaviour. Well-known techniques are EEG (electroencephalography), which records the brain's electrical activity, and fMRI (functional magnetic resonance imaging), which tells us more about brain functions. Other methods, such as the lesion method, are not as well known but still very influential in today's neuroscientific research.
These methods fall into several categories: techniques for assessing brain anatomy, techniques for assessing physiological function, techniques for modulating brain activity, techniques for analyzing behaviour, and techniques for modelling brain-behaviour relationships. In the lesion method, patients with brain damage are examined to determine which brain structures were damaged and to what extent this influences the patient's behaviour.
The lesion method is based on the idea of finding a correlation between a specific brain area and an occurring behaviour. From experience and research observations it can be concluded that the loss of a brain part causes behavioural changes or interferes with the performance of a specific task. For example, a patient with a lesion in the parietal-temporal-occipital association area may show agraphia, meaning that he is unable to write although he has no deficits in motor skills. Generally speaking, researchers deduce that if structure X is damaged and changes in behaviour Y occur, then X has a relation to Y.
In humans, lesions are often caused by tumours or strokes. With the imaging methods described below it is possible to determine which area was damaged, for example by a stroke, and therefore to deduce a relation between the loss of the ability to speak and that specific damaged brain area. Lesions produced deliberately in laboratory animals offer several advantages.
First, the animals all grow up in the same environment and are the same age when the surgery is performed. Second, a before-and-after comparison of task performance can be made for each animal. Third, control groups can be included that either did not undergo surgery or had surgery in another brain area. These benefits increase the accuracy with which a hypothesis can be tested; this is more difficult in human research, where before-and-after comparisons and control experiments are usually not available.
To strengthen the evidence for a hypothesised relationship between a brain area and task performance, a method called double dissociation is used. The goal of this method is to show that two dissociations are independent: if two patients each have a brain lesion and show complementary patterns of impairment, the scientists' aim is to prove that the two tasks are realised in two different brain areas. Lesions in Broca's and Wernicke's areas can serve as an example. Broca's area is involved in language processing and speech production. Patients with a lesion in this area have a condition called Broca's aphasia, or non-fluent aphasia: they are no longer able to speak fluently, and a sentence produced by them could be, 'I ... er ... wanted ... ah ... well ... I ... wanted to ... er ... go surfing ... and ... er ... well ...'. In contrast, Wernicke's area is responsible for analysing spoken language. A patient with a lesion in this area has so-called Wernicke's aphasia: he is able to hear language but is no longer able to understand it and therefore cannot produce meaningful sentences. He talks 'word salad', for instance: 'I then did this chingo for some hours after my dazi went through meek and been sharko.' A difficulty with Wernicke's aphasia patients is that they are often not aware of their inability to speak correctly, because they cannot understand what they are saying and think they are holding a normal conversation.
Certainly one of the most famous "lesion" cases was that of Phineas Gage. On 13 September 1848 Gage, a railroad construction foreman, was using an iron rod to tamp an explosive charge into a body of rock when premature explosion of the charge blew the rod through his left jaw and out the top of his head. Miraculously, Gage survived, but reportedly underwent a dramatic personality change as a result of the destruction of one or both of his frontal lobes. The uniqueness of Gage's case (and the ethical impossibility of repeating the injury in other patients) makes it difficult to draw generalizations from it, but it does illustrate the core idea behind the lesion method. Further problems stem from the persistent distortions in published accounts of Gage; see the Wikipedia article on Phineas Gage.
Techniques for Assessing Brain Anatomy / Physiological Function
CAT scanning was invented in 1972 by the British engineer Godfrey N. Hounsfield and the South African (later American) physicist Allan Cormack.
CAT (computed axial tomography) is an x-ray procedure which combines many x-ray images with the aid of a computer to generate cross-sectional views and, when needed, 3D images of the internal organs and structures of the human body. A large donut-shaped x-ray machine takes x-ray images at many different angles around the body. These images are processed by a computer to produce cross-sectional pictures of the body. In each of these pictures the body is seen as an x-ray 'slice', which is recorded on film. This recorded image is called a tomogram.
CAT scans are performed to analyze, for example, the head, where traumatic injuries (such as blood clots or skull fractures), tumors, and infections can be identified. In the spine the bony structure of the vertebrae can be accurately defined, as can the anatomy of the spinal cord. CAT scans are also extremely helpful in defining body organ anatomy, including visualizing the liver, gallbladder, pancreas, spleen, aorta, kidneys, uterus, and ovaries. The amount of radiation a person receives during a CAT scan is minimal; in men and non-pregnant women it has not been shown to produce any adverse effects. However, a CAT scan does carry some risks. If the patient is pregnant, another type of exam may be recommended to reduce the possible risk of exposing the fetus to radiation. In cases of asthma or allergies it is also recommended to avoid this type of scanning: because the CAT scan requires a contrast medium, there is a slight risk of an allergic reaction to it. Certain medical conditions (diabetes, asthma, heart disease, kidney problems or thyroid conditions) also increase the risk of a reaction to the contrast medium.
Although CAT scanning was a breakthrough, in many cases it has been superseded by magnetic resonance imaging (MRI), a method of looking inside the body without using x-rays, harmful dyes or surgery. Instead, radio waves and a strong magnetic field are used to provide remarkably clear and detailed pictures of internal organs and tissues.
History and Development of MRI
MRI is based on a physics phenomenon called nuclear magnetic resonance (NMR), which was demonstrated in the 1940s by Felix Bloch (working at Stanford University) and Edward Purcell (at Harvard University). In this resonance, a magnetic field and radio waves cause atoms to give off tiny radio signals. In 1970, Raymond Damadian, a medical doctor and research scientist, discovered the basis for using magnetic resonance imaging as a tool for medical diagnosis. Four years later a patent was granted, the world's first issued in the field of MRI. In 1977, Dr. Damadian completed the construction of the first whole-body MRI scanner, which he called the 'Indomitable'. The medical use of magnetic resonance imaging has developed rapidly. The first MRI equipment for clinical use became available at the beginning of the 1980s; by 2002, approximately 22,000 MRI scanners were in use worldwide, and more than 60 million MRI examinations were performed.
Common Uses of the MRI Procedure
Because of its detailed and clear pictures, MRI is widely used to diagnose sports-related injuries, especially those affecting the knee, elbow, shoulder, hip and wrist. Furthermore, MRI of the heart, aorta and blood vessels is a fast, non-invasive tool for diagnosing artery disease and heart problems. The doctors can even examine the size of the heart-chambers and determine the extent of damage caused by a heart disease or a heart attack. Organs like lungs, liver or spleen can also be examined in high detail with MRI. Because no radiation exposure is involved, MRI is often the preferred diagnostic tool for examination of the male and female reproductive systems, pelvis and hips and the bladder.
An undetected metal implant may be affected by the strong magnetic field. MRI is generally avoided in the first 12 weeks of pregnancy. Scientists usually use other methods of imaging, such as ultrasound, on pregnant women unless there is a strong medical reason to use MRI.
There have been further developments of MRI. DT-MRI (diffusion tensor magnetic resonance imaging) enables the measurement of the restricted diffusion of water in tissue and gives a three-dimensional image of it. History: the principle of using a magnetic field to measure diffusion was described as early as 1965 by the chemists Edward O. Stejskal and John E. Tanner. After the development of MRI, Michael Moseley introduced the principle into MR imaging in 1984, and further fundamental work was done by Denis Le Bihan in 1985. In 1994 the engineer Peter J. Basser published optimized mathematical models of an older diffusion-tensor model. This model is commonly used today and supported by all new MRI devices.
The DT-MRI technique takes advantage of the fact that the mobility of water molecules in brain tissue is restricted by obstacles such as cell membranes. In nerve fibers, mobility is only possible along the axons, so measuring the diffusion reveals the course of the main nerve fibers. The data of a whole field of diffusion tensors are too much to process in a single image, so there are different techniques for visualizing different aspects of the data: cross-section images, tractography (reconstruction of the main nerve fibers), and tensor glyphs (complete illustration of the diffusion-tensor information).
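As a small worked example of how a single diffusion tensor can be summarised, the Python sketch below computes the fractional anisotropy (FA) of a tensor from its eigenvalues. The tensor values are invented for illustration and are not taken from any real scan.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Compute FA from a 3x3 symmetric diffusion tensor."""
    eigvals = np.linalg.eigvalsh(tensor)           # the three principal diffusivities
    mean_d = eigvals.mean()
    num = np.sqrt(((eigvals - mean_d) ** 2).sum())
    den = np.sqrt((eigvals ** 2).sum())
    return np.sqrt(1.5) * num / den                # 0 = isotropic, 1 = fully anisotropic

# Hypothetical tensor for a voxel inside a nerve-fiber bundle:
# diffusion is much faster along the fiber than across it.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])              # mm^2/s
print(round(fractional_anisotropy(D), 2))          # high FA, as expected for white matter
```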
The pattern of diffusion changes in patients with specific diseases of the central nervous system in a characteristic way, so such diseases can be detected with the diffusion-tensor technique. Diagnosis of apoplectic strokes and research into diseases involving changes of the white matter, such as Alzheimer's disease or multiple sclerosis, are the main applications. Disadvantages of DT-MRI are that it is far more time consuming than ordinary MRI and produces large amounts of data, which first have to be visualized by the different methods before they can be interpreted.
Functional magnetic resonance imaging (fMRI) is based on nuclear magnetic resonance (NMR). The method works as follows: all atomic nuclei with an odd number of protons have a nuclear spin. A strong magnetic field is applied around the tested object, which aligns all spins parallel or antiparallel to it. There is a resonance to an oscillating magnetic field at a specific frequency, which can be computed depending on the atom type (the nuclei's usual spin is disturbed, which induces a voltage s(t); afterwards they return to the equilibrium state). At this level different tissues can be identified, but there is no information about their location. Consequently the strength of the magnetic field is varied gradually, so that there is a correspondence between frequency and location, and with the help of Fourier analysis one-dimensional location information can be obtained. Combining several such measurements makes it possible to build up a 3D image.
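The correspondence between frequency and location can be illustrated with a minimal one-dimensional sketch in Python. All numbers here (gradient strength, object shape, sampling) are made up purely to show the idea that a Fourier transform of the recorded signal recovers a spatial profile.

```python
import numpy as np

# Toy 1D frequency encoding: a field gradient makes the resonance frequency
# proportional to position, so the spectrum of the summed signal is a 1D image.
n = 256
x = np.linspace(-0.5, 0.5, n)                      # positions along one axis (arbitrary units)
density = np.where(np.abs(x) < 0.2, 1.0, 0.0)      # a simple "object": spins only in the middle

gamma_g = 2 * np.pi * 200.0                        # gyromagnetic ratio times gradient (invented)
t = np.linspace(0, 1, n, endpoint=False)           # sampling times
signal = density @ np.exp(1j * gamma_g * np.outer(x, t))   # sum of position-dependent tones

profile = np.abs(np.fft.fftshift(np.fft.fft(signal)))      # Fourier analysis -> spatial profile
width_in_bins = (profile > profile.max() / 2).sum()
print(width_in_bins)    # roughly matches the width of the simulated object in frequency bins
```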
The central idea of fMRI is to look at areas with increased blood flow. Deoxygenated haemoglobin disturbs the magnetic signal, so areas with an increased blood-oxygen-level-dependent (BOLD) signal can be identified; higher BOLD signal intensities arise from decreases in the concentration of deoxygenated haemoglobin. An fMRI experiment usually lasts 1-2 hours. The subject lies in the magnet, a particular form of stimulation is set up, and MRI images of the subject's brain are taken. In the first step a high-resolution single scan is taken. This is used later as a background for highlighting the brain areas which were activated by the stimulus. In the next step a series of low-resolution scans are taken over time, for example 150 scans, one every 5 seconds. For some of these scans the stimulus will be presented, and for some of the scans it will be absent. The low-resolution brain images in the two cases can be compared to see which parts of the brain were activated by the stimulus. The rest of the analysis is done using a series of tools which correct distortions in the images, remove the effect of the subject moving their head during the experiment, and compare the low-resolution images taken when the stimulus was off with those taken when it was on. The final statistical image shows up bright in those parts of the brain which were activated by this experiment. These activated areas are then shown as coloured blobs on top of the original high-resolution scan. This image can also be rendered in 3D.
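A minimal sketch of that on/off comparison, assuming invented data and a plain voxel-wise t-test (real fMRI pipelines add motion correction, smoothing and multiple-comparison correction), might look like this in Python:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical series of 150 low-resolution scans, each 8x8x8 voxels.
scans = rng.normal(100.0, 5.0, size=(150, 8, 8, 8))
stimulus_on = np.arange(150) % 2 == 0              # toy design: alternating on/off scans
scans[stimulus_on, 4, 4, 4] += 6.0                 # one "active" voxel responds to the stimulus

# Compare stimulus-on and stimulus-off scans voxel by voxel.
t_map, p_map = stats.ttest_ind(scans[stimulus_on], scans[~stimulus_on], axis=0)
active = p_map < 0.001                             # crude threshold for the statistical image
print(np.argwhere(active))                         # should single out the voxel at (4, 4, 4)
```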
fMRI has moderately good spatial resolution but poor temporal resolution, since one fMRI frame is about 2 seconds long. Moreover, the temporal response of the blood supply, which is the basis of fMRI, is slow relative to the electrical signals that define neuronal communication. Therefore, some research groups work around this issue by combining fMRI with data-collection techniques such as electroencephalography (EEG) or magnetoencephalography (MEG), which have much higher temporal resolution but rather poorer spatial resolution.
Positron emission tomography, also called PET imaging or a PET scan, is a diagnostic examination that involves the acquisition of physiologic images based on the detection of radiation from the emission of positrons. It is currently one of the most effective ways to check for cancer recurrences. Positrons are tiny particles emitted from a radioactive substance administered to the patient. This radiopharmaceutical is injected into the patient and its emissions are measured by a PET scanner. A PET scanner consists of an array of detectors that surround the patient. Using the gamma-ray signals given off by the injected radionuclide, PET measures the amount of metabolic activity at a site in the body, and a computer reassembles the signals into images. PET's ability to measure metabolism is very useful in diagnosing Alzheimer's disease, Parkinson's disease, epilepsy and other neurological conditions, because it can precisely illustrate areas where brain activity differs from the norm. It is also one of the most accurate methods available to localize areas of the brain causing epileptic seizures and to determine whether surgery is a treatment option. PET is often used in conjunction with an MRI or CT scan through "fusion" to give a full three-dimensional view of an organ.
Electromagnetic Recording Methods
The methods mentioned up to now examine the metabolic activity of the brain. But in other cases one wants to measure the electrical activity of the brain, or the magnetic fields produced by that electrical activity. The methods discussed so far do a great job of identifying where activity is occurring in the brain; a disadvantage is that they do not measure brain activity on a millisecond-by-millisecond basis. Such measurements can be made with electromagnetic recording methods, for example single-cell recording or electroencephalography (EEG). These methods track brain activity very rapidly and over longer periods of time, so they provide very good temporal resolution.
When using the single-cell method, an electrode is placed into a cell of the brain on which we want to focus our attention. It is then possible for the experimenter to record the electrical output of the cell that is contacted by the exposed electrode tip. This is useful for studying the underlying ion currents which are responsible for the cell's resting potential. The researchers' goal is then to determine, for example, whether the cell responds to sensory information from only specific details of the world or from many stimuli. In this way one can determine whether the cell is sensitive to input in only one sensory modality or is multimodal in sensitivity. One can also find out which properties of a stimulus make cells in those regions fire. Furthermore, one can find out whether the animal's attention towards a certain stimulus influences the cell's response.
Single-cell studies are not very helpful for studying the human brain, since the method is too invasive to be used routinely; hence it is most often used in animals. There are just a few cases in which single-cell recording is also applied in humans. People with epilepsy sometimes have the epileptic tissue surgically removed. A week before surgery, electrodes are implanted into the brain, or are placed on the surface of the brain during the surgery, to better isolate the source of seizure activity. Using this method decreases the possibility that useful tissue will be removed. Because of the limitations of this method in humans, there are other methods which measure electrical activity; those are discussed next.
One of the most famous techniques to study brain activity is probably electroencephalography (EEG). Most people might know it as a technique which is used clinically to detect aberrant activity such as that associated with epilepsy and other disorders.
In an experimental setting this technique is used to show the brain activity in certain psychological states, such as alertness or drowsiness. To measure the brain activity, metal electrodes are placed on the scalp. Each electrode, also known as a lead, makes a recording of its own. Next, a reference is needed which provides a baseline, to compare this value with each of the recording electrodes. This electrode must not cover muscle, because muscle contractions are driven by electrical signals which would contaminate the recording. Usually it is placed at the mastoid bone, which is located behind the ear.
During EEG, electrodes are placed according to a standard scheme. Over the right hemisphere electrodes are labelled with even numbers; odd numbers are used for those on the left hemisphere, and those on the midline are labelled with a z. The capital letter stands for the location of the electrode (C = central, F = frontal, Fp = frontal pole, O = occipital, P = parietal and T = temporal).
After placing each electrode at the right position, the electrical potential can be measured. This electrical potential has a particular voltage and a particular frequency. Depending on a person's state, the frequency and form of the EEG signal differ. If a person is awake, beta activity can be recognized, which means that the frequency is relatively fast. Just before someone falls asleep, alpha activity can be observed, which has a slower frequency. The slowest frequencies are called delta activity, which occurs during sleep. Patients who suffer from epilepsy show an increase in the amplitude of firing that can be observed on the EEG record. In addition, EEG can be used to help answer experimental questions. In the case of emotion, for example, one can see that in depression there is greater alpha suppression over the right frontal areas than over the left ones. One can conclude from this that depression is accompanied by greater activation of right frontal regions than of left frontal regions.
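As an illustration of how such frequency bands can be quantified, here is a minimal Python sketch that estimates delta, alpha and beta power in a simulated EEG trace. The signal, sampling rate and band limits are assumptions chosen only to make the example self-contained.

```python
import numpy as np

fs = 250                                            # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                        # 10 seconds of data
rng = np.random.default_rng(1)
# Simulated "relaxed, eyes closed" trace: strong 10 Hz alpha rhythm plus noise.
eeg = 20 * np.sin(2 * np.pi * 10 * t) + 5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2            # power spectrum
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

def band_power(low, high):
    return spectrum[(freqs >= low) & (freqs < high)].sum()

bands = {"delta (0.5-4 Hz)": band_power(0.5, 4),
         "alpha (8-13 Hz)": band_power(8, 13),
         "beta (13-30 Hz)": band_power(13, 30)}
print(max(bands, key=bands.get))                    # alpha dominates in this simulated trace
```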
A disadvantage of EEG is that the electrical conductivity, and therefore the measured electrical potentials, vary widely from person to person and also over time. This is because the different tissues (brain matter, blood, bone, etc.) have different conductivities for electrical signals. That is why it is sometimes not clear from which exact brain region an electrical signal comes.
Whereas EEG recordings provide a continuous measure of brain activity, event-related potentials (ERPs) are recordings which are linked to the occurrence of an event, such as the presentation of a stimulus. When a stimulus is presented, the electrodes placed on a person's scalp record changes in the brain generated by the thousands of neurons under the electrodes. By measuring the brain's response to an event we can learn how different types of information are processed. Presenting the word "eats" or "bake", for example, causes a positive potential at about 200 ms, from which one can conclude that our brain processes these words 200 ms after they are presented. This positive potential is followed by a negative one at about 400 ms, also called the N400 (where N stands for negative and 400 for the time). In general, a letter P or N denotes whether the deflection of the electrical signal is positive or negative, and a number represents, on average, how many hundreds of milliseconds after stimulus presentation the component appears. Event-related potentials are of special interest to researchers because different components of the response indicate different aspects of cognitive processing. For example, for the sentences "The cats won't eat" and "The cat won't bake", the N400 response to the word "eat" is smaller than to the word "bake". From this one can draw the conclusion that our brain needs about 400 ms to register information about a word's meaning. Furthermore, one can figure out where this activity occurs in the brain by looking at the position on the scalp of the electrodes that pick up the largest response.
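The core of an ERP analysis is averaging many stimulus-locked epochs so that the component of interest emerges from the background EEG. The following Python sketch uses an invented "N400-like" component and invented noise levels purely to demonstrate the averaging step.

```python
import numpy as np

fs = 250                                            # assumed sampling rate in Hz
epoch_t = np.arange(-0.2, 0.8, 1 / fs)              # 200 ms before to 800 ms after the stimulus
n_trials = 200
rng = np.random.default_rng(2)

# Hypothetical single-trial response: a negative deflection peaking around 400 ms,
# buried in ongoing EEG noise that is much larger than the component itself.
component = -4.0 * np.exp(-((epoch_t - 0.4) ** 2) / (2 * 0.05 ** 2))
trials = component + 15 * rng.standard_normal((n_trials, epoch_t.size))

erp = trials.mean(axis=0)                           # averaging cancels the random background
peak_time_ms = epoch_t[erp.argmin()] * 1000
print(round(peak_time_ms))                          # close to 400 ms, i.e. an "N400"
```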
Magnetoencephalography (MEG) is related to electroencephalography (EEG). However, instead of recording electrical potentials on the scalp, it uses the magnetic fields near the scalp to index brain activity. The magnetic field can be used to locate a current dipole, because the field pattern reflects the position and strength of the dipole. These magnetic fields are recorded using devices called SQUIDs (superconducting quantum interference devices).
MEG is mainly used to localize the source of epileptic activity and to locate primary sensory cortices. This is helpful because by locating them they can be avoided during neurological intervention. Furthermore, MEG can be used to understand more about the neurophysiology underlying psychiatric disorders such as schizophrenia. In addition, MEG can also be used to examine a variety of cognitive processes, such as language, object recognition and spatial processing among others, in people who are neurologically intact.
MEG has some advantages over EEG. First, magnetic fields are less influenced than electrical currents by conduction through brain tissues, cerebral spinal fluid, the skull and scalp. Second, the strength of the magnetic field can tell us information about how deep within the brain the source is located. However, MEG also has some disadvantages. The magnetic field in the brain is about 100 million times smaller than that of the earth. Due to this, shielded rooms, made out of aluminum, are required. This makes MEG more expensive. Another disadvantage is that MEG cannot detect activity of cells with certain orientations within the brain. For example, magnetic fields created by cells with long axes radial to the surface will be invisible.
Techniques for Modulating Brain Activity
History: Transcranial magnetic stimulation (TMS) is an important technique for modulating brain activity. The first modern TMS device was developed by Anthony Barker in 1985 in Sheffield, after 8 years of research. The field has developed rapidly since then, with many researchers using TMS to study a variety of brain functions. Today, researchers also try to develop clinical applications of TMS; because it has long-lasting effects on brain activity, it has been considered a possible alternative to antidepressant medication.
Method: TMS utilizes the principle of electromagnetic induction applied to an isolated brain region. A wire-coil electromagnet is held over the fixed head of the subject. It induces small, localized, and reversible changes in the living brain tissue; in particular, the directly underlying parts of the motor cortex can be affected. By altering the firing patterns of the neurons, the influenced brain area is temporarily disabled. Repetitive TMS (rTMS) describes, as the name reveals, the application of many short electrical stimulations at a high frequency, and is more common than single-pulse TMS. The effects of this procedure last up to weeks, and the method is in most cases used in combination with measuring methods in order to study its effects in detail.
Application: The TMS method gives more evidence about the functionality of certain brain areas than measuring methods on their own, and it was very helpful in mapping the motor cortex. For example, while rTMS is applied to the prefrontal cortex, the patient is not able to build up short-term memory; this indicates that the prefrontal cortex is directly involved in the process of short-term memory. By contrast, measuring methods on their own can only establish a correlation between the processes. Since even early researchers were aware that TMS could cause suppression of visual perception, speech arrest, and paraesthesias, TMS has been used to map specific brain functions in areas other than the motor cortex. Several groups have applied TMS to the study of visual information processing, language production, memory, attention, reaction time, and even more subtle brain functions such as mood and emotion. However, the long-term effects of TMS on the brain have not yet been investigated properly; therefore experiments on deeper brain regions, such as the hypothalamus or the hippocampus, are not yet carried out in humans. Although the potential utility of TMS as a treatment tool in various neuropsychiatric disorders is attracting rapidly increasing interest, its use in depression is the most extensively studied clinical application to date. For instance, in 1994 George and Wassermann hypothesized that intermittent stimulation of important prefrontal cortical brain regions might also cause downstream changes in neuronal function that would result in an antidepressant response. Here again, the method's effects are not yet understood well enough for it to be used in routine clinical treatment today. Although it is too early to tell whether TMS has long-lasting therapeutic effects, this tool has clearly opened up new hopes for clinical exploration and treatment of various psychiatric conditions. Further work in understanding normal mental phenomena and how TMS affects them appears to be crucial for advancement. A critically important step, which will ultimately guide clinical parameters, is to combine TMS with functional imaging to directly monitor TMS effects on the brain. Since TMS at different frequencies appears to have divergent effects on brain activity, TMS combined with functional brain imaging will help to better delineate not only the behavioural neuropsychology of various psychiatric syndromes, but also some of the pathophysiologic circuits in the brain.
Transcranial direct current stimulation (tDCS): the principle of tDCS is similar to that of TMS. Like TMS, it is a non-invasive and painless method of stimulation. The excitability of brain regions is modulated by the application of a weak electrical current.
History and development: It was observed early on that electrical current applied to the skull can lead to an alleviation of pain. Scribonius Largus, the court physician to the Roman emperor Claudius, found that the current released by the electric ray has positive effects on headaches. In the Middle Ages the same property of another fish, the electrical catfish, was used to treat epilepsy. Around 1800, so-called galvanism (concerned with what is today called electrophysiology) came up, and scientists like Giovanni Aldini experimented with electrical effects on the brain; a medical application of his findings was the treatment of melancholy. During the twentieth century, electrical stimulation was a controversial but nevertheless widespread method among neurologists and psychiatrists for the treatment of several kinds of mental disorders (e.g. electroconvulsive therapy, introduced by Ugo Cerletti).
Mechanism: tDCS works by fixing two electrodes to the skull. About 50 percent of the direct current applied to the skull reaches the brain. The current, supplied by a direct-current battery, is usually around 1 to 2 mA. The modulation of activity of the brain regions depends on the value of the current, on the duration of stimulation, and on the direction of current flow. While the former two mainly affect the strength of modulation and its persistence beyond the actual stimulation, the latter determines the kind of modulation. The direction of the current (anodal or cathodal) is defined by the polarity and position of the electrodes. Within tDCS two distinct modes of stimulation exist: in anodal stimulation the anode is placed near the brain region to be stimulated, and analogously, in cathodal stimulation the cathode is placed near the target region. The effect of anodal stimulation is that the positive charge leads to depolarization of the membrane potential in the underlying brain regions, whereas in cathodal stimulation hyperpolarization occurs due to the negative charge applied. The brain activity is thereby modulated: anodal stimulation leads to generally higher activity in the stimulated brain region. This result can also be verified with MRI scans, where an increased blood flow in the target region indicates a successful anodal stimulation.
Applications: From the description of the TMS method it should be obvious that there are various fields of application, ranging from identifying and mapping brain regions with cognitive functions to the treatment of mental disorders. Compared to TMS, an advantage of tDCS is that it can not only decrease but also increase the activity of a target brain region. The method could therefore provide an even better-suited treatment of mental disorders such as depression. The tDCS method has also already proven helpful for stroke patients by improving motor skills.
Besides using methods to measure the brain's physiology and anatomy, it is also important to have techniques for analyzing behaviour in order to get a better insight into cognition. Compared to the neuroscientific methods, which concentrate on neuronal activity of brain regions, behavioural methods focus on the overt behaviour of a test person. This can be realized with well-defined behavioural methods (e.g. eye tracking), test batteries (e.g. IQ tests) or measurements which are designed to answer specific questions concerning human behaviour. Furthermore, behavioural methods are often used in combination with the neuroscientific methods mentioned above. Whenever there is an overt reaction to a stimulus (e.g. a picture), these behavioural methods can be useful. Another goal of a behavioural test is to examine how damage to the central nervous system influences cognitive abilities.
A Concept of a Behavioural Test
Behavioural tests are performed to answer certain questions about human behaviour. In order to find an answer to such a question, a test strategy has to be developed. First it has to be carefully considered how to design the test so that the measurement results provide an accurate answer to the initial question: how can the test be conducted so that confounding variables are minimal and the focus really is on the problem? When an appropriate test arrangement is found, the next part is defining the test variables. The test is then conducted and possibly repeated until a sufficient amount of data is collected. The next step is the evaluation of the resulting data with suitable statistical methods. If the test reveals a significant result, further questions may arise about the neuronal activity underlying the behaviour; neuroscientific methods are then useful to investigate correlated brain activity. Methods which have proved to provide good evidence on a certain recurrent question about the cognitive abilities of subjects can be brought together in a test battery.
Example: Question: Does a noisy environment affect the ability to solve a certain problem?
Possible test design: Expose half of the subjects to a silent environment while they solve the same task as the other half in a noisy environment. In this example, confounding variables might be differing cognitive abilities of the participants. Test variables could be the time needed to solve the problem, the loudness of the noise, and so on. If the statistical evaluation (a minimal analysis sketch is given below) shows significance, probable further questions are: How does noise affect brain activity on a neuronal level?
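A minimal analysis sketch for this design, assuming made-up solution times and an independent-samples t-test, could look like this in Python:

```python
import numpy as np
from scipy import stats

# Hypothetical solution times in seconds (invented numbers, eight participants per group).
silent = np.array([41.2, 38.5, 44.0, 39.8, 42.1, 40.3, 37.9, 43.4])
noisy = np.array([47.6, 44.9, 52.3, 46.1, 49.8, 45.5, 51.0, 48.2])

# Independent-samples t-test: is the difference in mean solution time significant?
t_stat, p_value = stats.ttest_ind(noisy, silent)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("In this toy sample, noise significantly slowed problem solving.")
```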
If you are interested in doing a behavioural test on your own, visit the socialpsychology.org website.
A neuropsychological assessment can be achieved through the test battery approach, which gives an overview of a person's cognitive strengths and weaknesses by analyzing different cognitive abilities. A neuropsychological test battery is used by neuropsychologists to discover brain dysfunctions arising from neurological or psychiatric disorders. Such batteries do not only test various mental functions, but also the overall intelligence of a person.
The purpose of the following batteries is to find out whether a person suffers from brain damage or not. They work well in discriminating persons with brain damage from neurologically healthy persons, but less well when it comes to discriminating them from persons with psychiatric disorders. The Halstead-Reitan battery is the most popular one; the abilities tested range from basic sensory processing to complex reasoning. Furthermore, the Halstead-Reitan battery gives information about what caused the damage, about the brain areas that were harmed, and about the stage the damage has reached. Such information is very helpful for the development of a rehabilitation program. Another test battery, the Luria-Nebraska battery, is about twice as fast to administer as the Halstead-Reitan, and its tests are ordered according to twelve content scales (e.g. motor functions, reading, memory, etc.). These test batteries do not only focus on the data results, which assess the absolute level of performance; beyond that, they give attention to the qualitative manner of performance, which is useful for gaining a better understanding of the cognitive impairment.
Another example of test batteries is the determination of intelligence (IQ tests). The most commonly used tests to estimate the intelligence of a person are the Wechsler family of intelligence tests. Here is an example for one of them, the WAIS-III test, in which various cognitive abilities of adolescents and adults (aged 16 and over) are tested: firstly, the verbal-comprehension index, which is assessed according to performance on vocabulary, similarities and information; secondly, the perceptual-organization index, analyzing non-verbal abilities (e.g. visual-motor integration); thirdly, the working-memory index, evaluated according to a person's digit span, arithmetic performance and object assembly subtests; and finally the processing-speed index, according to digit-symbol coding and letter-number sequencing.
The Eye Tracking Procedure
Another important procedure for analyzing behaviour and cognition is eye tracking. This is a procedure for measuring either where we are looking (the point of gaze) or the motion of an eye relative to the head. There are different techniques for measuring the movement of the eyes, and the instrument that does the tracking is called a tracker. The first non-intrusive tracker was built by Guy Thomas Buswell.
Eye tracking has a long history, starting back in the 1800s. In 1879 Louis Emile Javal noticed that reading does not involve smooth sweeping of the eyes along the text, but rather a series of short stops, which are called fixations. This observation was one of the first attempts to examine where the eyes are directed. The book Alfred L. Yarbus published in 1967, after important eye-tracking research, is one of the most quoted eye-tracking publications ever. The eye-tracking procedure itself is not that complicated: video-based eye trackers are frequently used. A camera focuses on one or both eyes and records their movements while the viewer looks at some stimulus. Most modern eye trackers use contrast to locate the center of the pupil and create corneal reflections using infrared or near-infrared non-collimated light.
There are two general types of eye-tracking techniques. The first, Bright Pupil, is an effect close to the red-eye effect and appears when the illumination source is on the optical path (coaxial with it); when the source is offset from the optical path, the pupil appears dark (Dark Pupil). The Bright Pupil technique creates great contrast between the iris and the pupil, which allows tracking in lighting conditions from dark to very bright, but it is not effective for outdoor tracking. There are also different eye-tracking setups: some are head mounted, some require the head to be stable, and some automatically track the head during motion. The sampling rate of most of them is 30 Hz, but when there is rapid eye movement, for example during reading, the tracker must run at 240, 350 or even 1000-1250 Hz in order to capture the details of the movement. Eye movements are divided into fixations and saccades: when the eye movement pauses in a certain position there is a fixation, and a saccade when it moves to another position. The resulting series of fixations and saccades is called a scan path. Interestingly, most information from the eye is received during a fixation and not during a saccade. A fixation lasts about 200 ms during the reading of a text and about 350 ms during the viewing of a scene, and a saccade towards a new goal takes about 200 ms. Scan paths are used in analyzing cognitive intent, interest and salience.
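Splitting raw gaze samples into fixations and saccades is often done with a simple velocity threshold. The Python sketch below uses invented gaze data and a commonly cited threshold of about 30 degrees per second; both are assumptions for illustration, not parameters of any particular tracker.

```python
import numpy as np

fs = 250                                             # assumed tracker sampling rate in Hz
rng = np.random.default_rng(3)
# Hypothetical horizontal gaze positions in degrees: fixation, fast jump, fixation.
gaze = np.concatenate([np.full(100, 2.0),
                       np.linspace(2.0, 10.0, 10),
                       np.full(100, 10.0)])
gaze = gaze + 0.02 * rng.standard_normal(gaze.size)  # small measurement noise

velocity = np.abs(np.diff(gaze)) * fs                # degrees per second between samples
is_saccade = velocity > 30.0                         # velocity-threshold classification

print(f"fixation samples: {(~is_saccade).sum()}, saccade samples: {is_saccade.sum()}")
```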
Eye tracking has a wide range of applications: it is used to study a variety of cognitive processes, mostly visual perception and language processing, and it is used in human-computer interaction. It is also helpful for marketing and medical research. In recent years eye tracking has generated a great deal of interest in the commercial sector. Commercial eye-tracking studies present a target stimulus to consumers while a tracker is used to record the movement of the eyes. Some of the latest applications are in the field of automotive design: eye tracking can analyze a driver's level of attentiveness while driving and help prevent drowsiness from causing accidents.
Another major method used in cognitive neuroscience is the use of neural networks (computer modelling techniques) to simulate the action of the brain and its processes. These models help researchers to test theories of neuropsychological functioning and to derive principles governing brain-behaviour relationships.
In order to simulate mental functions in humans, a variety of computational models can be used. The basic component of most such models is a "unit", which one can imagine as showing neuron-like behaviour. These units receive input from other units, which is summed to produce a net input. The net input to a unit is then transformed into that unit's output, mostly using a sigmoid function. These units are connected together, forming layers. Most models consist of an input layer, an output layer and a "hidden" layer. The input layer simulates the taking up of information from the outside world, the output layer simulates the response of the system, and the "hidden" layer is responsible for the transformations which are necessary to perform the computation under investigation. The units of different layers are connected via connection weights, which represent the degree of influence that a unit in one layer has on a unit in another.
The most interesting and important property of these models is that they are able to "learn" without being provided specific rules. This ability to "learn" can be compared to the human ability to learn, for example, one's native language, for which nobody tells the learner "the rules". The computational models learn by extracting the regularity of relationships through repeated exposure: this exposure occurs via "training", in which input patterns are provided over and over again. The adjustment of the connection weights between units, as mentioned above, is responsible for learning within the system. Learning occurs because of changes in the interrelationships between units, which is thought to be similar to what happens in the nervous system.
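To make the idea of weight adjustment concrete, here is a minimal Python sketch of a three-layer network with sigmoid units that learns the XOR mapping by error-driven weight changes. The architecture, learning rate and training data are assumptions chosen only for illustration; this is not a description of any specific published model.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs (XOR)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # connection weights: input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)    # connection weights: hidden -> output
lr = 1.0

for _ in range(5000):                             # "training": repeated presentation of patterns
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Error-driven adjustment of connection weights (backpropagation of the output error).
    delta_out = (output - y) * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(final, 2))                         # usually close to [0, 1, 1, 0] after training
```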
- Filler, A. G. (2009). The history, development, and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI, DTI. Nature Precedings. DOI: 10.1038/npre.2009.3267.4
- Ward, J. (2006). The Student's Guide to Cognitive Neuroscience. New York: Psychology Press.
- Banich, M. T. (2004). Cognitive Neuroscience and Neuropsychology. Houghton Mifflin Company. ISBN 0618122109
- Gazzaniga, M. S. (2000). Cognitive Neuroscience. Blackwell Publishers. ISBN 0631216596
- Sparknotes.com (accessed 27 June 2007)
- Maeda, F., & Pascual-Leone, A. (2003). Transcranial magnetic stimulation: studying motor neurophysiology of psychiatric disorders. Springer-Verlag.
- Ilmoniemi, R. J., & Karhu, J. Report by the BioMag Laboratory, Helsinki University Central Hospital, and Nexstim Ltd.
- Jorge, R. E., Robinson, R. G., Tateno, A., Narushima, K., Acion, L., Moser, D., Arndt, S., & Chemerinski, E. Repetitive transcranial magnetic stimulation as treatment of poststroke depression: A preliminary study.
- Moates, D. R. An Introduction to Cognitive Psychology.
This photo shows the large white billowing eruption plume from Rabaul being carried in a westerly direction by the weak prevailing winds. At the base of the eruption column is a layer of yellow-brown ash being distributed by lower level winds. A sharp boundary moving outward from the center of the eruption in the lower cloud is a pulse of laterally-moving ash which results from a volcanic explosion. Image taken on 09/29/94 from STS-64 (STS64-116-064). Information Source: Shuttle Images at the Johnson Space Center in Houston, Texas.
There are two things to think about. The first is how the weather near an erupting volcano is being affected. The second is how large eruptions will affect the weather/climate around the world. I think more people are worried about the second issue than the first.
The main effect on weather right near a volcano is that there is often a lot of rain, lightning, and thunder during an eruption. This is because all the ash particles that are thrown up into the atmosphere are good at attracting/collecting water droplets. We don’t quite know exactly how the lightning is caused but it probably involves the particles moving through the air and separating positively and negatively charged particles.
Another problem in Hawai’i involves the formation of vog, or volcanic fog. The ongoing eruption there is very quiet, with lava flowing through lava tubes and then into the ocean. Up at the vent is an almost constant plume of volcanic fume that contains a lot of sulfur dioxide. This SO2 combines with water in the atmosphere to form sulfuric acid droplets that get carried in the trade winds around to the leeward side of the Big Island. The air quality there has been really poor since the eruption started in 1983 and they are getting pretty tired of it.
As for the worldwide effects of volcanic eruptions, these only occur when there are large explosive eruptions that throw material into the stratosphere. If it only gets into the troposphere it gets flushed out by rain.
The effects on the climate haven't been completely figured out. They seem to depend on the size of the particles (again, mostly droplets of sulfuric acid). If the particles are big, they let sunlight in but don't let heat radiated from the Earth's surface out, and the net result is a warmer Earth (the famous greenhouse effect). If the particles are smaller than about 2 microns, they block some of the incoming energy from the Sun and the Earth cools off a little. That seems to have been the effect of the Pinatubo eruption, where about half a degree of cooling was noticed around the world. Of course that doesn't just mean that things are cooler; there are also all kinds of effects on the wind circulation and on where storms occur.
An even more controversial connection involves whether or not volcanic activity on the East Pacific Rise (a mid-ocean spreading center) can cause warmer water at the surface of the East Pacific, and in that way generate an El Nino. Dr. Dan Walker here at the University of Hawai’i has noticed a strong correlation between seismic activity on the East Pacific Rise (which he presumes indicates an eruption) and El Nino cycles over the past ~25 years.
As a long-term average, volcanism produces about 5 x 10^11 kg of CO2 per year; that production, along with oceanic and terrestrial biomass cycling, maintained a carbon dioxide reservoir in the atmosphere of about 2.2 x 10^15 kg. Current fossil fuel and land use practices now introduce a net of about 17.6 x 10^12 kg of CO2 into the atmosphere per year and have resulted in a progressively increasing atmospheric reservoir of 2.69 x 10^15 kg of CO2. Hence, volcanism produces about 3% of the total annual CO2 input, with the other 97% coming from anthropogenic sources. For more detail, see Morse and Mackenzie, 1990, Geochemistry of Sedimentary Carbonates.
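To make the 3%/97% split explicit, here is the arithmetic from the figures above as a short Python check (the numbers are simply those quoted in the preceding paragraph):

```python
volcanic_co2 = 5e11           # kg CO2 per year from volcanism (long-term average)
anthropogenic_co2 = 17.6e12   # net kg CO2 per year from fossil fuels and land use

total = volcanic_co2 + anthropogenic_co2
print(f"volcanic share: {100 * volcanic_co2 / total:.1f}%")            # about 3%
print(f"anthropogenic share: {100 * anthropogenic_co2 / total:.1f}%")  # about 97%
```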
To get into the scientific literature read chapter 17 of Peter Francis’ excellent book, “Volcanoes – A Planetary Perspective.” The 33 references give you the important articles from Ben Franklin to the early 1990s. Good reading!
Thanks to Scott Rowland, Chuck Wood, and Don Thomas for writing this answer! I should note that this was written a few years back and a lot of work has focused on this area recently. Volcano World will be running a special feature on volcanism and climate this summer! |
Douglass Publishes Narrative of the Life of Frederick Douglass
Douglass' best-known work is his first autobiography Narrative of the Life of Frederick Douglass, an American Slave, published in 1845.
At the time, some skeptics attacked the book and questioned whether a black man could have produced such an eloquent piece of literature. The book received generally positive reviews and it became an immediate bestseller. Within three years of its publication, the autobiography had been reprinted nine times with 11,000 copies circulating in the United States; it was also translated into French and Dutch and published in Europe.
In 1845, just seven years after his escape from slavery, the young Frederick Douglass published this powerful account of his life in bondage and his triumph over oppression. The book, which marked the beginning of Douglass’s career as an impassioned writer, journalist, and orator for the abolitionist cause, reveals the terrors he faced as a slave, the brutalities of his owners and overseers, and his harrowing escape to the North. It has become a classic of American autobiography. |
Obsessive-compulsive disorder (OCD) is characterized by repetitive, unwanted, intrusive thoughts (obsessions) and irrational, excessive urges to do certain actions (compulsions). Although people with OCD may know that their thoughts and behavior don't make sense, they are often unable to stop them. Symptoms typically begin during childhood, the teenage years or young adulthood, although males often develop them at a younger age than females. 1.2% of U.S. adults experience OCD each year.

Symptoms

Most people have occasional obsessive thoughts or compulsive behaviors. In obsessive-compulsive disorder, however, these symptoms generally last more than an hour each day and interfere with daily life.

Obsessions are intrusive, irrational thoughts or impulses that repeatedly occur. People with these disorders know these thoughts are irrational but are afraid that somehow they might be true. These thoughts and impulses are upsetting, and people may try to ignore or suppress them. Examples of obsessions include:
- Thoughts about harming or having harmed someone
- Doubts about having done something right, like turning off the stove or locking a door
- Unpleasant sexual images
- Fears of saying or shouting inappropriate things in public

Compulsions are repetitive acts that temporarily relieve the stress brought on by an obsession. People with these disorders know that these rituals don't make sense but feel they must perform them to relieve the anxiety and, in some cases, to prevent something bad from happening. Like obsessions, people may try not to perform compulsive acts but feel forced to do so to relieve anxiety. Examples of compulsions include:
- Hand washing due to a fear of germs
- Counting and recounting money because a person can't be sure they added correctly
- Checking to see if a door is locked or the stove is off
- "Mental checking" that goes along with intrusive thoughts, which is also a form of compulsion

Causes

The exact cause of obsessive-compulsive disorder is unknown, but researchers believe that activity in several portions of the brain is responsible. More specifically, these areas of the brain may not respond normally to serotonin, a chemical that some nerve cells use to communicate with each other. Genetics are also thought to be very important: if you, your parent or a sibling have obsessive-compulsive disorder, there is around a 25% chance that another immediate family member will have it.

Diagnosis

A doctor or mental health care professional will make a diagnosis of OCD. A general physical with blood tests is recommended to make sure the symptoms are not caused by illicit drugs, medications, another mental illness, or a general medical condition. The sudden appearance of symptoms in children or older people merits a thorough medical evaluation to ensure that another illness is not the cause. To be diagnosed with OCD, a person must have:
- Obsessions, compulsions or both
- Obsessions or compulsions that are upsetting and cause difficulty with work, relationships or other parts of life, and that typically last for at least an hour each day

Treatment

A typical treatment plan will often include both psychotherapy and medications, and combined treatment is usually optimal. Medication, especially a type of antidepressant called a selective serotonin reuptake inhibitor (SSRI), is helpful for many people in reducing the obsessions and compulsions. Psychotherapy is also helpful in relieving obsessions and compulsions.
In particular, cognitive behavior therapy (CBT) and exposure and response prevention (ERP) therapy are effective for many people. Exposure and response prevention therapy helps a person tolerate the anxiety associated with obsessive thoughts while not acting out a compulsion to reduce that anxiety. Over time, this leads to less anxiety and more self-mastery. Though OCD cannot be cured, it can be treated effectively.

Related Conditions

There are related conditions that share some characteristics with OCD but are considered separate conditions.

Body dysmorphic disorder. This disorder is characterized by an obsession with physical appearance. Unlike simple vanity, BDD is characterized by obsessing over one's appearance and body image, often for many hours a day. Any perceived flaws cause significant distress and ultimately impede the person's ability to function. In some extreme cases, BDD can lead to bodily injury, either from infection due to skin picking, from excessive exercise, or from having unnecessary surgical procedures to change one's appearance.

Hoarding disorder. This disorder is defined by the drive to collect a large amount of useless or valueless items, coupled with extreme distress at the idea of throwing anything away. Over time, this situation can render a space unhealthy or dangerous to be in. Hoarding disorder can negatively impact someone emotionally, physically, socially and financially, and often leads to distress and disability. In addition, many hoarders cannot see that their actions are potentially harmful, and so may resist diagnosis or treatment.

Trichotillomania. Many people develop unhealthy habits such as nail biting or teeth grinding, especially during periods of high stress. Trichotillomania, however, is the compulsive urge to pull out (and possibly eat) one's own hair, including eyelashes and eyebrows. Some people may consciously pull out their hair, while others may not even be aware that they are doing it. Trichotillomania can create serious injuries, such as repetitive motion injury in the arm or hand, or, if the hair is repeatedly swallowed, the formation of hairballs in the stomach, which can be life threatening if left untreated. A similar illness is excoriation disorder, which is the compulsive urge to scratch or pick at the skin.
Fifty years ago, people of the world held their collective breath as Apollo 11 landed humans on the Moon for the first time. Before 20 July 1969 there were several early attempts to travel through space by means of accurate illustration, resulting in groundbreaking astronomical photography, deceptive imitation and an infamous hoax.
Enduring fascination with the Moon pre-dates a Newtonian telescope’s ability to bring the lunar surface within the grasp of the astronomical artist. Illustrations by Robert Hooke FRS (1635-1703), in his book Micrographia, provide a 1665 interpretation of lunar geology. Hooke constructed clay models based on his own lunar observations, then simulated distinctive craters, later attributing their creation to volcanic activity.
Sir John Herschel FRS (1792-1871), son of astronomer William Herschel FRS (1738-1822), continued exploration of the heavens, becoming a founding member of the Astronomical Society of London in 1820. Within his voluminous correspondence held at the Royal Society, there is an exchange of celestial minds in 1858, illuminating an age of Victorian lunar exploration. Warren de la Rue FRS (1815-1889) shared Herschel’s twin passions of astronomy and photography and their letters contain details of a breakthrough in virtual space travel, 111 years before the Eagle landed in 1969.
Herschel’s encounters with the Moon had not always been positive. In 1835 he became the innocent victim of what became known as the Great Moon Hoax, published by New York newspaper The Sun. A series of articles, written without Herschel’s knowledge, linked him with the discovery of life on the Moon, initially amusing but later irritating him.
The announcement of the invention of photography in 1839 triggered a space race of its own. In March 1840 Dr John William Draper of New York University announced that he had succeeded in getting ‘a representation of the Moon’s surface by the Daguerreotype.’
John Adams Whipple captured a full Moon in 1849. He used daguerreotype photography through the telescope at the Harvard College Observatory in Cambridge MA, in collaboration with William Cranch Bond and George Phillips Bond. A Daguerreotype of the Moon by Whipple was subsequently exhibited at the Great Exhibition in 1851 to critical acclaim.
James Nasmyth (1808-1890) produced captivating lunar images in sublime detail. But Nasmyth’s photographs were an illusion, faithful reproductions of intricate plaster models formed using his own astronomical observations. Nasmyth continued Hooke’s 1665 experiments, observing through a telescope, drawing in detail then moulding in plaster to form a three dimensional representation of the lunar surface.
Lunar imaging was not an exclusively male preserve. A contemporary of Robert Hooke, German astronomer and artist Maria Clara Eimmart (1676-1707), created illustrations of celestial phenomena including the Moon in support of her father’s astronomical observations. Early photographer Thereza Dillwyn Llewelyn (1834-1926) echoed Maria’s interest in astronomical illustration, joining her father John Dillwyn Llewelyn FRS (1810-1882) in celestial photographic exploration. From their observatory in South Wales within the grounds of the family home at Penllergare, father and daughter photographed the Moon successfully in 1858.
Astrophotography presented a significant challenge to capture a moving object in low light. Warren de la Rue dedicated many nights to the pursuit of the Moon using a clockwork driven equatorial reflecting telescope. He engaged the services of Robert Howlett (1831-1858), a professional photographer who owned both Gregorian and Newtonian telescopes. Howlett’s practical knowledge of increased sensitivity in photochemistry enabled De la Rue to capture an image of the Moon in only ten seconds by 1857.
Not content with single images of the lunar surface, De la Rue harnessed stereoscopic photography to create a representation of the Moon in three dimensions. This necessitated two photographs taken at the same lunar phase but at different libration, to create a slight visual difference between the two images. The images could be taken months apart, but the view was sometimes obscured by a cloudy sky when the ideal phase and libration were reached.
In October 1858 De la Rue sent a stereoscopic photograph of the Moon to Sir John Herschel, who commented on its ‘transcendent and wonderful effect … as if a giant with eyes some thousands of miles apart looked at the Moon through a binocular’. De la Rue quoted Sir John’s remark in his 1859 ‘Report on the present state of Celestial Photography in England’, for the British Association for the Advancement of Science.
The unique contribution of Warren de la Rue to astronomy was recalled by Royal Astronomical Society President John Lee FRS (1783-1866) in 1862. Lee observed that the enlarged copies, viewed stereoscopically, had ‘brought to light details of dykes, and terraces, and furrows and undulations of the lunar surface, of which no certain knowledge had previously existed…’.
Warren de la Rue’s pioneering work was exhibited in November 1858 at the Royal Astronomical Society. Howlett’s glass positive photographs enlarged from De la Rue’s negatives, backlit and mounted on a bespoke reflecting stereoscope, displayed the Moon in Victorian virtual reality over a century before Apollo 11 blasted off. The Royal Astronomical Society reported that ‘The appearance of rotundity over the whole surface of the Moon is perfect; and parts which are as plain surfaces in the single photograph in the stereoscope present the most remarkable undulations and irregularities.’
De la Rue and Howlett’s brief collaboration ended one month later with the young photographer’s tragic death. But their symbiotic contribution to astronomy has been preserved in a rare stereoscopic photograph of two full Moons.
It was one small shot of the Moon, one giant leap for photography. |
The whole place is so still, gloomy, and desolate, that it goes by the name of the “Great Dismal Swamp,” and you see we have here what might well be the beginning of a bed of coal; for we know that peat when dried becomes firm and makes an excellent fire, and that if it were pressed till it was hard and solid it would not be unlike coal. If, then, we can explain how this peaty bed has been kept pure from earth, we shall be able to understand how a coal-bed may have been formed, even though the plants and trees which grow in this swamp are different from those which grew in the coal-forests.
The explanation is not difficult; streams flow constantly, or rather ooze into the Great Dismal Swamp from the land that lies to the west, but instead of bringing mud in with them as rivers bring to the sea, they bring only clear, pure water, because, as they filter for miles through the dense jungle of reeds, ferns, and shrubs which grow round the marsh, all the earth is sifted out and left behind. In this way the spongy mass of dead plants remains free from earthy grains, while the water and the shade of the thick forest of trees prevent the leaves, stems, etc., from being decomposed by the air and sun. And so year after year as the plants die they leave their remains for other plants to take root in, and the peaty mass grows thicker and thicker, while tall cedar trees and evergreens live and die in these vast, swampy forests, and being in loose ground are easily blown down by the wind, and leave their trunks to be covered up by the growing moss and weeds.
Now we know that there were plenty of ferns and of large Calamites growing thickly together in the coal-forests, for we find their remains everywhere in the clay, so we can easily picture to ourselves how the dense jungle formed by these plants would fringe the coal-swamp, as the present plants do the Great Dismal Swamp, and would keep out all earthy matter, so that year after year the plants would die and form a thick bed of peat, afterwards to become coal.
The next thing we have to account for is the bed of shale or hardened clay covering over the coal. Now we know that from time to time land has gone slowly up and down on our globe so as in some places to carry the dry ground under the sea, and in others to raise the sea-bed above the water. Let us suppose, then, that the great Dismal Swamp was gradually to sink down so that the sea washed over it and killed the reeds and shrubs. Then the streams from the west would not be sifted any longer but would bring down mud, and leave it, as in the delta of the Nile or Mississippi, to make a layer over the dead plants. You will easily understand that this mud would have many pieces of dead trees and plants in it, which were stifled and died as it covered them over; and thus the remains would be preserved like those which we find now in the roof of the coal-galleries. |
Perspicuity... The English Language in Cyberspace
Students learn about some style problems in the English language through this website-based reading.
See similar resources:
English Language and Composition: Sex Education in Schools
Although designed for the essay portion of the AP English Language and Composition exam, this exercise provides an excellent opportunity for learners to practice using information found in primary source documents to support an argument.
11th - 12th English Language Arts
Welcome to the Color Vowel Chart
Focus English language learners' attention on word stress and phrase stress with a pronunciation chart that breaks the sounds into moving and non-moving vowel sounds. The chart tool uses colors and key words to indicate where to put the...
4th - 12th English Language Arts CCSS: Adaptable
Language Focus and Vocabulary Unit Three: Present Perfect
Give your learners the tools and vocabulary to talk about the Internet and related technology. There are six exercises on this sheet that focus on the present perfect and technology-related vocabulary. English language learners fill in...
6th - 12th English Language Arts CCSS: Adaptable
Beginner Level Thanksgiving ESL Lesson Plan
Thanksgiving is a cherished tradition in the United States and Canada. Introduce the beginnings of the Thanksgiving celebration with a resource that features reading comprehension activities, vocabulary exercises, and a short writing...
4th - 8th English Language Arts CCSS: Adaptable
A Brief History of the English Language
Modern English isn't just a band, it's a stop along the path to the English language. The presentation begins with a look at the origins of old, middle, and modern English. It covers the invasions or influences that shaped each period of...
10th - 12th Social Studies & History
Plath, Personification, and Figurative Language in "Mirror"
What will your class members see in Sylvia Plath's "Mirror"? After reading the poem, learners engage in a Socratic seminar prompted by the provided questions. Individuals then create an illustration, focusing on the personification and...
11th - 12th English Language Arts CCSS: Designed
English Placement Test
Ninety multiple choice questions make up an interactive test designed to examine scholars' English language proficiency. After completing the test, learners have the chance to view their results, retest, or preview an answer key.
9th - Higher Ed English Language Arts CCSS: Adaptable
How Many Verb Tenses Are There in English?
You don't need a time machine to travel through time—you just need the English language! Explore the 12 possible ways to describe something that happened, something that is still happening, or something that will happen with an engaging...
4 mins 6th - 12th English Language Arts CCSS: Adaptable
A “New English” in Chinua Achebe’s “Things Fall Apart”: A Common Core Exemplar
To examine the “New English” Chinua Achebe uses in Things Fall Apart, readers complete a series of worksheets that ask them to examine similes, proverbs, and African folktales contained in the novel. Individuals explain the meaning...
9th - 12th English Language Arts CCSS: Designed |
HTML forms are a special kind of HTML page that can be used to pass data to a server. Once the server gets the data, it may manipulate the data and send some of it back, or it may store it in a database for later use.
An HTML form will contain input elements like labels, text fields, check boxes, radio buttons, submit buttons, and more. A form can also present lists, from which the user can make a selection, or a text area where multi-line typing is allowed.
The basic structure of a form is as follows:
The form tags go inside the <body> tag. The data in the form is sent to the page specified in the form’s action attribute. The file defined in the action attribute usually does something with the received input:
<form name="form_name" action="postSignup.php" method="get">
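To show the basic structure end to end, here is a minimal sketch built around the opening tag above, with its matching closing tag; the input elements described later on this page go between the two:

<form name="form_name" action="postSignup.php" method="get">
  <!-- input elements (text fields, buttons, and so on) go here -->
</form>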
HTML Form Actions & Methods
When you define a form, there are two required attributes: action and method.
The action attribute (action=) indicates the name of the file that the form will be submitted to. The method attribute (method=) specifies how the form will be submitted.
The file defined in the action attribute usually does something with the received input, like put it into a database or send back some of the values to the user. Here’s an example of a simple form with action and method attributes.
<form name="input" action="form_action.php" method="post">
Full Name : <input type="text" name="fullname" />
<input type="submit" value="Submit">
</form>
The Input tag
The most common form element is the <input> element, which is used to collect information from the user. An <input> element has several variations, which depend on the type attribute. An <input> element also has a name attribute, so you can refer to it later.
In general, the syntax is:
<input type="type" name="name" />
An <input> element can be of type text, checkbox, password, radio button, submit button, and more. The common types are described below.
<input type="text"> defines a one-line input field that a user can enter text into:
<input type="text" name="firstname" /><br />
<input type="text" name="lastname" />
<input type="password"> defines a password field. The password field is just like the text field, except the text that is typed in is not displayed on the screen.
Password: <input type="password" name="userpwd" />
Note that a password field doesn’t secure the data, it only hides it on the screen.
<input type="radio"> defines a radio button. Radio buttons let a user select one (and only one) of a limited number of presented choices:
Pick your favorite color :<br />
<input type="radio" name="color" value="red" />Red <br />
<input type="radio" name="color" value="green" />Green <br />
<input type="radio" name="color" value="Blue" />Blue
<input type="checkbox"> defines a checkbox. Checkboxes let a user select ZERO or MORE options of a limited number of choices.
<input type="checkbox" name="vehicle" value="Bike" />I have a bike <br />
<input type="checkbox" name="vehicle" value="Car" />I have a Car <br />
<input type="submit"> defines the submit button.
A submit button is used when the user has filled in the form, and is ready to send (“submit”) the data they entered to the server. The data is sent to the page specified in the form’s action attribute. |
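Putting these pieces together, here is a small illustrative form. It is only a sketch: the file name process_form.php and the field names are placeholder assumptions, not part of any particular site.

<form name="example" action="process_form.php" method="get">
  <!-- all names below (fullname, userpwd, color, vehicle) are illustrative placeholders -->
  Full Name: <input type="text" name="fullname" /><br />
  Password: <input type="password" name="userpwd" /><br />
  <input type="radio" name="color" value="red" />Red
  <input type="radio" name="color" value="green" />Green<br />
  <input type="checkbox" name="vehicle" value="Bike" />I have a bike<br />
  <input type="submit" value="Submit" />
</form>

Because this sketch uses method="get", the submitted values would appear in the URL, for example process_form.php?fullname=Ada&color=red&vehicle=Bike; with method="post" they travel in the body of the request instead and do not show up in the address bar.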
Formally known as the World Commission on Environment and Development (WCED), the Brundtland Commission's mission is to unite countries to pursue sustainable development together. The Chairman of the Commission, Gro Harlem Brundtland, was appointed by Javier Pérez de Cuéllar, former Secretary General of the United Nations, in December 1983. At the time, the UN General Assembly realized that there was a heavy deterioration of the human environment and natural resources. To rally countries to work and pursue sustainable development together, the UN decided to establish the Brundtland Commission. Gro Harlem Brundtland was the former Prime Minister of Norway and was chosen due to her strong background in the sciences and public health. The Brundtland Commission officially dissolved in December 1987 after releasing Our Common Future, also known as the Brundtland Report, in October 1987, a document which coined, and defined the meaning of the term "Sustainable Development". Our Common Future won the University of Louisville Grawemeyer Award in 1991. The organization Center for Our Common Future was started in April 1988 to take the place of the Commission.
Ten years after the 1972 United Nations Conference on the Human Environment (the Stockholm Conference), most of the global environmental challenges had clearly not been adequately addressed. In several ways, these challenges had grown. In particular, the underlying problem of how to reduce poverty in low-income countries through a more productive and industrialized economy without, in the process, exacerbating the global and local environmental burdens remained unresolved. Neither high-income countries in the North nor low-income countries in the South were willing to give up an economic development based on growth, but environmental threats, ranging from pollution, acid rain, deforestation and desertification, the destruction of the ozone layer, to early signs of climate change, were impossible to overlook and increasingly unacceptable. There was a tangible need for a developmental concept that would allow reconciling economic development with environmental protection.

Views differed on several questions: were local environmental problems the result of local developments or of a global economic system that forced particularly low-income countries to destroy their environmental basis? Did environmental burdens result mainly from destructive economic growth-based development or from a lack of economic development and modernization? Would reconciling the economy and the environment require mainly technical means, by using more resource-efficient technologies, or mainly social and structural changes that would include political decision-making as well as changes in private consumption patterns?

The 1980 World Conservation Strategy of the International Union for the Conservation of Nature was the first report that included a very brief chapter on a concept called "sustainable development". It focused on global structural changes and was not widely read. The UN initiated an independent commission, which was asked to provide an analysis of existing problems and ideas for their solution, similar to earlier commissions such as the Independent Commission on International Development Issues (Brandt Commission) and the Independent Commission on Disarmament and Security Issues (Palme Commission).
In December 1983, the Secretary General of the United Nations, Javier Pérez de Cuéllar, asked the Prime Minister of Norway, Gro Harlem Brundtland, to create an organization independent of the UN to focus on environmental and developmental problems and solutions after an affirmation by the General Assembly resolution in the fall of 1984. This new organization was the Brundtland Commission, or more formally, the World Commission on Environment and Development (WCED). The Brundtland Commission was first headed by Gro Harlem Brundtland as Chairman and Mansour Khalid as Vice-Chairman.
The organization aimed to create a united international community with shared sustainability goals by identifying sustainability problems worldwide, raising awareness about them, and suggesting the implementation of solutions. In 1987, the Brundtland Commission published the first volume of “Our Common Future,” the organization’s main report. “Our Common Future” strongly influenced the Earth Summit in Rio de Janeiro, Brazil in 1992 and the third UN Conference on Environment and Development in Johannesburg, South Africa in 2002. Also, it is credited with crafting the most prevalent definition of sustainability, as seen below.
Events Before Brundtland
During the 1980s it had been revealed that the World Bank had started to experience an expanded role in intervening with the economic and social policies of the Third World. This was most notable through the events at Bretton Woods in 1945. The ideas of neoliberalism and the institutions promoting economic globalization dominated the political agenda of the world's then leading trading nations: the United States under President Ronald Reagan and Great Britain under Prime Minister Margaret Thatcher, both strident Conservatives.
These events led into an era of free markets built on a distortion of the international order forged in 1945 at Bretton Woods. Bretton Woods was transformed through the 1980s and 1990s, finally ending in 1995 with the establishment of the World Trade Organization ushered in by United States President Bill Clinton. Bretton Woods was formed as an arrangement among the industrialized nation states, but was transformed into a global regime of ostensibly free markets that privileged multinational corporations and actually undermined the sovereignty of the very national communities that established Bretton Woods.
The Brundtland Report was intended as a response to the conflict between the nascent order promoting globalized economic growth and the accelerating ecological degradation occurring on a global scale. The challenge posed in the 1980s was to harmonize prosperity with ecology. This postulated finding the means to continue economic growth without undue harm to the environment. To address the urgent needs of developing countries (Third World), the United Nations saw a need to strike a better balance of human and environmental well-being. This was to be achieved by redefining the concepts of economic development as the new idea of sustainable development, as it was christened in the Brundtland Report.
To understand this paradigm shift, we start with the meaning of the key term: development.
Resolution establishing the Commission
The 1983 General Assembly passed Resolution 38/161 "Process of preparation of the Environmental Perspective to the Year 2000 and Beyond", establishing the Commission. In A/RES/38/161, the General Assembly:
- "8. Suggests that the Special Commission, when established, should focus mainly on the following terms of reference for its work:
- (a) To propose long-term environmental strategies for achieving sustainable development to the year 2000 and beyond;
- (b) To recommend ways in which concern for the environment may be translated into greater co-operation among developing countries and between countries at different stages of economic and social development and lead to the achievement of common and mutually supportive objectives which take account of the interrelationships between people, resources, environment and development;
- (c) To consider ways and means by which the international community can deal more effectively with environmental concerns, in the light of the other recommendations in its report;
- (d) To help to define shared perceptions of long-term environmental issues and of the appropriate efforts needed to deal successfully with the problems of protecting and enhancing the environment, a long-term agenda for action during the coming decades, and aspirational goals for the world community, taking into account the relevant resolutions of the session of a special character of the Governing Council in 1982;"
Modern definition of sustainable development
The Brundtland Commission draws upon several notions in its definition of sustainable development, which is the most frequently cited definition of the concept to date.
A key element in the definition is the unity of environment and development. The Brundtland Commission argues against the assertions of the 1972 Stockholm Conference on the Human Environment and provides an alternative perspective on sustainable development, unique from that of the 1980 World Conservation Strategy of the International Union for the Conservation of Nature. The Brundtland Commission pushed for the idea that while the "environment" was previously perceived as a sphere separate from human emotion or action, and while "development" was a term habitually used to describe political goals or economic progress, it is more comprehensive to understand the two terms in relation to each other (We can better understand the environment in relation to development and we can better understand development in relation to the environment, because they cannot and should not be distinguished as separate entities). Brundtland argues:
"...the "environment" is where we live; and "development" is what we all do in attempting to improve our lot within that abode. The two are inseparable."
The Brundtland Commission insists upon the environment being something beyond physicality, going beyond that traditional school of thought to include social and political atmospheres and circumstances. It also insists that development is not just about how poor countries can ameliorate their situation, but what the entire world, including developed countries, can do to ameliorate our common situation.
The term sustainable development was coined in the paper Our Common Future, released by the Brundtland Commission. Sustainable development is the kind of development that meets the needs of the present without compromising the ability of future generations to meet their own needs. The two key concepts of sustainable development are:
- the concept of "needs", in particular the essential needs of the world's poorest people, to which they should be given overriding priority; and
- the idea of limitations imposed by the state of technology and social organization on the environment's ability to meet both present and future needs.
Most agree that the central idea of the Brundtland Commission's definition of "sustainable development" is that of intergenerational equity. In sum, the "needs" are basic and essential, economic growth will facilitate their fulfillment, and equity is encouraged by citizen participation. Therefore, another characteristic that really sets this definition apart from others is the element of humanity that the Brundtland Commission integrates.
The particular ambiguity and openness-to-interpretation of this definition has allowed for widespread support from diverse efforts, groups and organizations. However, this has also been a criticism; perceived by some notable commentators as "self-defeating and compromised rhetoric". It nonetheless lays out a core set of guiding principles that can be enriched by an evolving global discourse. As a result of the work of the Brundtland Commission, the issue of sustainable development is on the agenda of numerous international and national institutions, as well as corporations and city efforts. The definition gave light to new perspectives on the sustainability of an ever-changing planet with an ever-changing population.
The Report of the Brundtland Commission, Our Common Future, was published by Oxford University Press in 1987, and was welcomed by the General Assembly Resolution 42/187. One version with links to cited documents is available.
The document was the culmination of a “900 day” international exercise which catalogued, analysed, and synthesised written submissions and expert testimony from “senior government representatives, scientists and experts, research institutes, industrialists, representatives of non-governmental organizations, and the general public” held at public hearings throughout the world.
The Brundtland Commission's mandate was to: “ re-examine the critical issues of environment and development and to formulate innovative, concrete, and realistic action proposals to deal with them; strengthen international cooperation on environment and development and assess and propose new forms of cooperation that can break out of existing patterns and influence policies and events in the direction of needed change; and raise the level of understanding and commitment to action on the part of individuals, voluntary organizations, businesses, institutes, and governments” (1987: 347). “The Commission focused its attention on the areas of population, food security, the loss of species and genetic resources, energy, industry, and human settlements - realizing that all of these are connected and cannot be treated in isolation one from another” (1987: 27).
The Brundtland Commission Report recognised that human resource development in the form of poverty reduction, gender equity, and wealth redistribution was crucial to formulating strategies for environmental conservation, and it also recognised that environmental limits to economic growth in industrialised and industrialising societies existed. As such, the Report offered “[the] analysis, the broad remedies, and the recommendations for a sustainable course of development” within such societies (1987: 16). However, the Report was unable to identify the mode(s) of production that are responsible for degradation of the environment, and in the absence of analysing the principles governing market-led economic growth, the Report postulated that such growth could be reformed (and expanded); this lack of analysis resulted in an obfuscated introduction of the term sustainable development.
The report deals with sustainable development and the change of politics needed for achieving it. The definition of this term in the report is quite well known and often cited:
- "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs". It contains two key concepts:
- the concept of "needs", in particular the essential needs of the world's poor, to which overriding priority should be given; and
- the idea of limitations imposed by the state of technology and social organization on the environment's ability to meet present and future needs."
The Brundtland Commission was chaired by former Norwegian Prime Minister Gro Harlem Brundtland. Politicians, civil servants, and environmental experts make up the majority of the members. Members of the commission represent 21 different nations (both developed and developing countries are included). Many of the members are important political figures in their home country. One example is William Ruckelshaus, former head of the U.S. Environmental Protection Agency. All members of the commission were appointed by both Gro Harlem Brundtland and Mansour Khalid, the Chairman and Vice Chairman.
The commission focuses on setting up networks to promote environmental stewardship. Most of these networks make connections between governments and non-government entities. One such network is Bill Clinton's Council on Sustainable Development. In this council government and business leaders come together to share ideas on how to encourage sustainable development. The Brundtland Commission has been the most successful in forming international ties between governments and multinational corporations. The 1992 and 2002 Earth Summits were the direct result of the Brundtland Commission. The international structure and scope of the Brundtland Commission allow multiple problems (such as deforestation and ozone depletion) to be looked at from a holistic approach.
The three main pillars of sustainable development include economic growth, environmental protection, and social equality. While many people agree that each of these three ideas contribute to the overall idea of sustainability, it is difficult to find evidence of equal levels of initiatives for the three pillars in countries' policies worldwide. With the overwhelming number of countries that put economic growth on the forefront of sustainable development, it is evident that the other two pillars have been suffering, especially with the overall well being of the environment in a dangerously unhealthy state. The Brundtland Commission has put forth a conceptual framework that many nations agree with and want to try to make a difference with in their countries, but it has been difficult to change these concepts about sustainability into concrete actions and programs. Implementing sustainable development globally is still a challenge, but because of the Brundtland Commission's efforts, progress has been made. After releasing their report, Our Common Future, the Brundtland Commission called for an international meeting to take place where more concrete initiatives and goals could be mapped out. This meeting was held in Rio de Janeiro, Brazil. A comprehensive plan of action, known as Agenda 21, came out of the meeting. Agenda 21 entailed actions to be taken globally, nationally, and locally in order to make life on Earth more sustainable going into the future.
Economic Growth is the pillar that most groups focus on when attempting to attain more sustainable efforts and development. In trying to build their economies, many countries focus their efforts on resource extraction, which leads to unsustainable efforts for environmental protection as well as economic growth sustainability. While the Commission was able to help to change the association between economic growth and resource extraction, the total worldwide consumption of resources is projected to increase in the future. So much of the natural world has already been converted into human use that the focus cannot simply remain on economic growth and omit the ever growing problem of environmental sustainability. Agenda 21 reinforces the importance of finding ways to generate economic growth without hurting the environment. Through various trade negotiations such as improving access to markets for exports of developing countries, Agenda 21 looks to increase economic growth sustainability in countries that need it most.
Environmental Protection has become more important to government and businesses over the last 20 years, leading to great improvements in the number of people willing to invest in green technologies. For the second year in a row in 2010, the United States and Europe added more power capacity from renewable sources such as wind and solar. In 2011 the efforts continued, with 45 new wind energy projects beginning in 25 different states. The focus on environmental protection has spread globally as well, including a great deal of investment in renewable energy power capacity. Eco-city development occurring around the world helps to develop and implement water conservation, smart grids with renewable energy sources, LED street lights and energy-efficient buildings. The consumption gap remains: "roughly 80 percent of the natural resources used each year are consumed by about 20 percent of the world's population". This level is striking and still needs to be addressed now and throughout the future.
Social equality and equity as pillars of sustainable development focus on the social well-being of people. The growing gap between the incomes of rich and poor is evident throughout the world, with the incomes of richer households increasing relative to those of middle- or lower-class households. This is attributed partly to land distribution patterns in rural areas, where the majority live off the land. Global inequality has been declining, but the world is still extremely unequal, with the richest 1% of the world’s population owning 40% of the world’s wealth and the poorest 50% owning around 1%. The Brundtland Commission made a significant impact by trying to link environment and development and thus move away from the idea of environmental protection pursued for its own sake, in which some scholars saw the environment as valuable only in itself. The Commission has thus reduced the number of people living on less than a dollar a day to just half of what it used to be, as many can now approach the environment and use it. These achievements can also be attributed to economic growth in China and India.
Members of the Commission
Chairman: Gro Harlem Brundtland (Norway)
Vice Chairman: Mansour Khalid (Sudan)
Susanna Agnelli (Italy)
Saleh A. Al-Athel (Saudi Arabia)
Pablo Gonzalez Casanova (Mexico) (ceased to participate in August 1986 for personal reasons)
Bernard Chidzero (Zimbabwe)
Lamine Mohammed Fadika (Côte d'Ivoire)
Volker Hauff (Federal Republic of Germany)
István Láng (Hungary)
Ma Shijun (People's Republic of China)
Margarita Marino de Botero (Colombia)
Nagendra Singh (India)
Paulo Nogueira (Brazil)
Saburo Okita (Japan)
Shridath S. Ramphal (Guyana)
William D. Ruckelshaus (USA)
Mohamed Sahnoun (Algeria)
Emil Salim (Indonesia)
Bukar Shaib (Nigeria)
Vladimir Sokolov (USSR)
Janez Stanovnik (Yugoslavia)
Maurice Strong (Canada)
- Agenda 21
- Our Common Future
- Sustainable Development
- Nuclear power proposed as renewable energy
- "1991- The United Nations World Commission on Environment and Development".
- Iris Borowy, Defining Sustainable Development: the World Commission on Environment and Development (Brundtland Commission), Milton Park: earthscan/Routledge, 2014.
- History of Sustainability
- This Norwegian's past may connect with your future
- worldsustainability / PreludeToBrundtland
- United Nations. 1983. "Process of preparation of the Environmental Perspective to the Year 2000 and Beyond." General Assembly Resolution 38/161, 19 December 1983. Retrieved: 2007-04-11.
- Environment Magazine - What Is Sustainable Development? Goals, Indicators, Values, and Practice
- Manns, J., "Beyond Brundtland's Compromise", Town & Country Planning, August 2010, pp. 337-340
- United Nations. 1987. Report of the World Commission on Environment and Development, General Assembly Resolution 42/187, 11 December 1987. Retrieved: 2007-11-14
- Our Common Future, Report of the World Commission on Environment and Development, World Commission on Environment and Development, 1987. Published as Annex to General Assembly document A/42/427, Development and International Co-operation: Environment August 2, 1987. Retrieved, 2007.11.14
- DSD :: Resources - Publications - Core Publications
National Geographic put out several maps of the Soviet Union from the time before it became a constitutionally socialist state in 1922, till after its dissolution in 1991. Russia was the largest and dominant state of the Soviet Union. After the collapse of the U.S.S.R. all the former Soviet Socialist Republics became independent states.
Below is a list of the maps of the Soviet Union produced by National Geographic.
1. New Balkan States and Central Europe - August 1914 (23"x18") This historic political map captured Central Europe as it stood at the start of the First World War.
1. Union of Soviet Socialist Republics, 1938-1944 - December 1944 (40.5"x26") This map from the World War II era has international boundaries according to Russian treaties of October 1, 1944. It also has boundaries noted in red as of January 1, 1938, before Germany invaded Poland.
2. Poland and Czechoslovakia - September 1958 (25"x19") This political map shows the boundaries between communist and non-communist countries. It contains notes about territories and administration during the cold war era.
3. Western Soviet Union - September 1959 (25"x19") This political map came out during the cold war era. It was the era that the Soviets launched the first artificial satellite into orbit.
4. The Balkans - February 1962 (25"x19") This map clearly shows the boundaries between the communist and non-communist countries. It shows Yugoslavia before the country was renamed the Socialist Federal Republic of Yugoslavia.
5. Eastern Soviet Union - March 1967 (25"x19") This map features many physical and political details. It has the 1938 Soviet boundaries along with the northern limits of wooded country.
6. Peoples of the Soviet Union - February 1976 (37"x23")
Side A: This map contains a wealth of information on the diverse cultures of the Soviet Union. It has illustrations and information about the cultures of twenty-four ethnic groups.
7. Soviet Union - February 1976 (37"x23")
Side B: This is a detailed political map of the Soviet Union giving a historical snapshot of it during this era.
8. The Face and Faith of Poland - April 1982 (37"x22.5")
Side A: This has a small physical map of Poland while it was still part of the Soviet bloc. It gives all kinds of info such as the population, major cities, climate, religion, industries and much more. It also contains seven small maps showing Poland's boundaries during different times in history. Most of this map, however, has pictures and write-ups on the history of Poland.
9. Poland - April 1982 (37"x22.5")
Side B: This side of the map is completely filled with beautiful pictures and write-ups of the history and people of Poland.
10. Soviet Union - March 1990 (36.5"x22")
Side A: This is a political map of the Soviet Union just months before its collapse. It has an inset map showing population density.
11. Soviet Union - March 1990 (36.5"x22")
Side B: This map includes nine individual maps having an abundance of information on the Federation of Countries of the Soviet Union and their diverse peoples.
1. Russia and the Newly Independent Nations of the Former Soviet Union - March 1993 (35.5"x22")
Side A: This is a political map showing the boundaries of Russia and the newly independent nations after the fall of the Soviet Union.
2. Communism to Capitalism - March 1993 (35.5"x22")
Side B: Included is a large physical map of Russia and the newly formed nations. It has small insets showing the GNP per capita, exports, imports and populations of each new nation. There is also a write-up and timeline with illustrations and information showing the evolution of the Soviet state from 1613 to 1991. |
Pre-K Page Per Day: Letters
Learn Alphabet Basics with Just One Page of Activities Each Day!
Sylvan Learning's Pre-K Page Per Day: Letters uses engaging games and activities to help children become familiar with alphabet basics, including:
· Alphabet Recognition
· Uppercase Letters
· Lowercase Letters
· Writing Letters
Students develop letter recognition skills while they complete fun activities, such as:
· Following clear instructions to learn how to write each letter through tracing exercises
· Singing letter-of-the-day songs to familiar tunes such as "Bingo" and "Wheels on the Bus"
· Making letter art from everyday objects, such as an "M" out of two pairs of pants or an "N" from three pencils
· And much more!
With perforated pages that can easily be removed for short, portable lessons, Pre-K Page Per Day: Letters will help give your child daily exposure to activities that are both fun and educational! |
Silver poplar trees (Populus alba) have distinctive white foliage and bark that form a spreading crown with a rounded shape. These trees have invasive roots and can create a significant amount of litter in your yard. You can expect these resilient and adaptable trees to grow quickly under a broad variety of growing conditions in U.S. Department of Agriculture hardiness zones 3 to 9.
Size and Appearance
The silver poplar grows quickly and typically reaches heights between 60 and 100 feet with foliage that spreads out between 40 and 60 feet from the trunk. The tree's leaves grow 2 to 4 inches long and resemble maple leaves in form. The leaves are bright green with a silvery color on the back that continues up the succulent shoots that attach the leaves to the stems. In the fall the leaves sometimes take on a bright yellow or gold color. The flowers of the poplar are yellowish catkins that form on the tree in the spring before the leaves unfurl. Poplar flowers are relatively small and take the form of multiple drooping stems covered in fuzzy growth. The seeds of the tree begin to mature when the leaves reach their full size.
The silver poplar tree grows best in open areas where the tree will receive full exposure to sunlight throughout the day. You can grow silver poplar successfully in most soil types with a pH rating between 6 and 8. Silver poplar does not grow well in wet areas where the soil is regularly soaked with water. Compared to other species of poplar, the silver poplar is more tolerant of dry growing conditions and can grow in areas where the tree is exposed to salt spray and other areas with high soil salinity.
Pests and Diseases
There are a number of pests and diseases that can damage silver poplar trees. Some common diseases that can affect silver poplar include leaf spots, cankers on stems or branches, and leaf rust. The main insect pests of silver poplar are the carpenterworm and poplar borer. These boring insects can cause serious cosmetic damage to your trees and make them more susceptible to other diseases.
The root systems of silver poplar trees are aggressive and can cause problems if they are planted in the wrong location. The roots of silver poplars can clog drains and septic leach fields, and they may penetrate the foundations of buildings. Avoid planting silver poplars within 100 feet of a building. The roots of silver poplar also produce suckers that can spread the tree into the surrounding area, forming a thicket of silver poplars. Silver poplar trees also drop stems, leaves, seeds and flowers throughout the year and can produce a significant amount of litter on the ground around the tree. The limbs of silver poplars are relatively brittle and can break under heavy winds or a load of snow. |
Circulating viral infections may help explain the temporal and geographical patterns associated with the risk of developing childhood celiac disease, conclude Swedish researchers in the Archives of Disease in Childhood.
But the role of vitamin D during pregnancy may also have a part to play, they suggest.
They base their findings on a long term study of almost 2 million children up to the age of 15 who had been born in Sweden between 1991 and 2009.
In all, 6569 of these children from 47 hospitals across the country were diagnosed with celiac disease — a condition in which the small intestine is excessively sensitive to gluten, making it hard to digest food — before the age of 15.
Overall, the risk of diagnosis was around 10% greater among children born in spring (March-May), summer (June-August), and autumn (September-November) than it was among those born in winter (December-February).
But seasonal patterns differed by region. Risk of celiac disease was highest among those born in the south of the country, where sunlight in spring and summer is intense, than it was among children born in the north of the country, where springs are colder and summers shorter.
Furthermore, children diagnosed before the age of 2 seemed to be at increased risk of the disease if they were born in spring, while those diagnosed after this age were at increased risk if they were born in summer or autumn.
Year of birth was categorised into three periods to see if there were any differences in trends: 1991-1996, when there was an epidemic of new cases; 1997-2002 which followed the epidemic; and 2003-2009 when the epidemic had abated.
This showed that children born in 1991-6 were at increased risk of being diagnosed with celiac disease if they were born during the spring, while children born in 1997-2002 were at increased risk if born during the summer and autumn. Those born in 2003-09 were at increased risk if born in the autumn.
Risk of celiac disease was consistently higher among girls than it was among boys for all time periods and seasons.
This is an observational study so no firm conclusions can be drawn about cause and effect, added to which the study authors were unable to glean any information on potentially influential factors, such as infections and vitamin D status.
But they nevertheless speculate about possible explanations for their findings.
“One hypothesis for increased [celiac disease] risk and spring/summer birth is that those infants are more likely to be weaned and introduced to gluten during autumn/winter, a time characterised by exposure to seasonal viral infections,” they write.
Viral infections alter intestinal bacteria and increase the permeability of cells lining the gut, which could prompt the development of celiac disease, they suggest.
In Sweden, it is well known that the yearly epidemics of respiratory syncytial virus, rotavirus, and flu start in the south of the country and move northwards, which might also explain the associations seen, they add.
Low levels of vitamin D have also been linked to immune related diseases, such as multiple sclerosis, inflammatory bowel disease, and type 1 diabetes, although every child in Sweden is given state funded vitamin D supplements from 1 week of age up to the age of 2 years.
“A remaining possible link to sunlight and vitamin D is that pregnant women who give birth in spring have the lowest levels of vitamin D during late gestation when important programming and development of the fetal immune system takes place,” they suggest.
- Anneli Ivarsson et al. Season and region of birth as risk factors for coeliac disease: a key to the aetiology? Archives of Disease in Childhood, August 2016. DOI: 10.1136/archdischild-2015-310122
Despite hopes to the contrary, magic bullets are few in the world of medical science. The exception may be fluoride. It is credited with being the primary factor in a dramatic reduction in dental caries in the last twenty years.
Fluoride is a natural component of minerals in rocks and soils. All water contains fluoride, but it is sometimes necessary to add it to some public supplies to attain the optimal amount for dental health.
The story of how fluoride came to have an essential role in the effort to achieve a cavity-free generation is not one of laboratory science. Rather it is one of careful scientific observation of a naturally occurring phenomenon, followed by wide experimentation. The effect was first noticed in the early part of this century, when researchers observed that persons with "mottled teeth" experienced fewer dental caries than those without the discoloration. The phenomenon was traced to high amounts of naturally occurring fluoride in their drinking water. "Mottled teeth" is also known as fluorosis, a cosmetic defect characterized by white flecks or, in some severe cases, stained teeth.
Fluoride was first introduced into the drinking water of Grand Rapids, Michigan as part of a two-city community trial in 1945. The Grand Rapids undertaking was so successful that the control city of Muskegon soon insisted on fluoridating its water.
Since then, literally thousands of studies on fluorides and fluoridation have been completed, and more than 3,700 studies have been conducted since 1970 alone. The most recent national study on children conducted from 1986-1987 by the National Institute of Dental Research found "a clear and continuing benefit of community water fluoridation in preventing tooth decay."
The fluoridation of drinking water has expanded steadily, as has the use of fluorides in other ways. "Originally viewed as beneficial primarily for children, fluoride agents are now recognized as effective for all ages and of increasing importance in an aging population," according to Irwin Mandel, D.D.S., professor emeritus, Columbia University School of Dental and Oral Surgery. Virtually all toothpaste used in the United States contains fluoride. Fluoride mouthwashes and tablets are used in schools and homes, and topical fluorides are applied in dental offices. Around the world, where fluoridation of water supplies may not be realistic, fluoride-containing toothpaste is used. Unfortunately, those who consume high amounts of bottled water in place of fluoridated tap water may not be receiving its oral health benefits.
Fluoride has not achieved its fame nor results without its share of critics. It has been accused of being illegal, a communist plot, immoral and unconstitutional. It has been blamed for everything from cancer and birth defects to premature aging and, more recently, Alzheimer's disease and AIDS.
However, according to the American Dental Association, "The simple fact remains that there has never been a single valid, peer-reviewed laboratory, clinical or epidemiological study that showed that drinking water with fluoride at optimal levels caused cancer, heart disease or any other multitude of diseases of which it has been accused."
Based on the findings from hundreds of studies, both the National Institute of Dental Research and the U.S. Public Health Service support fluoridation as a safe, effective, equitable means of controlling dental caries. |
Evaluation research is one of the standard social research methods and is used for evaluative purposes. It includes methodology and assessment to provide an objective, systematic, and comprehensive evaluation of social programs. The goal of most evaluation research is not only to assess but also to provide an explanation for the success or failure of the evaluated program. The method is applicable to programs of local, national, and international importance.
Evaluation research, as a rule, consists of five stages, each with certain methodological problems and guiding principles. Conceptualization of the objectives of the researched program is the first but not the least important stage of evaluation research. Identification of the goals to be achieved by the program may sound simple; however, it is crucial for evaluating programs implemented in such critical fields as juvenile delinquency and mental health. At this stage, vague goals should be restated more precisely so that the later stages can assess the program's achievements adequately.
Formulation of the research specifics and the criteria used to prove the effectiveness of the program can be achieved under laboratory conditions. Researchers conduct an experiment that measures the changes produced by the program and their effect on the target audience. It is then crucial to anticipate errors that occur during the experiment, as ideal conditions can never be recreated in the laboratory. At the next stage, researchers estimate the effectiveness of the program, which shows whether the program achieved its goals. Were the costs involved worth the benefits provided by the program? This is how effectiveness is defined. Evaluation research involves not only measuring but also explaining the estimated effectiveness, although this stage may have a rather theoretical application.
Droughts are a farmer’s worst nightmare: Crops meant for the dinner table wither away in the dry heat leaving people hungry and farmers broke.
Not all plants are as sensitive to drought, though, and it is the genetic makeup of these more resilient plants that is of interest to scientists who feel the need to develop crops that can handle drastic shifts in their environments.
U.S and Finnish researchers recently discovered the specific gene responsible for controlling the amount of water released by the plant as it absorbs carbon dioxide-more specifically, the gene that controls the plant’s stomata.
The stomatal pore in a tomato leaf.
All leaves are covered with stomata, which are tiny pores used to suck up carbon dioxide and to release water vapor back into the air.
Some of the ‘hardier’ plants close up their stomatal pores when ozone levels increase.
This reaction also reduces the amount of water lost during the harsher seasons. (It is interesting to note that plants suffer from excessive amounts of ozone even though they thrive in a CO2-rich environment, since they use this specific gas for growth.)
The gene in question controls when the stomata are open or closed. Unfortunately, with their stomata closed, plants are unable to absorb the excessive amounts of CO2 in our atmosphere.
Up to 95% of water loss occurs through these pores while they are open, so manipulating the genetic makeup of plants to increase their sensitivity to droughts (forcing them to close their stomata) could have a positive effect on their survivability: A little water lasts much longer.
This may slow plant growth since CO2 is a necessary component for photosynthesis and plant development (with the stomatal pores closed, less CO2 makes it into the plants’ system), but a smaller plant is still better than a dead one.
Researchers claim that within the next few years plants could be genetically modified to hold on to the precious water that is so hard to come by during a drought, while still being able to absorb the CO2 they need for photosynthesis.
This is a win-win situation: It will allow crops to survive in arid regions while also sequestering the atmosphere’s CO2.
via Science Daily |
Q--I understand that passengers aboard commercial aircraft are subject to higher levels of radiation than people on the ground. That being true, is the exposure less at night or in winter?
A--Most of this radiation stems from the sun and sources outside the solar system, therefore the amount tends to be constant. The main protection is the Earth`s atmosphere, so the higher one flies, the greater the radiation exposure.
Q--Why do conversion tables list the pound, a unit of weight or force, as comparable to the gram, a unit of mass? Should they be using either newtons or slugs instead?
A--Your question exemplifies the confusion arising from the different systems of measurement and the ultimate need for universally accepted units. The pound is a unit of either mass or weight in the avoirdupois system. Likewise the gram is a unit of both mass and weight in the metric system. The distinction between weight and mass is a subtle one.
Weight defines the response of an object to the Earth`s gravity. It is used for such objects as people and groceries. Mass defines the inertia of an object--its resistance to acceleration. Tiny objects, such as atomic particles, are usually described in terms of mass, rather than weight.
For reasons not fully understood, recorded weight and inertial mass are the same. The mass of a person weighing 100 pounds is also 100 pounds.
The newton is a unit of force. One newton will accelerate a mass of 1 kilogram at a rate of 1 meter per second in each second. The slug is a unit of mass that is accelerated by the ``pound force`` (the force the Earth`s surface gravity exerts on a 1-pound mass) at a rate of 1 foot per second in each second. It is roughly equal to 32 pounds. Gravitational acceleration is roughly 32 feet per second in each second.
Unlike rocket boosting, whose effect depends on the mass of a spacecraft, gravity acts uniformly on an object, regardless of its mass. The poundal, rather than the pound, is a unit of force in the avoirdupois system. It accelerates 1 pound at 1 foot per second per second. |
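A rough worked illustration, taking surface gravity as about 32 feet per second in each second: a force of 1 pound gives a 1-slug mass (about 32 pounds) an acceleration of 1 foot per second in each second, while a force of 1 poundal gives a 1-pound mass that same acceleration. A pound of force is therefore roughly 32 poundals, just as, in metric units, the weight of 1 kilogram is about 9.8 newtons.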
Rules for Answering a Document Based Question
1) Read the Historical Context and Task.
2) Use the Historical Context and the Task to create a chart using When/Where/Who/What/Why.
3) Make a list of 5-10 Topic related Vocabulary Words to use in the essay.
4) Answer all the questions under the Documents.
5) Write the essay. Be sure to refer to at least one more than half of the documents in the essay.
1st sentence - When and where is the question taking place?
(example: During the 1700s there was a problem in North America.)
2nd sentence - Who is involved in the question?
(example: The American colonists became angry with the British.)
3rd sentence - What will be described in the essay? (Combine the What and Why on the chart.)
(example: There were many reasons why the colonists decided to rebel from Great Britain.)
the number of documents = the number of body paragraphs
1st sentence - Transition sentence (Make a clear statement about what will be written in the paragraph.) (example: One reason for the rebellion was …)
2nd sentence - Inside statement (Use the following: Document # __ states/shows …)
3rd sentence - Outside statement (Use your knowledge about the document to explain the document.)
1st sentence - BECAUSE (example: The colonists rebelled because …)
2nd sentence - EFFECT (example: The effect the rebellion had was …)
3rd sentence - RESULT (example: The result for the United States was …) |
When you think about ecosystem project ideas, do you immediately think about dioramas in a shoe box, like this one I found on pinterest?
Don’t get me wrong, dioramas are a great way for students to demonstrate their learning but it’s also the most common way. If you are like me, you are always looking for unique ways for students to express what they learned. That’s why I have a variety of ecosystem project ideas!
10 Ecosystem Project Ideas
Create Your Own Ecosystems or Habitats.
Have your students work in groups, research, and then create an ecosystem together. It can be something as simple as collecting pond water, organisms, and plants. You could also have students create individual habitats instead of an entire ecosystem. We created our own habitats and the students really enjoyed it. Together we discussed the importance of meeting our living things’ needs and a healthy environment. We had a habitat for ants, fish, worms, and so much more.
Create a Flap Book.
Provide students with a 12 x 9 strip of construction paper and several index cards (one per ecosystem you are studying). Have students name, draw, and color the ecosystem on the outside of the index card, and on the inside provide valuable information about the ecosystem. When you are done, it will look like this:
Create an Imaginary Ecosystem.
Have students create their own ecosystem, but still require it to have the characteristics of real ecosystems, such as both living and nonliving factors, populations, communities, and so on. Have students determine the food chains and much more. It will definitely require some creative thinking on their part, but it will definitely be fun!
Create an Ecosystem Mobile.
Students love creating mobiles and they make for a cute display. If you can’t find hangers to make mobiles, you can easily use other materials such as sticks (yes, sticks from trees.), dowels (found in craft stores), or paper towel rolls. When creating an ecosystem mobile, you can have students again use index cards like in the example above, designing the outside and describing the ecosystem on the inside. You could also have students get creative and design something that represents that ecosystem, such as a raindrop for the rain forest. Students will love this ecosystem project idea!
Read Around the Room.
Set out many books about ecosystems around the room and students are sure to get excited! Have different locations representing different ecosystems and then move students around from station to station. If you want, you can have students record in a chart or on one big piece of chart paper what they learned about each ecosystem. There are many great books out there on ecosystems.
Create a Scavenger Hunt.
What student doesn’t love a scavenger hunt? To create an ecosystem scavenger hunt, you would just place information about each ecosystem around your room in different locations. For instance in one spot you may have information about deserts and in another location information about grasslands. Then create a few questions for students to answer regarding each ecosystem. Students move around the room reading about each ecosystem and hunt for those questions. It’s a great way to sneak in some reading and just another ecosystem project idea.
Create an Accordion Book.
Can you tell I’m a crafty, foldable kind of gal? I just love hands-on activities and foldables. I think I wrote about this a little in my Going Wild for Ecosystems post. Drag out some construction paper or copy paper and have students fold it in half. Then have them draw the ecosystem at the top and write about its characteristics at the bottom of the half sheet. (See image below).
Do this with each half for however number of ecosystems you are studying. Then connect them all by gluing them (or taping) side by side. (see image above).
Create a Circle Book.
Are you looking for an ecosystem project idea that is easy-peasy? These circle books have been my latest obsession. I’ve even got some created that I haven’t uploaded yet! But just like any of the above, you don’t have to head to my store to purchase them, you could easily create them yourself! Provide each student with one circle per ecosystem you would like them to represent. Then on each circle have them illustrate the ecosystem on the top and describe its characteristics on the bottom. (Sensing a theme?) Then fold each circle in half back to back and glue them together to form your ecosystem circle book.
Project Based Learning.
Are you looking for a way to get in a little PBL? Why not have students design their own ecosystem zoo? (This is a shameless plug!) This project integrates area, perimeter, geometry, and STEM learning in your science classroom. Students work through a series of steps, including research, to design and build a model of their own ecosystem zoo! It’s differentiated and can easily be adapted!
Create a Display Board.
Why not have your students create a display similar to a science fair? In this display, students would take a regular file folder (see image below) and attach pieces that describe the landscape, climate, plants, animals, and food chain/web of the ecosystem. Then have students place a world map in the middle and color all the locations in the world where their ecosystem can be found. This can also be done on a larger scale with an actual triboard.
I actually have this triboard materials (minus the file folder) for you to download FREE! Just click here to download it!
This is just a small sampling of some ecosystem project ideas. If you’re looking to save time, you can find many of these items inexpensively prepared for you in my store here, though you can also create them easily yourself.
Stay tuned because soon I’ll be writing ideas on teaching beyond ecosystems themselves and diving deeper into subcategories. 🙂 |
LinkedIn tells me that it has been over ten years since I started thescienceteacher.co.uk. Today the website houses science teaching resources that I hope challenge students to think deeply about science.
Along the way I’ve also been inspired to write some pages on pedagogy, as I’ve wrestled to better understand what works in science teaching and why. Below I’ve listed ten important pedagogical reflections that have mattered most to me. For those of you keen to implement some of these ideas back at school remember, “everything works somewhere and nothing works everywhere“. Thanks Dylan!
- Novices don’t think like experts. Understanding is about taking new knowledge and making meaning from it by connecting new ideas to pre-existing ones. (Piaget). It’s hard in science because many new ideas can’t be seen and many pre-existing ideas are wrong and resistant to change (Rosalind Driver).
- Great science teaching requires an understanding of progression. An important goal is conceptual change: start with concrete ideas and move to more abstract ones. Be clear on what you want students to know, do and by when.
- Great science teachers explain complex ideas by focusing on big ideas that remove unnecessary noise (Wynne Harlen).
- Science is a discipline that explains the physical world – it’s not just a collection of ideas. Start with an observation and go from there. Be passionate and love what you do. Every chemistry lesson should have a reaction, every biology lesson an organism and every physics lesson a surprise.
- Whole class practical work is probably not the best way to teach most scientific concepts but it is important. You wouldn’t think much of a footballer who couldn’t kick a ball. Focus on the key aspects you want students to learn. You won’t get better at science by just ‘doing science’!
- Don’t make knowledge the enemy of thinking. To apply, explore and predict we need lots of knowledge and this knowledge needs to be remembered. (Daisy Christodoulou). But knowledge cannot be the end goal. We need to think about what actions students are taking with this knowledge.
- Use challenge to motivate students and find out what they know. Be careful of using challenge to teach novices as this can overload students’ working memory.
- Motivation is key – for both teachers and students. Make students feel clever and remember, “you can’t touch their brain until you’ve touched their heart“. Quote from Clever lands.
- Practice is important. Teachers need to rehearse explanations and students need to consolidate through deliberate practice.
- Science teachers can make a difference so “know thy impact” through formative assessment (John Hattie). Formative assessment is most rich when it’s done through a task that makes learning visible so teachers can respond. For me it’s a task that challenges students – if they succeed, we can assume they understand and if they struggle we can intervene. MCQs are useful here. |
What is an adjunct?
Adjuncts are parts of a sentence that are used to elaborate on or modify other words or phrases in a sentence. Along with subjects, verbs, objects, and complements, adjuncts are one of the five main components of the structure of clauses.
A distinguishing feature of adjuncts is that their removal from sentences does not alter the grammatical integrity and meaning of the sentence. In other words, adjuncts expand on the word or phrase that they are modifying, but their presence is not needed for a sentence to function.
Nouns, adjectives, and adverbs can all be adjuncts. However, adverbial adjuncts are the most complex, so we will examine those in greater detail.
Adjuncts are usually adverbs or adverbial phrases that help modify and enrich the context of verbs in the sentence. For example, consider the following sentence:
- “She walked to the park slowly.”
In this sentence, the adjunct is the adverb slowly, which modifies the verb walked. Without this adjunct, the sentence could function on its own and still be grammatically correct. In this case, the sentence would read:
- “She walked to the park.”
There is nothing wrong with this sentence. The reader just doesn’t know at what speed she walked to the park. Here are some other examples of sentences with adverbial adjuncts in them:
- “The soccer team played the game in the rain.”
- “The bowling ball rolled quickly toward the pins.”
- “The man walks by the river often.”
In all of these sentences, the adjunct can also be removed without the sentence losing meaning or grammatical correctness.
Types of modification
Adjuncts can be used to modify words in the sentence in a variety of different ways. Typically, when adjuncts are used in a sentence, they expand on the frequency, place, time, degree, reason, or manner of the word or phrase they are modifying. Here are examples of adjuncts being used to modify all of these things:
- “Every day, the boy played basketball with his friend.” (frequency)
- “The farmer plowed his field once a week.” (frequency)
- “The tourists went to see the sights around the city.” (place)
- “The lakes are beautiful in North Carolina.” (place)
- “At 5:00 PM, the dog went to see if there was food in his bowl.” (time)
- “The game began right after school.” (time)
- “He jumped as high as he could.” (degree)
- “As tall as he was, he still could not reach the top cabinet.” (degree)
- “The plants grew tall because they received a lot of sunshine.” (reason)
- “She was good at math because she practiced a lot.” (reason)
- “The gazelle ran gracefully over the field.” (manner)
- “The river flowed swiftly.” (manner)
Types of adverbial adjuncts
As we can see in the examples above, words, phrases, and even entire clauses can function as adjuncts, and there are several different types that can be used. Single-word adverbs, adverb phrases, prepositional phrases, noun phrases, and adverbial clauses can all be used as adverbial adjuncts.
Here are examples of each type of adverbial adjunct:
- “He left the office quickly.” (single-word adverb)
- “He left the office very quickly.” (adverb phrase)
- “The group went swimming at the beach.” (prepositional phrase)
- “Grandfather will give you your birthday present next month.” (noun phrase)
- “The surfer seemed calm, even though the wave looked huge.” (adverbial clause)
Position of adjuncts
Adjuncts can occur in different sections of the clause; where they are positioned depends on the structure of the sentence. Sometimes it works better to put them into the initial position, sometimes the middle, and sometimes the final. For example, here are some sentences with adjuncts in different positions:
- “We arrived at noon.” (final position)
- “The salmon quickly swam.” (middle position)
- “In the middle of the meadow, there was a patch of daisies.” (initial position)
Sentences can also have more than one adjunct appearing in different parts of a clause. For example:
- “At the playground, the children ran quickly.”
In this sentence, both at the playground, and quickly are adjuncts. Both of these adjuncts modify the clause the children ran.
Another important note about adjuncts is that if they are placed too far away from the word or phrase they are modifying, or too near to another word or phrase, there can sometimes be confusion about what they are modifying. These are known as misplaced modifiers. For example, consider this sentence:
- “Reading books frequently improves intelligence.”
In this sentence, it is difficult to tell if frequently is modifying reading books or improving intelligence. Placing the adjunct in a better position will improve the clarity of the sentence. For example:
- “Frequently reading books improves intelligence.”
Noun Adjuncts and Adjectival Adjuncts
Adjuncts can also be nouns or adjectives. These occur so commonly, though, that they rarely need to be identified. Nevertheless, let’s look at what constitutes noun adjuncts and adjectival adjuncts.
Noun adjuncts are nouns that are used to modify other nouns. The resulting phrase is called a compound noun. For example:
- “The boy played with his toy soldier.”
In this sentence, toy is the noun adjunct, and it modifies the word soldier, creating the compound noun toy soldier. The meaning of the sentence would change if we left out toy, but the sentence would remain grammatically correct.
Noun adjuncts can also create single-word compound nouns, as in policeman, where the word police modifies the word man.
Adjectival adjuncts are just adjectives that come immediately before the noun they describe. They are more commonly referred to as attributive adjectives. They too can be removed without compromising grammatical correctness. Here is an example of an adjectival adjunct:
- “The white cat climbed onto the table.”
In this sentence, white is the adjectival adjunct, and it modifies the word cat. Again, leaving it out does not affect the grammar of the sentence. However, if we said, “The cat that is white climbed onto the table,” the adjective is no longer an adjunct because it is integral to the grammar of the sentence.
|
According to The Tech Museum of Innovation, DNA is soluble in water because the sugar and phosphate molecules that make up the DNA backbone are hydrophilic. DNA bases are hydrophobic but are protected from the water by the DNA backbones of the two DNA strands.
In order for a molecule to be soluble in water, it needs to be a polar molecule or have a charge. H2O is a bent molecule with the oxygen located in the middle. Oxygen is more electronegative than hydrogen, so it attracts the electrons more strongly, resulting in a partial charge difference between the oxygen and hydrogen atoms. These charge differences cause the hydrogen and oxygen of different water molecules to be transiently attracted to one another in hydrogen bonds.
Polar molecules have atoms that are able to form hydrogen bonds, typically hydroxyl (-OH) or carbonyl (C=O) groups. The DNA backbone consists of alternating ribose (sugar) and phosphate molecules. Phosphate is negatively charged, which is why DNA macromolecules are predominately negative. Ribose has multiple hydroxyl groups that are able to form hydrogen bonds with water.
Interestingly, the twist in double-stranded DNA is caused by the bases of DNA being hydrophobic while the backbones are hydrophilic. The twist compresses the bases closer together and prevents water from getting into the middle of the molecule.
|
Use Units to Understand and Solve Problems
Videos and lessons to help High School students learn how to use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in graphs and data displays.
Common Core: HSN-Q.A.1
Common Core - Number and Quantity
Common Core for Mathematics
Dimensional Analysis - explained - (part 1)
This video explains the fundamental concepts of dimensional analysis and demonstrate how important they are to solving problems.
Dimensional Analysis - explained - (part 2)
This video further explains the fundamental concepts of dimensional analysis and show how important they are to solving problems.
Solving Dimensional Analysis Problems - Unit Conversion Problems
This video works through two standards dimensional analysis problems. Dimensional analysis is a process to solve unit conversion problems.
Step 1: Write the two givens (beginning and ending)
Step 2: Find the middle with conversion factors
Step 3: Make sure the units cancel out
Step 4: Solve the problem. (A worked sketch of these four steps appears below.)
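As a rough illustration of those four steps, here is a small sketch. The quantities and conversion factors are sample values for a miles-per-hour to meters-per-second conversion, chosen only for illustration and not taken from the videos above:

```python
# Step 1: write the two givens -- the starting quantity and the target units.
speed_mph = 60.0                  # given: 60 miles per hour; target: meters per second

# Step 2: find the middle with conversion factors.
METERS_PER_MILE = 1609.344        # 1 mile = 1609.344 m
SECONDS_PER_HOUR = 3600.0         # 1 hour = 3600 s

# Step 3: arrange the factors so the unwanted units cancel:
#   60 mi/hr x (1609.344 m / 1 mi) x (1 hr / 3600 s) -> m/s
speed_mps = speed_mph * METERS_PER_MILE / SECONDS_PER_HOUR

# Step 4: solve the problem.
print(f"60 mi/hr is about {speed_mps:.2f} m/s")   # roughly 26.82 m/s
```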
Dimensional Analysis - Convert Two Units.
Using Dimensional Analysis to Solve Math Problems
Baking Problem, Yardstick Problem, Football Field Length Problem.
Dimensional Analysis - Interpret Units in Formulas
This video explains how dimensional analysis can be used to decide whether an expression/formula represents a length, area, volume or none of those.
Interpreting Line Graphs.
Drawing and Interpreting Line Graphs.
|
Most materials fall into two categories: conductors and insulators, which conduct or do not conduct electricity, respectively. How strongly a material opposes the flow of electricity is measured by its resistance. Electricity flows when electrons are raised from a lower energy level to a higher energy level in the atoms of the material.
[Figure: Electron bands (Britney's Guide, 2005)]
Semiconductors are insulators at very low temperatures, but at a suitable temperature, the additional thermal energy allows electrons to jump into the conduction band.
In atoms, electrons are divided into energy levels which tend to form bands. The highest filled energy level that electrons occupy is called the valence band, and the first level above the valence band is known as the conduction band. The valence electrons do not participate in conduction. In metals, the valence and conduction bands overlap and allow free electrons to participate in conduction; insulators, however, have an energy gap that is far greater than the thermal energy of the electrons. Semiconductors are somewhere between these two extremes, having an energy gap around 1 eV. (Britney's Guide)
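To see why a gap of roughly 1 eV puts semiconductors between metals and insulators, it helps to compare the gap with the thermal energy available at room temperature (about 0.026 eV). The sketch below is only an order-of-magnitude illustration using a simple Boltzmann factor rather than a full carrier-statistics calculation, and the band-gap values are approximate figures assumed for germanium, silicon, and a generic insulator.

```python
import math

K_B_EV = 8.617e-5   # Boltzmann constant in eV per kelvin
T = 300.0           # roughly room temperature, in kelvin
kT = K_B_EV * T     # thermal energy, about 0.026 eV

# Approximate band gaps in eV (illustrative values only).
band_gaps = {"germanium": 0.67, "silicon": 1.1, "typical insulator": 6.0}

for material, gap in band_gaps.items():
    # Crude Boltzmann estimate of how likely an electron is to be thermally
    # excited across the gap (ignores density-of-states and other details).
    relative_excitation = math.exp(-gap / (2.0 * kT))
    print(f"{material:17s} gap {gap:4.2f} eV -> relative excitation ~ {relative_excitation:.1e}")
```

Even this rough estimate shows the enormous spread: a modest increase in the gap makes thermal excitation across it vastly less likely, which is why the much larger gap of an insulator effectively blocks conduction.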
Semiconductors can be elements, such as germanium and silicon, or compounds. Also, some semiconductor materials can be intrinsic (pure semiconductors) or extrinsic. An intrinsic semiconductor is normally used in conjunction with thermal or optical excitement to raise electrons from the valence band to the conduction band.
[Figure: Intrinsic, n-type, and p-type semiconductors (Britney's Guide, 2005)]
An extrinsic semiconductor is formed by "doping" the intrinsic semiconductor material with a VERY small number of impurity atoms. Doping takes an intrinsic semiconductor and adds or subtracts electrons from the material, creating or destroying "holes". These holes flow in the opposite direction to the electrons and are the result of an electron leaving its parent atom, creating a vacancy for another electron to fill. The number of electrons headed in one direction is equal to the number of holes traveling in the opposite direction. Doping which creates holes is called p-type and doping which adds electrons to the semiconductor is called n-type. (Britney's Guide)
The most common n-type dopants for silicon are phosphorus and arsenic, while the most common p-type dopant for silicon is boron. When these two doping techniques are combined, a p-n junction is created. This junction is the basis of an electronic device called a diode.
The production of semiconductors requires a high degree of not only chemical purity but also crystalline perfection. The crystalline structures of semiconductors allow them to carry charge efficiently. However, if there are flaws in the structure, unwanted energy states can form inside the lattice, interfering with the electrical operation of the semiconductor.
|
What is Lymphatic Filariasis?
Lymphatic filariasis is a debilitating disease caused by nematode worms of the genera Wuchereria and Brugia. Larval worms circulate in the bloodstream of infected persons, and adult worms live in the lymphatic vessels. Lymphatic filariasis is not life threatening, but it does cause discomfort, swelling of the limbs and genitals, damage to the kidneys and lymphatic system, impairment of the body's ability to fight infection, and general malaise. In addition, it causes immeasurable emotional and economic costs in terms of the disruption of family and community life. Approximately 120 million people in the world have the disease, and infection rates are increasing with the continued expansion of urbanization that is underway in the tropics.
How do people contract Lymphatic Filariasis?
Humans contract filariasis when they are bitten repeatedly by mosquitoes infected with filarial worms. Over 70 species of mosquitoes in the genera Culex, Anopheles, Aedes, and Mansonia can infect humans with the disease. Mosquitoes pick up the tiny, microfilarial form of the parasite when taking blood meals from infected humans. In the mosquito, the microfilariae develop within 7-21 days into members of the next stage of the parasite's life cycle, which are known as filariform larvae. The filariform larvae are infective to humans. When the larval worms move to a mosquito's mouth, and then the mosquito bites humans, the parasites can spread through a human community. Fortunately, however, many bites from infected mosquitoes are required before a person is infected with the disease.
Once a human does pick up filariform larvae from mosquito bites, the larvae move to the lymphatic system, where they develop into adult worms. It usually takes 8-16 months after infection for symptoms of the disease to appear. The life span of adult worms is approximately seven years (microfilariae have a life span of 3-36 months). The adults range in size from 2-50 cm in length. In the human, the adult worms mate and then the females produce millions of new microfilariae, which then circulate in the bloodstream. Microfilariae circulating in the bloodstream can then be picked up by mosquitoes taking blood meals. In most endemic regions, microfilariae show peak abundance in the human bloodstream between 10 p.m. and 2 a.m., which corresponds with the time when Culex mosquitoes are most active. In some regions of the South Pacific, however, where the vectors of filariasis are active primarily in the daytime, microfilariae are most abundant during the day. These observations are consistent with the hypothesis that the microfilariae-abundance cycle in the bloodstream has evolved to maximize transmission to mosquitoes.
What is the geographic distribution of Filariasis?
Lymphatic filariasis occurs in the tropics of India, Africa, Southern Asia, the Pacific, and Central and South America. The largest fraction of cases occurs in Southeast Asia, with the second largest fraction occurring in Africa. The disease has increased in frequency with a global expansion of urbanization; urbanization brings an increase in breeding sites for vector mosquitoes such as Culex pipiens.
How can Lymphatic Filariasis be Treated and Controlled?
Screening. Screening for the disease has traditionally been difficult, requiring a microscopic examination of a blood sample. Often, this blood sample had to be collected in the middle of the night in order to correspond with the time of peak microfilariae abundance. However, a simple effective ELISA test for antigens of the parasite in blood samples collected any time of the day is now available, making screening far easier.
Treatment. Treatment of filariasis involves two components: (1) getting rid of the microfilariae in people's blood, so that the transmission cycle can be broken and (2) maintaining careful hygiene in infected persons to reduce the incidence and severity of secondary (e.g., bacterial) infections. Anti-filariasis medicines commonly used include diethylcarbamazine, which reduces microfilariae concentrations and also kills adult worms, albendazole, which kills adult worms, and ivermectin, which kills the microfilariae produced by adult worms. The disease is usually treated with single-dose regimens of a combination of two drugs, one targeting microfilariae and one targeting adult worms (i.e., either diethylcarbamazine and albendazole, or ivermectin and albendazole). If a high enough coverage of anti-filariasis drug treatment can be achieved (treating greater than 80% of the people in a community), the disease can be eradicated from an area. Attempts to eliminate the disease are being helped considerably by Merck and Co., which is donating ivermectin to treatment efforts, and Smith Kline Beecham, which is donating albendazole. The widespread treatment of populations in endemic areas with albendazole has the added benefit of reducing the incidence of intestinal parasite infections, which will serve to dramatically improve the health of individuals suffering those infections, particularly women and children. Attempts to reduce, and eventually eliminate, lymphatic filariasis will be facilitated by the fact that humans are essentially the only reservoirs, and that the parasite does not increase in numbers in mosquitoes, but only in humans. In addition, the inefficiency with which filariasis is transmitted (many bites from infected mosquitoes are required to infect a human) further improves the chances of eradicating the disease.
[This description of medicines is given for general information purposes only; contact your health care provider for details on specific treatment options.]
Vector Control. Control of lymphatic filariasis rests in part on control of mosquito vectors. Covering water-storage containers and improving waste-water and solid-waste treatment systems can help by reducing the amount of standing water in which mosquitoes can lay eggs. In addition, killing eggs (oviciding) and killing or disrupting larva (larviciding) in bodies of stagnant water can further reduce mosquito populations. People in endemic areas can reduce the probability of being infected with filariasis by decreasing the number of times they are bitten by mosquitoes. Such personal protection measures can include wearing long sleeves, applying insect repellent, using insecticide-impregnated bed nets, and remaining inside when mosquitoes are most active.
Where can you find out more about Lymphatic Filariasis?
- Dreyer, G., Noroes, J., Figueredo-Silva, J., Piessens, W. F. 2000. Pathogenesis of lymphatic disease in bancroftian filariasis: a clinical perspective. Parasitology Today 16: 544-548.
- Ottesen, E. A. 2000. The global programme to eliminate lymphatic filariasis. Tropical Medicine & International Health 5: 591-594.
- Cox, F. E. 2000. Elimination of lymphatic filariasis as a public health problem. Parasitology Today 16: 135.
- Lalitha, P., Ravichandran, M., Suba, S., Kaliraj, P., Narayanan, R. B., Jayaraman, K. 1998. Quantitative assessment of circulating antigens in human lymphatic filariasis: a field evaluation of monoclonal antibody-based ELISA using blood collected on filter strips. Tropical Medicine & International Health 3: 41-45.
- Nutman, T. B. (Ed.). 2000. Lymphatic Filariasis. Imperial College Press, London.
- Guerrant, R. L., Walker, D. H., and Weller, P. F. (Eds.) 2001. Essentials of Tropical Infectious Diseases. W. B. Saunders, Philadelphia.
- Beaty, B. J., and Marquardt, W. C. (Eds.) 1996. The Biology of Disease Vectors. Univ. of Colorado Press, Niwot, Colorado.
- The World Health Organization
- UNDP-World Bank-WHO-Special Programme for Research and Training in Tropical Diseases
- The Centers for Disease Control
--Prepared by John P. Roche--
Last updated: March 1, 2002 |
Why do people live near volcanoes?
At first it may seem odd that people would want to live close to a volcano. After all, volcanoes have a nasty habit of exploding, discharging liquid rock, ash, poisonous gasses, red hot clouds of embers, and generally doing things that kill people. Yet, throughout history, people have deliberately chosen to risk all those hazards and live near them, even on the slopes of active volcanoes that have erupted within living memory.
They chose to live close to volcanoes because they felt that the advantages outweighed the disadvantages. Most volcanoes are perfectly safe for long periods in between eruptions, and those that do erupt more frequently are usually thought of, by the people who live there, as being predictable.
Today, about 500 million people live on or close to volcanoes. We even have major cities close to active volcanoes. Popocatapetl (pronounced poh-poh-kah-teh-peh-til) is a volcanic mountain less than 50 miles from Mexico City in Mexico.
In short, the main things that attract people to live near active volcanoes are minerals, geothermal energy, fertile soils and tourism.
Let's look at each one...
Minerals
Magma rising from deep inside the earth contains a range of minerals. As the rock cools, minerals are precipitated out and, due to processes like the movement of superheated water and gases through the rock, different minerals are precipitated at different locations. This means that minerals such as tin, silver, gold, copper and even diamonds can be found in volcanic rocks. Most of the metallic minerals mined around the world, particularly copper, gold, silver, lead and zinc, are associated with rocks found deep below extinct volcanoes. This makes the areas ideal for both large scale commercial mining and smaller scale local activities by individuals and small groups of locals. Active and dormant volcanoes have the same mineralisation, so like extinct volcanoes, they are rich sources of minerals.
Hot gasses escaping through vents also bring minerals to the surface, notably sulphur, which collects around the vents as it condenses and solidifies. Locals collect the sulphur and sell it.
Geothermal energy means heat energy from the earth. It's unusual to use the heat directly, by building your house on top of a steam vent for example, because it's unpredictable, dangerous and messy.
The heat from underground steam is used to drive turbines and produce electricity, or to heat water supplies that are then used to provide household heating and hot water. Where steam doesn't naturally occur it is possible to drill several deep holes into very hot rocks, pump cool water down one hole and extract steam from another hole close by.
The steam isn't used directly because it contains too many dissolved minerals that could precipitate out and clog pipes, corrode metal components and possibly poison the water supply.
Countries such as Iceland make extensive use of geothermal power, with approximately two thirds of Iceland's electricity coming from steam powered turbines. New Zealand and to a lesser extent, Japan, also make effective use of geothermal energy.
Fertile soils
Volcanic rocks are rich in minerals, but when the rocks are fresh the minerals are not available to plants. The rocks need thousands of years to become weathered and broken down before they form rich soils. When they do become soils though, they form some of the richest ones on the planet. Places such as the African Rift Valley, Mt Elgon in Uganda, and the slopes of Vesuvius in Italy all have productive soils thanks to the breaking down of volcanic rocks and ash. The Naples area, which includes Mount Vesuvius, has such rich soils thanks to two large eruptions 35,000 and 12,000 years ago. Both eruptions produced very thick deposits of ash and broken rocks which have weathered to rich soils. Today, the area is intensively cultivated and produces grapes, vegetables, orange and lemon trees, herbs, flowers and has become a major tomato growing region.
Tourism
Volcanoes attract millions of visitors every year, for different reasons. As an example of the wilder side of nature, there are few things that can beat seeing an erupting volcano blowing red hot ash and rock thousands of feet into the air. Even the less active ones that are just puffing out steam and smoke are impressive sights and attract tourists from around the world.
Around the volcano may be warm bathing lakes, hot springs, bubbling mud pools and steam vents. Geysers are always popular tourist attractions, such as Old Faithful in the Yellowstone National Park, USA. Old Faithful is such a popular tourist feature that it even has its own 24 hour Old Faithful webcam.
Iceland markets itself as a land of fire and ice, attracting tourists with a mix of volcanoes and glaciers, often both in the same place. The wild, raw and barren volcanic landscapes also attract tourists who want to see what the early planet may have looked like.
Tourism creates jobs in shops, restaurants, hotels and tourist centres / national parks. Local economies can profit from volcanism throughout the year, whereas skiing, for example, has only a limited winter season.
In Uganda, a country trying hard to increase its tourist industry, the volcanic region around Mt Elgon is being heavily promoted for its landscape, huge waterfalls, wildlife, climbing and hiking, and its remote 'get away from it all' location. |
Soil Moisture Meter
- Surface irrigation: In this method, one has to rely on gravitational power to push the water from one location to another. In such extreme cases where gravitational power cannot push the water into the required location, pumps can be used as an alternative.
- Seepage Irrigation: It is a type of irrigation where crops get water from underground resources. This type of irrigation is used in places where a high water table exists.
- Sprinkler Irrigation: It is a type of irrigation where the water is passed through a series of pipes and then sprinkled over the crops. The sprinklers rotate so that an equal amount of water can be sprayed everywhere.
- Drip irrigation system: It is a typical irrigation system where, as the name suggests, water falls on the crop drop by drop, directly over the roots.
The old probe-and-feel method of judging the moisture content of the soil has been replaced by a new technology called the soil moisture meter. This instrument determines the moisture level in the soil and hence helps us manage water efficiently.
Moisture is very important for any kind of soil in order to grow good crops. A soil moisture meter is a device used to measure the percentage of water content present in the soil. It is an essential device for anyone involved in agriculture and related fields. The device measures the moisture content present in the soil and thus gives the user an idea of how much water needs to be supplied to the plants being grown at the time, which ensures healthy growth. Any farmer, or any person who maintains a garden, can use a soil moisture meter to measure the humidity of the soil. It is not very costly and is quite a user-friendly device for anyone to use efficiently. When put to use, the device shows different ranges of water content in the soil with color indicators, each color denoting a particular range. There are various models available, and each works on a different principle: some use electrical resistance blocks and others use thermal dissipation blocks. The former rely on an electrical resistance sensor, while the latter use temperature sensors.
Commercial farmers grow different types of crops on a very large scale. They cannot afford to bear huge losses. For the plants to give a better yield and generate more profit, maintaining a healthy soil and environment is the main requirement. One of the factors that determines soil health is its dampness. It is very important to monitor the dampness throughout the life of the plant to make sure that the right amount of water is supplied when the soil is dry, and to allow a dry period if the soil dampness has increased for some reason.
A soil moisture meter is a must-have device for any person involved in agriculture or any related field, as it indicates how much and when to water the plants to ensure their continued healthy development. |
Learning to identify common wild birds can be a rewarding process. Not only will you learn more about the birds that inhabit your backyard, but you'll also be able to more fully appreciate the birds you encounter on a daily basis. Bird identification enthusiasts (called "birders") often make it their goal to see and identify every type of bird in their area. Many birders keep a life list of all the birds they have seen. Novice birders often find that identifying common wild birds is a logical start to this hobby.
Things you need
- Field guide to area birds
- Binoculars (optional)
Use a field guide to local birds to determine what types of birds are common to your area and to learn the type of vocabulary you'll need to use when describing birds. Most field guides include a glossary that can help you become as specific as possible when describing the parts of a bird.
Find a bird that you want to identify, and position yourself so that you can see it well. If the bird is difficult to clearly see, use binoculars to get a better view.
Take notes on the bird's size and shape. You do not need to know the bird's exact size, but rather its general size relative to other birds. Is the bird you see closer to the size of a raven, a robin, or a sparrow, for example? When determining the bird's shape, consider the size of its head relative to its body, whether it is compact or lanky, if it has any crests, the size and shape of its tail and bill, and the length of its legs and neck.
Note the bird's colouration. It is important to be as specific as possible. Note any changes in colour from one body part to the next, and write down any markings that you see.
Notice the behaviour of the bird. Does the bird hop along the ground or scurry vertically up a tree trunk?
Note the habitat that you saw the bird in. Not all birds can be found in all habitats, so it is important to note whether you came across the bird in a field, forest, near a body of water or at a backyard feeder, for example.
Compare the notes you took to possible candidates in your field guide to determine what kind of bird you have seen.
Tips and warnings
- Some people draw a sketch of an unknown bird or take a photo. If you do sketch a bird, try to be as detailed and accurate as possible, and use the sketch as a supplement to notes, not a replacement.
- Do not get discouraged if you have difficulty identifying common birds. Identifying birds is a skill that takes time to develop.
- If you have trouble with identification, check with your local birding group or conservation centre. Experts are normally more than happy to help novice birders with identification.
- Never attempt to touch or harass any wild bird.
|
Learning about bats and rabies
Most bats don't have rabies. For example, even among bats submitted for rabies testing because they could be captured, were obviously weak or sick, or had been captured by a cat, only about 6% had rabies.
Just looking at a bat, you can't tell if it has rabies. Rabies can only be confirmed in a laboratory. But any bat that is active by day or is found in a place where bats are not usually seen (like in your home or on your lawn) just might be rabid. A bat that is unable to fly and is easily approached could very well be sick.
Bats and human rabies in the United States
Rabies in humans is rare in the United States. There are usually only one or two human cases per year. But the most common source of human rabies in the United States is from bats. For example, among the 19 naturally acquired cases of rabies in humans in the United States from 1997-2006, 17 were associated with bats. Among these, 14 patients had known encounters with bats. Four people awoke because a bat landed on them and one person awoke because a bat bit him. In these cases, the bat was inside the home.
One person was reportedly bitten by a bat from outdoors while he was exiting from his residence. Six people had a history of handling a bat while removing it from their home. One person was bitten by a bat while releasing it outdoors after finding it on the floor inside a building. One person picked up and tried to care for a sick bat found on the ground outdoors. Three men ages 20, 29 and 64 had no reported encounters with bats but died of bat-associated rabies viruses.
Why didn’t these people get the rabies vaccine?
In some cases, persons who died of rabies knew they were bitten by a bat. They didn't go to a doctor, maybe because they didn't know that bats can have rabies and transmit it through a bite.
In other cases, it's possible that young children may not fully awaken due to the presence of a bat (or its bite) or may not report a bite to their parents. For example, one 4-year-old patient, who died of rabies, was still sleeping when her caregivers checked on her because they heard strange noises. They found a bat on the floor of her bedroom. She was most likely bitten and did not fully awaken. This patient developed tingling and itching on her neck at what was probably the site of a bat bite as she became sick with rabies a few weeks later.
In another case, a 10-year-old child removed a bat from his bedroom without adult supervision and several months later developed tingling and itching on his arm and one side of his head as he became sick with rabies.
Rabies is a fatal disease. Each year, tens of thousands of people are successfully protected from developing rabies through vaccination after being bitten by an animal like a bat that may have rabies. There are usually only one or two human rabies cases each year in the United States, and the most common way for people to get rabies in the United States is through contact with a bat.
Those people didn't recognize the risk of rabies from the bite of a wild animal, particularly a bat, and they didn't seek medical advice. Awareness of the facts about bats and rabies can help people protect themselves, their families, and their pets. This information may also help clear up misunderstandings about bats.
Teach children never to handle unfamiliar animals, wild or domestic, even if they appear friendly. "Love your own, leave other animals alone" is a good principle for children to learn.
Wash any wound from an animal thoroughly with soap and water and seek medical attention immediately.
Have all dead, sick, or easily captured bats tested for rabies if exposure to people or pets occurs.
Prevent bats from entering living quarters or occupied spaces in homes, churches, schools, and other similar areas where they might contact people and pets.
|
On this day 117 years ago, the Imperial Patent Office in Berlin granted the German pharmaceutical company Friedrich Bayer & Co. the patent for Aspirin, its chosen brand name for acetylsalicylic acid, a painkilling drug. In no time it would become the most common drug in households all over the world.
In its primitive form, salicylic acid had been part of common folk medicines for a long time, as it can be found in the bark of several trees (e.g. willow trees), in certain fruits, grains and vegetables. Hippocrates, for example, told his patients to drink willow-leaf tea or chew bits of willow bark, as this would relieve pain and reduce fever. A closer study of the bark's properties began in the 18th century, and a chemical investigation of its healing powers was pursued in earnest when Napoleon's continental blockade prevented the import of Peruvian cinchona-tree bark (another natural source of salicylic acid), already in medicinal use. Salicin, extracted from the willow bark, was eventually commercialised by the German Heyden Chemical Company for the treatment of pain and fever. Unfortunately, a prolonged intake of this drug severely upset the stomach, causing nausea, vomiting, bleeding and ulcers.
In 1895, Arthur Eichengrün (1867-1949), head of Bayer’s chemistry research laboratory, gave Felix Hoffmann (1868-1946), a chemist at the lab, the task of finding a form of salicylic acid that had all its benefits but none of the negative side-effects. Hoffmann, whose rheumatic father was one of the victims of these negative drug effects, managed to chemically modify salicylic acid in 1897. The result was acetylsalicylic acid, a derivative that could be easily absorbed by the human body without losing any of the therapeutical benefits of the original drug. Unfortunately, Heinrich Dreser, the chemist in charge of the standardised testing of pharmaceutical agents, did not believe in the superior quality of this new drug and refused to do further testing. Eichengrün then sent the drug to various local hospitals. Their feed-back left no doubt as to the superiority of this analgesic in relation to other salicylates then in use. Under pressure, Dreser had to relent and proceed with the testing. Ironically, it was he who in 1899 published the first article on Aspirin and its benefits in the journals Die Heilkunde and Therapeutische Monatshefte.
On March 6 that year, Bayer received the patent rights for Aspirin. The brand name, incidentally, derived from ‘a’ for ‘acetyl’, ‘spir’ from the ‘spiraea’ plant (a source of salicin) and the suffix ‘in’, typical of medicine names. Bayer at once began to sell the drug in powdered form. In 1900, Aspirin appeared for the first time in tablet form on the market, which greatly helped its popularity. In 1915, Aspirin could be bought without a prescription, thus becoming one of the first over-the-counter, mass market household drugs in the world. It changed the way both doctors and patients dealt with pain and illness.
In the 1930s, Bayer credited Felix Hoffmann with the invention of Aspirin, although research in the 1990s proved that the leading chemist Arthur Eichengrün, of Jewish origin, had been a key figure from the beginning of Hoffmann's research to Aspirin's final success. More on this controversy: Walter Sneader's report, Bayer's press release or "Edward Stone and aspirin".
The following sites may be of interest as well:
- Daniel Goldberg’s article about the history of aspirin
- History’s “On this Day”
- Details from the Aspirin Foundation
- Aspirin and its benefits, for example, in “Matters of the Heart”, or for women’s health. |
The mysterious tilt of the moon's orbit might come from an angled, giant impact that vaporized most of the early Earth, creating the moon in the process, a new study finds.
Earth and the other major planets of the solar system follow orbits around the sun that mostly lie within a thin, flat zone defined by the sun's equator. This is likely because these worlds arose from a protoplanetary disk of gas and dust encircling the sun's midriff.
Oddly, the moon's orbit is slightly inclined compared to Earth's orbit around the sun, by about 5 degrees. Until now, scientists could not reconcile the moon's tilt with the leading theory of how the moon formed. [Here's How The Moon Was Made (Video)]
Previous research suggested that during the early days of planet formation, the newborn Earth grazed a Mars-size rock called Theia (named after the mother of the moon in ancient Greek mythology). Debris from the impact later helped form the moon.
This giant-impact hypothesis seemed to explain many details about the moon and Earth, such as the large size of the moon compared with Earth and the rates of rotation of the two bodies. However, in the past 15 years, new evidence has challenged scientists to rework the details of this scenario.
In 2001, scientists began discovering that terrestrial and lunar rocks had more in common than expected: Earth and the moon possessed extremely similar levels of many isotopes. (Isotopes are versions of the same element with different numbers of neutrons.)
Prior work suggested that planetary bodies that formed in different parts of the solar system generally have different isotopic compositions. The isotopic similarities of Earth and the moon threw the giant-impact hypothesis into crisis because previous computer simulations of the collision predicted that 60 to 80 percent of the material that coalesced to form the moon came from Theia rather than Earth. It seemed unlikely that Theia happened to have virtually the same isotopic composition as Earth.
The latest version of the giant-impact hypothesis seeks to resolve this crisis by suggesting that an extraordinarily high-energy impact created the moon — one so violent that it vaporized not just Theia but also most of Earth, down to the young planet's mantle region (the layer just above the core). This dense vapor then formed a cloud more than 500 times bigger than today's Earth. Much of this material would have fallen back onto Earth as it cooled, but some of the debris would have gone on to form the moon. Previous research suggested that the material from Earth and Theia would have mixed together in the cloud, helping to explain why Earth and the moon have similar isotopic compositions.
One feature of this new model is that Earth was spinning very quickly after it got hit, taking maybe only 2 to 3 hours to complete a day. Prior work suggested that over billions of years, gravitational interactions between Earth and the moon slowed down both their rates of rotation, helping to explain why Earth now takes about 24 hours to complete a day.
However, until now, this new model could not explain the strange tilt of the moon's orbit.
"The inclination of the moon's orbit has been a major unsolved problem with the Earth-moon system," said Sarah Stewart, a planetary scientist at the University of California, Davis and a senior author of the new study. "With a giant impact, the moon forms from a disk around the moon's equator, and even though the dynamical evolution of the system is complicated, if the moon started near the Earth's equator, we expect that it should stay near the Earth's equator as it moves away from the Earth over time — but we instead see this 5-degree inclination," she told Space.com.
Now, Stewart and her colleagues suggest that the answer may be that the giant impact that created the moon hit Earth at a highly slanted or oblique angle. [How the Moon Evolved: A Photo Timeline]
"What's beautiful about this work is that we can end up with the current state of the moon — its orbit, its chemistry — with just one step, without invoking any other event," Stewart said. "We don't invoke a sequence of events that needs to be just right to explain the moon's current state."
The scenario the researchers modeled involves a complicated dance among Earth, the moon and the sun. It begins with the giant impact that formed the moon. That collision left Earth spinning very quickly, so much so that its shape became squashed, with its diameter at the equator twice its diameter from pole to pole. The impact also tilted Earth so that its axis of spin was slanted by about 70 degrees compared with the sun's axis of spin.
As the moon slowly pulled away from Earth over time and both their rates of rotation slowed down, the moon reached a point called the "Laplace plane transition," where the influence of Earth on the moon became less important than gravitational forces from the sun. This led the sun to help slow the Earth's rate of spin, the researchers explained.
The process of the moon crossing the Laplace plane transition slanted Earth so that its axis of spin was more upright, to about its current tilt of 23.5 degrees compared with the sun's axis of spin. This in turn led the moon to orbit Earth at a high angle of about 30 degrees, Stewart said.
As the moon continued to slowly move away from Earth, it reached another milestone, the "Cassini state transition," wherein the gravitational pull of Earth influenced the angle of the moon, the researchers said.
"Because the Earth is tilted, gravitational forces between the Earth and moon are not equal at the poles and equator," Stewart said. "The net effect of that is to lower the inclination of the moon's orbit to its current 5 degrees."
The likelihood that the early Earth was hit with the right properties to explain the current tilt of the moon's orbit "is something like 30 percent," Stewart said. "It's reasonably likely."
Future research will pinpoint whether such an impact can help to explain the current chemistry of the moon and Earth, Stewart said.
The scientists detailed their findings online Oct. 31 in the journal Nature. |
Calculating the Number of Homes Powered by Solar Energy
The U.S. solar industry is growing at an unprecedented rate. There were 3,313 megawatts of direct current (MWdc) flat plate photovoltaic (PV) capacity installed in 2012, 76% more than installation levels in 2011. With over 8,500 MW of cumulative solar electric capacity, solar energy generates enough clean electricity to power over 1.3 million average American homes.
As solar becomes a more significant piece of the U.S. energy generation mix, it is important to understand just how many homes a megawatt of solar capacity can power. Below, we share how SEIA estimates the number of homes powered per megawatt of installed solar capacity, both photovoltaic (PV) and concentrating solar power (CSP), and the variables that need to be considered in this calculation.
Differences Between States
The average number of homes powered per MW of PV varies from state to state due to a number of factors including:
- average sunshine (also called insolation),
- average household electricity consumption, and
- temperature and wind
Solar resources are abundant across the United States. The state of Alaska receives the same amount of irradiance as Germany, the world leader in PV deployment. But each state receives different amounts of sunlight, which ultimately impacts the amount of energy generated by a solar energy system.
The chart below shows how solar energy system performance varies across different states.
System Performance Variance
The National Renewable Energy Lab’s PVWatts PV performance estimation tool uses solar resource measurements along with weather and other variables to estimate a PV system’s energy production. PVWatts version 1 provides system performance estimates from more than 200 testing locations across the U.S. SEIA used averages from all testing locations in each state to produce a state average estimate.
Electricity Consumption per Household by State
Electricity consumption varies significantly across all states due in part to differences in demographics, home size and characteristics, and weather. For example, a homeowner in a state like North Carolina, with long, hot, and humid summers, uses more than twice as much electricity each year as a homeowner in New York State, where summers are shorter and relatively cooler.
The chart to the right highlights the differences in electricity consumption per household across a sample of states. Megawatt-hours consumed annually per home were provided by the Energy Information Administration (EIA).
Calculating Homes per Megawatt
The average number of homes per megawatt of PV for a given state is simply the quotient of the average PV system performance estimate and the average annual household consumption. The graphic below outlines the homes/MW methodology for NY. This calculation was repeated for every state.
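To make the arithmetic concrete, the short sketch below (written in R) works through that quotient with placeholder figures; the generation and consumption values are illustrative assumptions, not SEIA's published numbers for any particular state.

```r
# Illustrative homes-per-MW calculation for a single state.
# Both input values below are assumptions, not SEIA or EIA figures.
annual_generation_mwh_per_mw    <- 1300  # assumed PV output per MW-dc per year (MWh)
annual_consumption_mwh_per_home <- 7.1   # assumed household electricity use per year (MWh)

homes_per_mw <- annual_generation_mwh_per_mw / annual_consumption_mwh_per_home
round(homes_per_mw)  # about 183 homes per MW under these assumptions
```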
Due to differences in PV system performance and annual energy consumption per household, the number of homes powered by a MW of solar can vary significantly from state to state. The chart below shows the average number of homes powered by a MW of solar in some of the main solar markets across the country.
National Average Homes/MW Methodology
The current national average of homes powered by a MW of solar is 164. The new average, which is slightly lower than SEIA’s previous 200 homes/MW estimate, reflects more current installation data and continued market diversity. More specifically, the 200 homes/MW figure was calculated when California represented between 80-90% of the U.S. solar market. At the end of 2012, California represented 35% of the cumulative national total of PV installations. The national average homes/MW figure is constantly changing as state markets continue to develop.
The methodology behind the national average calculation includes the following steps:
- First, the total number of homes powered by PV within each state was calculated.
- The first step in calculating the total number of homes powered within a state was to determine the total output generated by PV. The total PV generation within a state is the product of the respective state’s average PV system performance estimate and cumulative installed PV capacity.
- Then, the total PV generation was divided by the average annual electricity consumption per household within the respective state. The quotient is the total number of homes powered by PV within the state.
The flow chart below outlines the step-by-step process used to calculate the total number of homes powered by PV in New York. This process was repeated for all 50 states.
Once the total number of homes powered by PV was calculated in every state, the totals were summed to show the national total number of homes powered by PV. The national total was then divided by the national cumulative installed PV capacity. The quotient is the national average number of homes powered by a MW of PV. The flow chart below outlines the final step in the methodology.
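The same roll-up can be sketched in a few lines of R. The per-state numbers below are placeholders chosen only to show the arithmetic; they are not actual installation or consumption data.

```r
# Hypothetical per-state inputs (placeholder values, not real SEIA/EIA data)
states <- data.frame(
  state          = c("CA", "NY", "NC"),
  installed_mw   = c(2900, 180, 300),    # cumulative installed PV capacity (MW)
  gen_mwh_per_mw = c(1600, 1200, 1400),  # average annual generation per MW (MWh)
  mwh_per_home   = c(6.9, 7.1, 12.7)     # average annual household consumption (MWh)
)

# Step 1: total homes powered by PV within each state
states$homes_powered <- states$installed_mw * states$gen_mwh_per_mw / states$mwh_per_home

# Step 2: national average = total homes powered / total installed capacity
national_homes_per_mw <- sum(states$homes_powered) / sum(states$installed_mw)
round(national_homes_per_mw)  # ~218 with these placeholder inputs
```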
Standard PV panel capacity is measured in direct current (DC) watts under Standard Test Conditions (STC), so this is how the industry typically tracks product volume. The DC output from panels must be converted to alternating current (AC) before being put to use in a home or distributed on the electric grid. Large utility-scale PV, CPV, and CSP plants typically report their capacity already converted to AC watts.
Sand is a natural unconsolidated granular material. Sand is composed of sand grains which range in size from 1/16 to 2 mm (62.5…2000 micrometers). Sand grains are either mineral particles, rock fragments or biogenic in origin. Finer granular material than sand is referred to as silt. Coarser material is gravel. The majority of sand is dominantly composed of silicate minerals or silicate rock fragments. By far the most common mineral in sand is quartz. Hence, the term "sand" without qualification is generally assumed to mean a material composed mostly of quartz. However, sand is a natural mixture, which means that it is never pure. By no means can one say that quartz and sand are the same thing. Consolidated sand is a rock type known as sandstone.
Colorful sand samples from various corners of the world:
1. Glass sand from Kauai, Hawaii
2. Dune sand from the Gobi Desert, Mongolia
3. Quartz sand with green glauconite from Estonia
4. Volcanic sand with reddish weathered basalt from Maui, Hawaii
5. Coral sand from Molokai, Hawaii
6. Coral pink sand dunes from Utah
7. Volcanic glass sand from California
8. Garnet sand from Emerald Creek, Idaho
9. Olivine sand from Papakolea, Hawaii
Formation of sand
Sand forms mostly by the chemical and/or physical breakdown of rocks. This process is collectively known as weathering. Physical and chemical weathering are usually treated separately, but in reality they usually go hand in hand and it is often difficult to separate one from another because they tend to support each other.
Chemical weathering is a much more important sand-producing factor overall. It operates most efficiently in humid and hot climates. Physical weathering dominates in cold and/or dry areas. Weathering of bedrock which produces sand usually takes place in soil. Soil covers bedrock as a thin layer, providing moisture for the disintegration of rocks.
Weathered rapakivi granite on the coast of Karelia, Russia (The Gulf of Finland).
Granite is a common rock type and serves as a great example of sand forming processes. Granite is composed of feldspar (pink and white) which decomposes chemically into clay minerals. Another important constituent of granite is quartz (gray). Quartz is very resistant to chemical weathering. It does not alter to any other mineral — quartz is quartz and will remain that way. It eventually goes into solutions but VERY slowly. Hence, disintegrated granite yields lots of quartz grains which will be transported mostly by running water as sand grains. The sample is from Italy. Width of view 21 cm.
Here is a picture of disintegrated granite (sand sample) from Sweden. It is a mixture of angular quartz and feldspar grains. The abundance of feldspar and angularity of the grains is a strong hint that this sand sample has not been transported long from its source area and the climatic conditions can not be humid and hot. Width of sample 20 mm.
Here is an example of a mature sand sample from the USA (St. Peter Sandstone from the Ordovician Period). It is composed of almost pure and well-rounded quartz grains. This is what eventually happens if we give nature enough time to chemically destroy most of the other minerals that were present in the source rocks. St. Peter Sandstone has seen greatly increased demand in recent years because it is well-suited for fracking purposes. Width of view 20 mm.
But what happens to the other minerals? They are either converted to new minerals that are stable in atmospheric conditions (mostly clay minerals) or get carried away as ions in hydrous solutions and end up in the oceans. So it is freshwater rivers that carry ions to the sea and make it salty, which is a nice irony. Clay minerals are carried by rivers as well, and we usually refer to this load of clay as mud. There is a muddy, temporarily dry riverbed in the picture. Mud that covers these rocks is a mixture of clay minerals, fine sand, silt, and water. Barranco de las Augustias, Caldera de Taburiente, La Palma.
Composition of sand
Sand is a residual material of preexisting rocks. It is therefore composed of minerals that were already there in the rocks before the disintegration commenced. However, there is one important aspect — sand occurs in a harsh environment where only the strongest survive. By “strongest” I mean the most resistant to the weathering processes.
Quartz is one of these minerals (list of minerals in sand) but not the only one. It is so dominant in most sand samples because it is so abundant. 12% of the crust is composed of it. Only feldspars are more abundant than quartz. (Here is more information about the composition of the crust).
Relatively rare minerals like tourmaline, zircon, rutile, etc. are also very resistant to weathering, but they rarely make up more than a few percent of the composition of sand. These minerals are collectively referred to as heavy minerals.
Heavy minerals may sometimes occur in sand in much higher concentrations. This is usually a result of hydrodynamic sorting. Either sea waves or river flow sort out heavier grains and carry lighter ones away. Such occurrences are known as placers and they are often used as a valuable mineral resource. Minerals that are often extracted from placer deposits are gold, cassiterite, ilmenite, monazite, magnetite, zircon, rutile, etc.
Concentrate of zircon extracted from beach sand in South Africa. Width of view 12 mm.
Quartz definitely dominates in most sandy environments, but it is usually accompanied by feldspars. Feldspars are only moderately stable in atmospheric conditions, but their overall volume in common rocks is huge. More than half of the whole crust is composed of feldspars. Other common rock-forming minerals like amphiboles and micas also frequently occur in sand. Some common minerals in certain rocks like olivine and pyroxenes occur in sand in smaller volume because their resistance to weathering is nothing to brag about.
However, there are enough sandy beaches that are mostly composed of pyroxenes and olivine with magnetite. How can anything like that happen? Such beaches with a black sand occur in volcanically active areas where quartz-bearing rocks are missing. Pyroxenes and olivine are common minerals in mafic rocks like basalt. Black sand is a typical phenomenon of oceanic volcanic islands where granite is missing and felsic quartz-rich rocks rare.
Basalt pebbles near the southern tip of La Palma slowly transforming into black sand typical to volcanic oceanic islands.
Black sand forms in volcanic islands if quartz and biogenic grains are not available. Here is a basaltic cliff and black sand on La Palma, Canary Islands.
Siesta Key beach sand in Florida, on the other hand, is composed almost exclusively of quartz grains and is therefore as white as it possibly can be.
Most sand samples consist of sand grains which are composed of a single mineral — quartz grains, feldspar grains, etc. But sand may also contain grains that are aggregates of crystals i.e. fragments of rocks (known also as lithic fragments). Lithic sand is usually immature and it also tends to form when rocks are very fine-grained. Granite usually disintegrates into distinct mineral grains, but phyllite and basalt for example are often so fine-grained that they tend to occur in sand as lithic fragments. Lithic fragments are also common in regions where erosion is rapid (mountainous terrain). You can find more about immature sand in this article: Sand that remembers the rock it once was.
Sometimes sand contains new minerals or mineral aggregates that were non-existent in the source rocks. Notable example is a clay mineral glauconite which forms in marine sand and gives distinctive dark green color to many sand samples. In some instances glauconite in sand may come from disintegrated glauconitic sandstone nearby, but eventually it is of marine origin anyway.
Glauconitic sand from France. Width of view 20 mm.
There are many other strange sand samples that require special formation conditions. One good example is sand in New Mexico that is composed of pure gypsum. I have written about it here: Gypsum sand. Sand with such a composition is odd and unexpected because gypsum is an evaporite mineral. It was precipitated out of hyper-saline water and it goes easily into solution again. Hence, it can only survive in dry conditions with no outlet to the sea. Halite, which is even more soluble than gypsum, is also known to form sand in special conditions.
Volcanic ash is usually treated separately, not as a type of sand, probably because we humans tend to create artificial barriers and classification principles. We think that sand is a collection of sedimentary particles, but sedimentary and igneous rocks are two different worlds. In reality, this is more complicated because there is every reason to say that volcanic ash grains (and other pyroclastic particles like lapilli and volcanic bombs) are also sedimentary particles, because they get deposited on the ground not much differently than a sand grain in a dune does. Volcanic ash and sand even have comparable classification principles — volcanic ash is a pyroclastic sediment with an average grain size of less than 2 millimeters. Hence, volcanic ash is a volcanic analogue of sand and silt.
Volcanic ash from St. Helens is composed of pumice fragments and mineral grains. Width of view 20 mm.
The third major and versatile component of sand (the other two were mineral grains and lithic fragments) is grains of biogenic origin. Biogenic sand is composed of fragments of the exoskeletons of marine organisms. Common contributors are corals, foraminifera, sea urchins, sponges, mollusks, algae, etc. Such sand is usually known as coral sand, although in many cases it contains no coral fragments at all. Biogenic sand is light-colored and widespread on low-latitude marine beaches, although there are exceptions. Corals indeed live only in warm water, but many other taxa can do well in colder climates (coralline algae, clams, some forams). Most biogenic sand grains are calcareous and provide material for limestone formation. Most limestones are former calcareous muds deposited on the seafloor.
Sometimes sand contains or is entirely composed of well-rounded carbonate grains that are not fragments of dead marine organisms. These grains are ooids that also require special formation conditions.
Biogenic sand from Tuamotu is mostly composed of forams. Width of view 20 mm.
Ooid sand from Cancún, Yucatán, Mexico. Width of view 5 mm.
Sand does not need to be a pure collection of either mineral, lithic, or biogenic grains. In many cases two of them and sometimes even three are mixed.
Mixture of mafic volcanic rocks and various biogenic grains in a sand from the Azores archipelago. Width of view 20 mm.
Mixture of dark-colored volcanic rocks, worn-out biogenic grains, and some silicate grains from Jeju-do Island, South Korea. Width of view 20 mm.
Texture and transport of sand
Geologists describe sand by measuring the roundness of grains and the distribution of grain sizes. By doing that they hope to shed some light on the origin of the grains being measured. Roundness usually gives information about the length of the transport route, and the distribution of grain sizes helps to determine which environment the grains come from. River sand is usually poorly sorted and compositionally immature. Beach sand is more rounded, and eolian dune sand is generally well sorted.
Poorly sorted river sand from Sikkim, India. Width of view 20 mm.
The average size of grains is determined by the energy of the transport medium. Higher current velocity (either stream flow or sea waves) can carry heavier load. Coarse-grained sediments therefore reveal that they were influenced by energetic medium because finer material is carried away.
Sometimes river flow is so energetic that sand grains are all carried away and only large rounded stones remain. Such lithified deposits of former riverbeds are known as conglomerates. Photo taken in Cyprus.
Sand is mostly transported by rivers, but average sand grains are too large and heavy for an average river to carry them in suspension. Hence, sand grains tend to move in jumps. They are lifted up by a more energetic current, settle out when the current velocity decreases, and then wait for the next jump. This mode of movement is known as saltation. An average silt grain moves differently. It is lightweight enough to be carried in suspension for a long time, and this is actually one of the most important reasons why we treat silt separately from sand.
Most sand grains carried by rivers are eventually deposited at river mouths, where the current velocity suddenly drops. Then sea waves (longshore currents) take over and carry the sand along the coastline. Sand grains carried by rivers are also deposited on alluvial flood plains and point bars (the inside bends of streams, where the current flow is slowest).
Sand is also transported by wind, ocean currents, glaciers, turbidity currents, etc. Moving sand forms landforms like ripples and dunes.
Wave ripples on a tidal flat in Ireland.
Dunes of Maspalomas on Gran Canaria.
Sand dunes near Stovepipe Wells, Death Valley (Mesquite Dunes).
Sand dune in Sahara (Morocco) on a windy day. |
What student—or teacher—can resist the chance to experiment with Velocity Radar Guns, Running Parachutes, Super Solar Racer Cars, and more? The 30 experiments in Using Physical Science Gadgets and Gizmos, Grades 3–5, let your elementary school students explore a variety of phenomena involved with speed, friction and air resistance, gravity, air pressure, electricity, electric circuits, magnetism, and energy.
The authors say there are three good reasons to buy this book:
1. To improve your students’ thinking skills and problem-solving abilities.
2. To get easy-to-perform experiments that engage students in the topic.
3. To make your physics lessons waaaaay more cool.
The phenomenon-based learning (PBL) approach used by the authors—two Finnish teachers and a U.S. professor—is as educational as the experiments are attention-grabbing. Instead of putting the theory before the application, PBL encourages students to first experience how the gadgets work and then grow curious enough to find out why. Working in groups, students engage in the activities not as a task to be completed but as exploration and discovery using curiosity-piquing devices and doohickeys.
The idea is to motivate young scientists to go beyond simply memorizing science facts. Using Physical Science Gadgets and Gizmos can help them learn broader concepts, useful thinking skills, and science and engineering practices (as defined by the Next Generation Science Standards). And—thanks to those radar guns and race cars—both your students and you will have some serious fun.
For more information about hands-on materials for Using Physical Science Gadgets and Gizmos, visit Arbor Scientific at http://www.arborsci.com/nsta-es-kits
Table of Contents
About the Authors
An Introduction to Phenomenon-Based Learning
How to Use This Book
Learning Goals and Assessment
PBL in Finland
Authors’ Use of Gadgets and Gizmos
2 Friction and Air Resistance
Dragging the Block
Sliding the Puck
Is Running a Drag?
4 Air Pressure
Fun Fly Stick
Having a Ball
6 Electric Circuits
Lighting a Lamp
Getting to Know the Switches
Learning Types of Connections
Creating Voltage With a Simple Battery
Creating Voltage With a Hand Crank
Basic Units of Electricity
Snaptricity Setups for the Circuits in This Chapter
Appendix: How to Order the Gadgets and Gizmos |
Washington [US], June 6 (ANI): Astronomers discovered complex organic molecules in the most distant galaxy to date, using NASA's James Webb Space Telescope.
The study was published in Nature. The discovery of the molecules, which are found in smoke, soot, and smog on Earth, demonstrates Webb's ability to help understand the complex chemistry that goes hand in hand with the birth of new stars even in the universe's early history. The new findings, at least for galaxies, call into question the old adage that "where there's smoke, there's fire." Using the Webb telescope, Texas A&M University astronomer Justin Spilker and collaborators found the organic molecules in a galaxy more than 12 billion light-years away. Because of its extreme distance, the light detected by astronomers began its journey when the universe was less than 1.5 billion years old -- about 10% of its current age. The galaxy was first discovered by the National Science Foundation's South Pole Telescope in 2013 and has since been studied by many observatories, including the radio telescope ALMA and the Hubble Space Telescope.
Spilker notes the discovery, reported this week in the journal Nature, was made possible through the combined powers of Webb and fate, with a little help from a phenomenon called gravitational lensing. Lensing, originally predicted by Albert Einstein's theory of relativity, happens when two galaxies are almost perfectly aligned from our point of view on Earth. The light from the background galaxy is stretched and magnified by the foreground galaxy into a ring-like shape, known as an Einstein ring.
"By combining Webb's amazing capabilities with a natural 'cosmic magnifying glass,' we were able to see even more detail than we otherwise could," said Spilker, an assistant professor in the Texas AM Department of Physics and Astronomy and a member of the George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy. "That level of magnification is actually what made us interested in looking at this galaxy with Webb in the first place because it really lets us see all the rich details of what makes up a galaxy in the early universe that we could never do otherwise."The data from Webb found the telltale signature of large organic molecules akin to smog and smoke --building blocks of the same cancer-causing hydrocarbon emissions on Earth that are key contributors to atmospheric pollution. However, Spilker says the implications of galactic smoke signals are much less disastrous for their cosmic ecosystems.
"These big molecules are actually pretty common in space," Spilker explained. "Astronomers used to think they were a good sign that new stars were forming. Anywhere you saw these molecules, baby stars were also right there blazing away."The new results from Webb show that this idea might not precisely ring true in the early universe, according to Spilker.
"Thanks to the high-definition images from Webb, we found a lot of regions with smoke but no star formation and others with new stars forming but no smoke," Spilker added.
University of Illinois Urbana-Champaign graduate student Kedar Phadke, who led the technical development of the team's Webb observations, noted that astronomers are using Webb to make connections across the vastness of space with unprecedented potential.
"Discoveries like this are precisely what Webb was built to do: understand the earliest stages of the universe in new and exciting ways," Phadke said. "It's amazing that we can identify molecules billions of light-years away that we're familiar with here on Earth, even if they show up in ways we don't like, like smog and smoke. It's also a powerful statement about the amazing capabilities of Webb that we've never had before."The team's leadership also includes NASA's Goddard Space Flight Center astronomer Jane Rigby, University of Illinois professor Joaquin Vieira and dozens of astronomers around the world.
The discovery is Webb's first detection of complex molecules in the early universe -- a milestone moment that Spilker sees as a beginning rather than an end.
"These are early days for the Webb Telescope, so astronomers are excited to see all the new things it can do for us," Spilker said. "Detecting smoke in a galaxy early in the history of the universe? Webb makes this look easy. Now that we've shown this is possible for the first time, we're looking forward to trying to understand whether it's really true that where there's smoke, there's fire. Maybe we'll even be able to find galaxies that are so young that complex molecules like these haven't had time to form in the vacuum of space yet, so galaxies are all fire and no smoke. The only way to know is to look at more galaxies, hopefully even further away than this one." (ANI) |
As we move forward each year, urbanization and industrial activity push environmental sustainability to new lows. Canadian birds are major contributors to our biodiversity but are victims of habitat loss, collisions, the climate crisis, and more, as a result of human actions. In fact, more than 3 billion birds, almost 1 in 3 individuals, have been lost from Canada and the US in under 50 years. We decided to build our package, NatureReads, to process and visualize data about these birds in an effort to help preserve the species.
What it does
Using NatureCounts, one of the largest databases on Canadian birds managed by Birds Canada, we developed various functions that visualize and filter information in meaningful graphs, tables, and maps. For example, people are able to find a map of the migration trend of a species, a species list for a specific area, or a graph of the number of sightings per province. We then compiled all of these functions in a package called NatureReads.
- Gets (or plots the distribution of) the top n most frequent species in an area.
- Generates a named list of the region data in a specific provided region
- Searches for a species by a name, returning the first ID in the search results
- Gets the English name for a species given its ID, falling back to the scientific name
- Gets (or plots) the most common species in each area of the provided data
- Finds observations in a region defined by a polygon
- Plots the population density of a species.
- Plots the estimated migration path of a species filtered by year.
- Plots a limited number of the locations of the observations of the given data which can be filtered by a specific species
- Plots the line graph of the number of sightings per year
- Plots the bar graph of the number of sightings per province
- Plots a limited number of sightings of the given data
- Calculates the relative observation trend for a species
- Gets (or plots) the area containing the most sightings of a species
- Gets the province or state containing the most sightings of the species in the given data
How we built it
We used R for all the functionalities of our package and used the naturecounts dataset for our data. For our functions to display the data, we used the Plotly library to visualize information in user-friendly and accessible ways. We also used some other libraries, such as dplyr, to make the data processing easier, and sf to process geographic data.
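As a flavor of how these pieces fit together, here is a minimal sketch of a sightings-per-province bar chart using dplyr and Plotly. The column names and sample data are assumptions for illustration only; this is not the actual package code.

```r
library(dplyr)
library(plotly)

# Hypothetical observation data in the general shape we worked with
obs <- data.frame(
  species  = c("Blue Jay", "Blue Jay", "Common Loon", "Blue Jay"),
  province = c("ON", "QC", "ON", "BC")
)

# Count sightings per province, then draw a bar chart
counts <- obs %>% count(province, name = "sightings")
plot_ly(counts, x = ~province, y = ~sightings, type = "bar") %>%
  layout(title = "Bird sightings per province",
         yaxis = list(title = "Number of sightings"))
```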
Challenges we ran into
Most of us had no experience with R before starting, so it was a challenge to learn the language and make good use of it in the limited time we had.
Accomplishments that we're proud of
Creating a functional package with useful functions for data processing and visualization in a limited time frame.
What we learned
- How to do data analysis and visualization with R
- R package design
What's next for NatureReads
We plan on making the API more self-consistent and adding more features to our package such as ML prediction models on bird locations.
Log in or sign up for Devpost to join the conversation. |
What is the herpes virus?
Herpes Simplex Virus (HSV) is a highly contagious virus that commonly causes sores on the mouth or genitals. Once you have it, it stays in your body forever. No medication can cure it completely, though you can control it through medications and home care.
Herpes is a virus that causes skin sores. The medical term for it is the Herpes Simplex Virus (HSV). It most commonly produces sores on the mouth or genitals.
Herpes is very common and doesn’t usually cause severe health problems. However, it is very contagious and there is no cure.
There are two types of herpes: oral and genital. Oral herpes is called HSV-1, and genital herpes is called HSV-2.
Signs and symptoms of herpes
Both kinds of herpes cause outbreaks of painful sores on the skin. Symptoms of herpes include:
Oral herpes sores
Sometimes called cold sores, HSV-1 produces painful sores that look like blisters at first. They eventually burst and crust over. It usually takes a week to 10 days for the sores to clear up.
Genital herpes sores
The sores that appear in the genital area can come from HSV-1 or HSV-2. Like the sores on the mouth, they start as painful blisters, then dry up and heal over time.
Although they usually show up around the mouth or genitals, herpes sores can appear anywhere on the body.
During a herpes outbreak, you might feel other symptoms such as fever, tiredness, or body aches.
Not all people who have herpes have frequent outbreaks. Some people might have a single outbreak then never show symptoms again. The virus may stay dormant in their body.
Causes of herpes
You can catch both kinds of herpes through direct contact with an affected person.
HSV-1 carriers can pass it along even if they don’t have symptoms. Any skin-to-skin contact can transmit the virus.
Touching an open herpes sore then touching another part of your skin can spread herpes to new areas, including your eyes.
Take care not to touch sores and wash your hands immediately if you do touch one.
People usually get HSV-2 through sexual contact. Oral, anal, and genital sex can all transmit herpes. You can get HSV-2 even if your partner doesn’t have any symptoms of the virus.
It also is possible to get HSV-1 on your genitals through oral sex.
Pregnant women can pass herpes on to their babies. In some cases, this can cause serious problems. If you are pregnant, you should discuss your herpes risk with your doctor.
Many people get HSV-1 as babies or children from non-sexual contact with saliva from an adult who already has the virus.
Anyone can get herpes, though people with weakened immune systems can be more susceptible to herpes infections.
Some people have periodic outbreaks. Other illnesses, sun exposure, menstrual periods, or stress can trigger these outbreaks.
People usually find that their first outbreak is the worst one. During that outbreak, the virus moves from the skin cells to nerve cells, where it will stay forever. Later outbreaks are milder and not as painful. Some people have a tingling sensation before a new outbreak starts.
Diagnosis for herpes
If you have an outbreak of sores, your doctor can examine them to diagnose herpes. They may take a swab from the sore to test it for the presence of the virus.
If you don’t have an outbreak, your doctor can order blood tests to diagnose herpes.
Treatments for herpes
Herpes is not a virus that goes away. Once you have it, it stays in your body forever. No medication can cure it completely, though you can control it.
There are ways to relieve the discomfort from the sores and medications to reduce outbreaks.
There are three prescription antiviral medicines your doctor might give you. They can all decrease the severity and frequency of outbreaks. They also can help prevent you from spreading the virus to other people.
The medications are acyclovir, valacyclovir, and famciclovir.
At home, you have a few options to reduce the discomfort from herpes sores. Some options you can try include:
- Antiviral creams: You can buy antiviral cold sore medicine without a prescription. Products that contain docosanol or benzyl alcohol are helpful.
- Ice: Sucking ice chips or applying cold compresses to the sores can reduce pain.
- Pain relievers: Over-the-counter pain medicines can help. Topical medications that contain benzocaine, lidocaine, or dibucaine can reduce pain from sores. Oral pain medicine like acetaminophen or ibuprofen might also help.
If you have an outbreak or think you might be about to get an outbreak near your mouth, you should avoid kissing, oral sex, and sharing toothbrushes, towels, cups, and silverware.
If you or your partner has an outbreak of genital herpes, or if either of you think you may have an outbreak soon, you should not have sex.
Condoms can reduce the risk of spreading herpes. However, even with condoms, there is still a chance of transmitting herpes if the sores are in a place that the condom doesn’t cover.
Wash your hands well after touching sores or areas where you think a sore might be about to appear.
If you are pregnant, tell your doctor if you or your partner has genital herpes.
Medically Reviewed on 1/25/2021
Sustainable agriculture practices are essential for ensuring food security and minimizing the environmental impact of farming. With increasing water scarcity and growing concerns about water quality, it becomes crucial to adopt efficient water management strategies in agriculture. Reverse osmosis (RO) technology has emerged as a valuable tool in sustainable agriculture, offering a range of benefits that enhance crop yield while conserving water resources. In this blog, we will explore the role of reverse osmosis in sustainable agriculture, focusing on how it can improve crop productivity and contribute to water conservation efforts.
What is the Role of Reverse Osmosis in Sustainable Agriculture?
1- Water Purification and Irrigation:
One of the significant applications of reverse osmosis in sustainable agriculture is water purification for irrigation purposes. RO systems can effectively remove contaminants, salts, and other impurities from water sources, ensuring that crops receive clean and high-quality irrigation water. By using RO-treated water, the risk of soil salinity and nutrient imbalances is reduced, leading to improved crop health and increased yield. Additionally, the absence of harmful substances in the irrigation water reduces the chances of crop diseases and pest infestations.
2- Nutrient Management:
RO systems can also play a role in nutrient management, specifically in hydroponic and aquaponic systems. These systems rely on a nutrient-rich water solution to provide essential elements to plants. By using RO technology to purify water sources, farmers can precisely control the nutrient composition of the solution, ensuring optimal growth conditions for plants. This enables farmers to tailor the nutrient supply to the specific needs of different crops, resulting in improved nutrient uptake, enhanced plant health, and higher crop yields.
3- Efficient Water Usage:
Water scarcity is a significant challenge in agriculture, making efficient water usage critical for sustainable farming practices. Reverse osmosis can contribute to water conservation efforts in several ways:
a. Concentrate Management: RO systems generate a concentrate stream as a byproduct of the purification process. Instead of being disposed of, this concentrate can be used for other purposes, such as fertilization or non-potable water applications. This reduces water waste and maximizes the utilization of available resources.
b. Precision Irrigation: RO-treated water allows for precise control over irrigation volumes and timing. This precision irrigation approach ensures that crops receive the necessary amount of water, minimizing water loss through runoff or over-irrigation. By providing plants with the right amount of water at the right time, water usage is optimized, resulting in significant water savings.
c. Recycling and Reuse: RO systems can be integrated into water recycling and reuse systems on farms. Wastewater, drainage water, or runoff from fields can be treated with reverse osmosis, allowing for the recovery and reuse of the water. This reduces reliance on freshwater sources and minimizes the need for additional irrigation water, leading to overall water conservation.
4- Environmental Impact Reduction:
The use of reverse osmosis in agriculture contributes to the reduction of environmental impacts associated with conventional farming practices. By purifying water sources, the use of chemical fertilizers and pesticides can be minimized, resulting in less pollution of water bodies and soil. Additionally, efficient water usage through RO technology reduces the need for extensive water extraction, preserving natural water resources and ecosystems.
Reverse osmosis technology offers significant benefits in sustainable agriculture by enhancing crop yield and promoting water conservation. Through water purification, nutrient management, efficient water usage, and reduced environmental impact, RO systems contribute to more sustainable farming practices. By adopting reverse osmosis in agriculture, farmers can optimize irrigation water quality, reduce water waste, and minimize the use of chemicals, leading to improved crop productivity, water resource sustainability, and long-term agricultural viability. |
Black holes. They are not rare objects to be regarded only with terror. There are billions of them, of various types. They are scary, perhaps, but only because no one knows, for now, how they form or why they manage to become supermassive, millions or billions of times heavier than the Sun. We know that they are at the center of almost all galaxies.
What is a black hole?
In astrophysics, a black hole is a celestial body with a gravitational field so intense that it does not let matter or electromagnetic radiation escape. From a relativistic point of view, it is a region of spacetime with such great curvature that nothing from its interior can escape, not even light, since the escape velocity exceeds c, the speed of light.
A black hole is the result of the implosion of a sufficiently large mass. Gravity dominates over every other force, so a gravitational collapse occurs which tends to concentrate matter toward a point at the center of the region, where a state of matter with curvature tending to infinity and volume tending to zero is theorized, called a "singularity", with characteristics unknown and outside the laws of general relativity. The boundary of the black hole is defined as the event horizon, a region that delimits its observable boundaries in a peculiar way.
Due to the above properties, the black hole is not directly observable. Its presence is revealed only indirectly through its effects on the surrounding space: the gravitational interactions with other celestial bodies and their emissions, the mainly electromagnetic irradiation of matter captured by its force field.
During the decades following the publication of general relativity, the theoretical basis of their existence, numerous observations were collected that could be interpreted, although not always uniquely, as evidence of the presence of black holes, especially in some active galaxies and binary X-ray star systems. The existence of such objects is now definitively demonstrated, and new ones with widely varying masses are gradually being identified, from about 5 to billions of solar masses.
A black hole is an exact solution of the field equations of Einstein's theory of general relativity. The solution was discovered by the German physicist Karl Schwarzschild while serving in the army as a volunteer in World War I. His solution predicts a singularity at a sphere of a given radius, which is called the Schwarzschild radius. If the radius of a stellar object is smaller than the Schwarzschild radius, then everything that has mass, and even photons, must inevitably fall into the central body. When the mass density of this central body exceeds a defined limit, a gravitational collapse is triggered which, if it occurs respecting spherical symmetry, generates a black hole. The Schwarzschild solution, which makes use of Schwarzschild coordinates and the Schwarzschild metric, leads to a derivation of the Schwarzschild radius, which is the size of the event horizon of a non-rotating black hole.
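To get a feel for the scales involved, here is a minimal R sketch of the Schwarzschild radius formula, r_s = 2GM/c^2. The masses plugged in (one solar mass, and roughly 4 million solar masses for a Sgr A*-like object) are the figures quoted elsewhere in this article.

```r
# Schwarzschild radius: r_s = 2 * G * M / c^2
G       <- 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
c_light <- 2.998e8     # speed of light (m/s)
m_sun   <- 1.989e30    # one solar mass (kg)

schwarzschild_radius <- function(mass_kg) 2 * G * mass_kg / c_light^2

schwarzschild_radius(m_sun)        # ~2.95e3 m: about 3 km for a solar-mass object
schwarzschild_radius(4e6 * m_sun)  # ~1.2e10 m for a mass of ~4 million suns
```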
There are perhaps 10 million to a billion black holes in the Milky Way alone, a galaxy of which the solar system and therefore all of us are part. The same order of magnitude for each of the billions of galaxies in the universe. You do the math.
We have recently been able to develop an observation system capable of "seeing" them. The quotation marks are necessary because, as said, we can see them only indirectly. What we do is translate frequencies not visible to the naked eye into colors. We have "photographed" two of them so far.
Two close encounters
The first is located at the center of Messier 87, abbreviated M87, a huge elliptical galaxy with a radius of about 150 kiloparsecs (1 parsec = 3.26 light-years and 1 kiloparsec is 1,000 parsecs, so 150 kiloparsecs is about 490,000 light-years), much more massive than our Milky Way (from the Latin via lactea, which derives from the Greek γαλακτικός κύκλος (galaktikos kýklos), "milky circle"; apparent diameter between 100 and 200 thousand light-years; recent simulations suggest that it is surrounded by dark matter extending over a diameter of about two million light-years).
The second is Sagittarius A*, abbreviated Sgr A*, 27 thousand light-years away from Earth and roughly 1,000 times less massive than the black hole of M87: about 4 million solar masses against perhaps 6.5 billion for M87. While M87 is binge eating in a compulsive and exaggerated way, Sgr A* is on a strict diet, and so is much less bright. It is more difficult to observe: only 17 times larger than our Sun and 27,000 light-years away. Resolving an image of Sgr A* is like resolving the image of an apple on the lunar surface. To make things even more difficult, there is the rotation speed of the plasma, at some 1,000 billion degrees centigrade, that surrounds it, 1,000 times faster than what happens around M87, which changes its appearance from one minute to the next.
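The apple comparison can be checked with small-angle arithmetic. The sketch below is only an order-of-magnitude illustration: it uses the Schwarzschild diameter from the previous snippet (the visible ring is actually a few times larger than the horizon itself), the 27,000-light-year distance quoted above, and an assumed 8-centimeter apple.

```r
# Angular size comparison: Sgr A* versus an apple on the Moon
ly            <- 9.461e15     # metres in one light-year
dist_sgra     <- 27000 * ly   # distance to Sgr A* (m)
diameter_sgra <- 2 * 1.18e10  # Schwarzschild diameter for ~4 million solar masses (m)

angle_sgra  <- diameter_sgra / dist_sgra  # small-angle approximation, radians
angle_apple <- 0.08 / 384.4e6             # an ~8 cm apple at the Moon's distance

to_microarcsec <- function(rad) rad * (180 / pi) * 3600 * 1e6
to_microarcsec(angle_sgra)   # ~19 microarcseconds
to_microarcsec(angle_apple)  # ~43 microarcseconds, the same order of magnitude
```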
The observations collected in 2017 required two years of work to process the image of M87, while it took five for Sgr A*. The two images are actually very similar. The implication is that, regardless of their size, when you get to the edge of a black hole, gravity commands everything.
When a star dies, because it has run out of nuclear fuel, if it is heavy enough, gravity overcomes the intrinsic resistance of matter and the star collapses catastrophically. The trace left, while the stellar matter continues to fall into the hole that has been generated, towards a completely unknown destiny, is the horizon of events. The remains of the sucked material orbit around it and their energy illuminates the scene. The trajectory of the emitted light is changed by the curvature of space caused by the black hole’s mass. The light emitted behind the black hole is then redirected towards the observer. You don’t see the black hole, but the disc of light that surrounds it. It’s the light, or rather electromagnetic radiation, which can be observed.
This is precisely what the EHT, which stands for Event Horizon Telescope, does: a system consisting of 8 radio telescopes (LMT-Large Millimeter Telescope, Mexico; SPT-South Pole Telescope, Antarctica; SMT-Submillimeter Telescope, Mount Graham, Arizona; SMA-SubMillimeter Array, Maunakea, Hawaii; ALMA-Atacama Large Millimeter/submillimeter Array, Chile; APEX-Atacama Pathfinder Experiment, Chile; JCMT-James Clerk Maxwell Telescope, Hawaii; IRAM 30m, Institute of Millimetric Radio Astronomy, Pico Veleta, Andalusia, Spain) which, working in a coordinated way from the South Pole to Spain, make the Earth a single gigantic virtual telescope, with the required resolution.
Over the course of 10 consecutive nights, they observed and collected Sgr A* data. Petabytes of it, too much even for the internet. More than 1,000 ultra-high-capacity memory disks were physically transported to two data processing centers: the Haystack Observatory near Boston, USA, and the Max Planck Institute for Radio Astronomy in Bonn, Germany.
The two photos are the result of decades of observation and research. They are a stop on the journey that began in 1918, when the astronomer Harlow Shapley was the first to observe the congregation of stars at the center of the Milky Way. It is the place in space where powerful radio emissions were later detected, suggesting the presence of a massive but compact object. Very compact.
The two images were made possible by the results of the research aimed at the ability to follow, with very high precision, the path of the stars. Research awarded with the 2020 Nobel Prize in Physics to Roger Penrose, Reinhard Genzel, and Andrea Ghez. Penrose “for the discovery that black hole formation is a robust prediction of the general theory of relativity” in his 1965 work. Genzel and Ghez “for the discovery of a supermassive compact object at the centre of our galaxy“, Sgr A*, precisely. |
Hands-on learning is an integral component in early childhood education.
The manipulation and experimenting of materials provide a reference of learned concepts, and enables young children to construct meaningful experiences that aid their ability to commit new information to memory.
For this activity, your little one learned about shipbuilding in colonial America. To tie this in to our Thanksgiving theme, we discussed how the pilgrims sailed to the United States on a boat called the Mayflower.
They also learned that they were at sea for sixty-six days and landed at Plymouth Rock in the year 1620.
Using styrofoam blocks, blocks, small boxes, and “sails”, we created our own version of the Mayflower!
We then added small people as our pilgrims to supplement our play. |
Acids and bases are everywhere. However, you are most likely to hear those terms used in chemistry. The difference between acids and bases has to do with how they ionize in water. Keep things crystal clear by breaking down the properties of acids vs. bases. See each different substance through real-world examples.
Difference Between Acids and Bases
Acids and bases play important roles in chemistry, but you can also find them around the house. Therefore, knowing the difference between the two is important for safety.
While there are several key differences in chemical properties between acids and bases, the main one is their pH level. Acids have a pH level lower than 7.0 while bases have a pH level higher than 7.0. Discover why acids and bases have a different pH level along with other important properties by looking at each substance.
What Are Acids and Bases?
How you define an acid or a base depends on three different theories in chemistry: Arrhenius Theory, Bronsted-Lowry Theory, and Lewis Theory.
- Arrhenius Theory: acids elevate the H+ (hydrogen ion) concentration in water, while bases elevate the OH- (hydroxide ion) concentration in water.
- Bronsted-Lowry Theory: acids donate protons, while bases accept protons.
- Lewis Theory: acids accept pairs of electrons, while bases donate pairs of electrons.
Acid vs. Base pH
However, for those without a science degree, it’s best to remember the pH (power or potential for hydrogen). You can test the pH of something with a pH test strip (litmus paper). Acids turn blue litmus paper red, while bases turn red litmus paper blue. This is because acids have a low pH, while bases have a high pH. You might not realize this, but pH strips are used to test the water in your pool. See? Chemistry doesn’t just happen in a lab.
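For readers who want to see the number behind the strip, pH is the negative base-10 logarithm of the hydrogen-ion concentration. The tiny sketch below (in R) uses made-up concentrations purely as examples.

```r
# pH = -log10 of the hydrogen ion concentration (mol/L); the inputs are examples
ph <- function(h_ion_concentration) -log10(h_ion_concentration)

ph(1e-3)   # 3  -> acidic (below 7)
ph(1e-7)   # 7  -> neutral, like pure water
ph(1e-11)  # 11 -> basic (above 7)
```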
Quick Breakdown of Properties of Acids and Bases
In addition to defining acids and bases, it’s important to look at their different properties.
- pH level: acids have a pH of less than 7.0, while bases have a pH of more than 7.0.
- Ions in water: acids release hydrogen ions (H+) in water, while bases release hydroxide ions (OH-).
- Litmus test: acids turn blue litmus paper red, while bases turn red litmus paper blue.
Examples of Acids and Bases
Ready for a few acid and base examples? You’ll be surprised to hear some of these are things you have in your house.
When you think of acids, you might think of solutions that can burn your flesh. However, there are all kinds of acids.
- Citric acid (oranges and lemons)
- Acetic acid (vinegar)
- Hydrochloric acid (stomach acid)
- Carbonic acids (soft drinks)
- Nitric acid
Looking for a common household base? Think baking soda. You might find these other bases as well.
- Ammonium hydroxide (ammonia water)
- Magnesium hydroxide (milk of magnesia)
- Sodium borate (borax)
- Calcium hydroxide (limewater)
Safety With Acids vs. Bases
It’s important to know the difference between acids and bases in terms of safety. Since both strong and weak acids and bases can be corrosive and severely burn skin and eyes, it’s always important to proceed with caution when handling either. Just be aware that:
- Strong acids have a pH of about 1 or less depending on concentration. They can be extremely dangerous and reactive such as the sulfuric acid used in battery acid.
- Strong bases are those with a pH of about 13 or so depending on the concentration. Strong bases include bleach.
Additionally, it’s important to know the difference between acids and bases because mixing the two together can cause a reaction. While mixing vinegar and baking soda can create a great cleaning agent, mixing strong acids and bases can create toxic fumes or even explosions.
Acids and Bases in Chemistry
It’s important to know the difference between acids and bases. This is true in chemistry and even in your own home. Don’t let your science knowledge stop at acids and bases. Keep this learning going through looking at physical and chemical weathering. Learning is fun! |
Scientists Find Previously Unknown Jumping Behavior in Insects
For Immediate Release
A team of researchers has discovered a jumping behavior that is entirely new to insect larvae, and there is evidence that it is occurring in a range of species – we just haven’t noticed it before.
The previously unrecorded behavior occurs in the larvae of a species of lined flat bark beetle (Laemophloeus biguttatus). Specifically, the larvae are able to spring into the air, with each larva curling itself into a loop as it leaps forward. What makes these leaps unique is how the larvae are able to pull it off.
“Jumping at all is exceedingly rare in the larvae of beetle species, and the mechanism they use to execute their leaps is – as far as we can tell – previously unrecorded in any insect larvae,” says Matt Bertone, corresponding author of a paper on the discovery and director of North Carolina State University’s Plant Disease and Insect Clinic.
While there are other insect species that are capable of making prodigious leaps, they rely on something called a “latch-mediated spring actuation mechanism.” This means that they essentially have two parts of their body latch onto each other while the insect exerts force, building up a significant amount of energy. The insect then unlatches the two parts, releasing all of that energy at once, allowing it to spring off the ground.
“What makes the L. biguttatus so remarkable is that it makes these leaps without latching two parts of its body together,” Bertone says. “Instead, it uses claws on its legs to grip the ground while it builds up that potential energy – and once those claws release their hold on the ground, that potential energy is converted into kinetic energy, launching it skyward.”
The discovery of the behavior was somewhat serendipitous. Bertone had collected a variety of insect samples from a rotting tree near his lab in order to photograph them when he noticed that these beetle larvae appeared to be hopping.
Bertone and paper co-author Adrian Smith then decided to film the behavior in order to get a better look at what was going on. That’s when they began to understand just how peculiar the behavior was. Smith is a research assistant professor of biological sciences at NC State and head of the Evolutionary Biology & Behavior Research Lab at the North Carolina Museum of Natural Sciences.
“The way these larvae were jumping was impressive at first, but we didn’t immediately understand how unique it was,” Bertone says. “We then shared it with a number of beetle experts around the country, and none of them had seen the jumping behavior before. That’s when we realized we needed to take a closer look at just how the larvae was doing what it was doing.”
To determine how L. biguttatus was able to execute its acrobatics, the researchers filmed the jumps at speeds of up to 60,000 frames per second. This allowed them to capture all of the external movements associated with the jumps, and suggested that the legs were essentially creating a latching mechanism with the ground.
The researchers also conducted a muscle mass assessment to determine whether it was possible for the larvae to make their leaps using just their muscles, as opposed to using a latch mechanism to store energy. They found that the larvae lacked sufficient muscle to hurl themselves into the air as far or as fast as they had been filmed jumping. Ergo, latching onto the ground was the only way the larvae could pull off their aerial feats.
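For a sense of the back-of-the-envelope reasoning involved, the sketch below estimates mass-specific kinetic energy and power from the reported mean takeoff velocity. The larval mass and the duration of the launch impulse are assumptions chosen only for illustration; they are not values from the paper.

```r
# Rough jump energetics for a larva (illustrative assumptions, not paper values)
v_takeoff <- 0.47   # m/s, mean takeoff velocity reported in the study
mass      <- 5e-6   # kg (about 5 mg), assumed larval mass
t_impulse <- 1e-3   # s, assumed duration of the launch impulse

kinetic_energy <- 0.5 * mass * v_takeoff^2             # joules
specific_power <- (kinetic_energy / mass) / t_impulse  # watts per kg of body mass

kinetic_energy  # ~5.5e-7 J
specific_power  # ~110 W/kg under these assumptions; comparing such figures with
                # what muscle alone can deliver is how spring-loading is inferred
```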
Meanwhile, in an unrelated video about jumping maggots, Smith had included a short clip of the jumping behavior in L. biguttatus. That video was seen by a researcher in Japan named Takahiro Yoshida, who had witnessed similar jumps in the larvae of another beetle species called Placonotus testaceus, but had not published anything related to the behavior.
“We don’t have high-speed footage of P. testaceus, but the video evidence we do have from Yoshida’s lab suggests that this previously unknown behavior is found in two different genera which are not even closely related,” Bertone says.
“This raises a lot of questions. Has this behavior evolved separately? Is it found in other beetle species? Are these genera more closely related than we previously suspected? There’s a lot of interesting work to be done here.”
Video of the jumping behavior in L. biguttatus can be found at https://www.youtube.com/watch?v=y-b73G96UIQ.
The paper, “A Novel Power-Amplified Jumping Behavior in Larval Beetles (Coleoptera: Laemophloeidae),” is published open access in the journal PLOS ONE. The paper was co-authored by Yoshida, of Tokyo Metropolitan University; Joshua Gibson, of the University of Illinois at Urbana-Champaign; and Ainsley Seago, of the Carnegie Museum of Natural History. The work was done with partial support from the Japan Society for the Promotion of Science for Young Scientists.
Note to Editors: The study abstract follows.
“A Novel Power-Amplified Jumping Behavior in Larval Beetles (Coleoptera: Laemophloeidae)”
Authors: Matthew A. Bertone, North Carolina State University; Joshua C. Gibson, University of Illinois at Urbana-Champaign; Ainsley E. Seago, Carnegie Museum of Natural History; Takahiro Yoshida, Tokyo Metropolitan University; and Adrian A. Smith, North Carolina State University and the North Carolina Museum of Natural Sciences
Published: Jan. 19, PLOS ONE
Abstract: Larval insects use many methods for locomotion. Here we describe a previously unknown jumping behavior in a group of beetle larvae (Coleoptera: Laemophloeidae). We analyze and describe this behavior in Laemophloeus biguttatus and provide information on similar observations for another laemophloeid species, Placonotus testaceus. Laemophloeus biguttatus larvae precede jumps by arching their body while gripping the substrate with their legs over a period of 0.22 ± 0.17s. This is followed by a rapid ventral curling of the body after the larvae releases its grip that launches them into the air. Larvae reached takeoff velocities of 0.47 ± 0.15 m s-1 and traveled 11.2 ± 2.8 mm (1.98 ± 0.8 body lengths) horizontally and 7.9 ± 4.3 mm (1.5 ± 0.9 body lengths) vertically during their jumps. Conservative estimates of power output revealed that some but not all jumps can be explained by direct muscle power alone, suggesting Laemophloeus biguttatus may use a latch-mediated spring actuation mechanism (LaMSA) in which interaction between the larvae’s legs and the substrate serves as the latch. MicroCT scans and SEM imaging of larvae did not reveal any notable modifications that would aid in jumping. Although more in-depth experiments could not be performed to test hypotheses on the function of these jumps, we posit that this behavior is used for rapid locomotion which is energetically more efficient than crawling the same distance to disperse from their ephemeral habitat. We also summarize and discuss jumping behaviors among insect larvae for additional context of this behavior in laemophloeid beetles. |
Parents around the world tell their children to be careful about speaking to strangers. It’s good advice for our innocent offspring, because even if the risk of a dangerous interaction is very small, the consequences could be very great. Yet when it comes to the idea of sending unsolicited messages out into the cosmos to try to get a response from technological alien life, we seem to forget these sensible domestic rules. That goes for everything from the earliest messages beamed into space to one outlined last month by a group of researchers from the Jet Propulsion Laboratory at Caltech and several international organizations. Each could spark consequences beyond our control.
As a case in point, the first deliberate, high-powered radio message sent by humanity to another world lacked any semblance of careful thought, or any poetry. It consisted of the Morse code dots-and-dashes for three words: “MIR,” “LENIN,” and “USSR.” Mir means “peace” or “world,” the other two words need little explanation.
This cryptic transmission was beamed into the cosmos in November of 1962 from the Pluton Complex in Crimea by a planetary radar made from 16-meter-wide radio dishes mounted on a structure built out of railway bridge trusses and the hulls of a pair of repurposed Soviet submarines—a brutalist architect’s delight. The message was used to produce a radar echo from the planet Venus, but decades later scientists realized that the eventual lucky recipient of this pronouncement is a star some 2,100 light-years away, and therefore not yet aware of its role in human history.
The oddly parochial nature of this first step in extraterrestrial messaging was not without precedent, or so very different from efforts that followed it. For instance, there’s a famous but probably apocryphal anecdote about how, in 1820, the physicist Carl Friedrich Gauss proposed sending signals to other worlds by creating a vast representation of the Pythagorean theorem of right-angled triangles, carved into the Siberian tundra and outlined with pine forest. Gauss may or may not have suggested this, but having invented the heliotrope for reflecting sunlight over great distances, he did write to the German astronomer Heinrich Olbers in 1822 about building a huge heliotrope array of hundreds of mirrors to message “our neighbors on the Moon.”
In the late 1800s, the French inventor Charles Cros also thought about mirrors for signaling Mars or Venus, except his idea was to use these to focus sunlight and to burn features onto the desert zones of those worlds, possibly the most aggressive notion of extraterrestrial messaging thus far envisioned. To be fair, Cros had a special kind of creative mind, as demonstrated by his proclivity for writing poems such as The Kippered Herring, presaging the absurdist antics of Monty Python by more than a half-century.1
Should any one group of humans be speaking on behalf of the entire species?
By the 1970s, our vastly improved knowledge about the universe and our increasingly complex technology led to more sophistication. Most famously, in 1974, the Arecibo radio observatory in Puerto Rico (that could also transmit as a planetary radar) sent a demonstration message toward a globular cluster of stars about 25,000 light-years away. The act of transmission itself was a bit of a throwaway stunt; the more interesting and rigorous work went into constructing the message: a string of 1,679 bits of clearly non-random, attention-provoking data that could be displayed as an image depicting items like the decimal number system, life’s elemental constituents, the shape of DNA, and a few other tidbits about the human form, and the architecture of our solar system. In this sense the Arecibo message was mostly a Gedankenexperiment, a thought experiment in how we might “ping” an alien species to get noticed.
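One small detail worth spelling out (not stated explicitly above, but easy to verify) is that the length of the message was chosen to be a semiprime, so a recipient could arrange the bits into a rectangular image in essentially only one sensible way:

$$1{,}679 = 23 \times 73$$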
Most recently, in April, physicist Jonathan Jiang and his colleagues published their idea of “A Beacon in the Galaxy”: an updated binary-code message that could be transmitted toward a star cluster nearer to the Milky Way’s center, to increase the “reach” of the signal to potentially habitable systems.2 The message uses images of 128 by 128 pixels—compact enough to transmit efficiently, but large enough to depict a wealth of information, from our location details to mathematical proofs, biological data, particle physics, and so on.
To fend off concerns about talking to strangers, these scientists suggest that it’s likely for long-lived, technological intelligences to have figured out how to be cooperative and peace-loving, or else they wouldn’t have survived. This rather hopeful sentiment echoes others across the years. However, it’s rooted in our experience as a parochial, planet-bound species. In our world of finite resources, the strategy of cooperation, and even altruism, is indeed often successful—and is seen across individual genetic lineages and between species. But it’s not at all clear that this can apply to life elsewhere.
In truth, all of these laissez-faire approaches to messaging extraterrestrial intelligence, or METI, minimize what could be serious risk factors. The first is whether any one group of humans should be speaking on behalf of the entire species, and the second is whether METI is actually fundamentally dangerous.
That latter risk is the most difficult to assess, in the same way that the possibility of any kind of extraterrestrial communication is incredibly difficult to evaluate because of the array of assumptions that have to be made. Will, for example, intelligences across the universe have any properties in common? Will the pieces of information that we think are universal and recognizable mean anything at all to aliens? We might imagine, like with Pythagoras’ Theorem or Arecibo’s list of atoms and DNA, that some qualities of the world will be discovered by all species technologically capable of hearing our messages. But it’s hardly a given, and what appear to us as dry facts about the natural world could represent wholly different things to an alien, for whom a geometric rule might connote an act of aggression, or a molecular structure might represent a holy relic.
The late Stephen Hawking felt that the biggest concern is not in appearing aggressive, but in attracting aggression. He questioned whether making our presence known isn’t just an invitation for any roaming interstellar species on the lookout for resources to show up ready to consume and conquer. A 2017 paper by Peter Todd and Geoffrey Miller analyzed the possible evolution of extraterrestrial psychologies and concluded3 that sending out hopeful messages to ETs is basically announcing that you live somewhere without competition, in effect saying, “Here is a delectable treat—a home-world with valuable and easy-to-acquire resources, lightly guarded by a gullible young species.”
Ideas can be dangerous, especially meme-like ideas that spread contagiously.
That sentiment can seem a little off-base though. After all, resources like water and metals are abundant in the cosmos. You’d hardly need to raid an inconvenient little planet for any of these things. Maybe though there are more precious, but harder to quantify resources in planetary habitability, and in the wealth of novelty produced by billions of years of natural selection and evolution—these things could be worth traipsing between the stars for.
For others, like the science-fiction author Liu Cixin writing in his trilogy The Three-Body Problem, the danger comes from the “Dark Forest Hypothesis.” This posits that all species want to survive, and if they succeed, they will continuously expand. But our galaxy has finite usable spaces and resources, and it is impossible to obtain real-time knowledge about other species due to the limits of the speed of light. Everyone is intent on their own survival and must assume that by the time they learn of aliens, those aliens may have already developed more capable technology, and greater ambitions. Consequently, the most logical and prudent course of action is to remain silent and try to eliminate any other species that you learn of, before they eliminate you, or before they evolve to a level where they can eliminate you. In other words, keep your head down in the forest, or else.
These are all pretty grim assessments. But there are also hazards from the mere act of communication. In 2014 I wrote in Nautilus about the potential for transmitted information to destabilize and perhaps even destroy a civilization—whether intentionally or unintentionally. Ideas can be dangerous, especially meme-like ideas that spread contagiously and could, in principle, be “weaponized” to destroy just as effectively as a massive fleet of spacecraft.
Nonetheless, the astronomer and SETI expert Jason Wright has pointed out that we tend to fall prey to a “monocultural fallacy” in thinking about alien civilizations when we assume that decisions and actions will be made as one. Clearly that is not true for humans, and messages have gone barreling out into the cosmos because a few people have decided to send them and the rest of our species simply hasn’t cared to notice. If there is any characteristic that is most likely to apply to alien species it may well be this same one; whoever messages us, or hears us, could easily be a few specialists, or obsessives, who may or may not have influence over their civilization.
Perhaps though, in the end, the most decisive factor lies in the stratification of time due to light’s finite speed. This means that all decisions to do with ETs (whoever’s ETs those are) will be based on extrapolation from information that gets more and more outdated with distance. You may have just picked up the “Hello World” (or “MIR”) signal from a distant star, but by now that species could be a bomb-wielding interstellar terror or have been rendered harmless by extinction. If you return a message you simply don’t know what state the recipient will be in when it receives that signal, or what state you will be in if you get a conversational reply. That isn’t just a problem for distances of hundreds of light-years, even a few light-years makes a difference—just consider our own changing conditions between the middle of 2019 and 2022, or between 1938 and 1945.
Just like our parental admonition to never talk to strangers, we should think carefully about talking to species when we can’t talk to them in any temporally fixed way, or know how our future selves will deal with this. The Soviet researchers who sent “MIR,” “LENIN,” and “USSR” off into the galaxy couldn’t have known what was coming down the line for their way of life. Perhaps the real risk of messaging extraterrestrials is that no one will ever get to have the conversation they’d like.
Caleb Scharf is the director of astrobiology at Columbia University. His latest book is The Ascent of Information: Books, Bits, Genes, Machines, and Life’s Unending Algorithm. Follow him on Twitter @caleb_scharf.
1. Bates, S. Revolutionary nonsense: Charles Cros’s Kippered Herring. The French Review 57, 601-606 (1984).
2. Jiang, J.H., et al. A beacon in the galaxy: Updated Arecibo message for potential FAST and SETI projects. arXiv:2203.04288 (2022).
3. Todd, P.M. & Miller, G.F. The evolutionary psychology of extraterrestrial intelligence: Are there universal adaptations in search, aversion, and signaling? Biological Theory 13, 131-141 (2018). |
Plant & Food Research’s Dr David Stevenson explains what free radicals are and how they are produced. He outlines the role of cellular structures called mitochondria in the production of free radicals. He also describes the negative and positive effects of free radicals. The term ‘antioxidant’ is defined and common examples given.
Point of interest
One of the problems that David faces in his research is that some phytochemicals show antioxidant activity in cells outside of the body but not necessarily in cells inside the body. A large number of health foods claim to have antioxidant activity, but is there evidence to show that the activity is in the living body (in vivo) or in a cell line outside the body (in vitro)?
DR DAVID STEVENSON
Free radicals are essentially molecules with broken bonds, so they are highly reactive and will react with anything that they come into contact with. They are mainly produced in the body as a side-effect of generating energy. Our cells generate energy through little particles within them called mitochondria – they are sort of known as the powerhouses of the cell.
Mitochondria carry out a controlled oxidation process of food – effectively the same as burning, but it’s much more controlled. In the process, they generate energy and they also reduce the oxygen to water, and the first stage of that is to convert the oxygen into a free radical called superoxide. Some of the superoxide doesn’t get further reduced, so it drifts away from the enzyme that produces it and can cause damage to proteins or DNA.
An antioxidant – it’s more of a chemical definition than a biological one – it’s a compound which… you put it into a test where there is another compound that generates free radicals, it will neutralise those free radicals, and the less antioxidant you need to put into the test to get a response, the better it is.
The best known antioxidants that people would be familiar with would be vitamins C and E. They are made by plants. The type of antioxidant we would get a lot of in our diet will be the polyphenols, because when you put those into the chemical antioxidant assay, a lot of them work very well.
The term we use for excess free radical production is ‘oxidative stress’, so that’s when there’s so many free radicals produced that they leak out of the mitochondria and go around and cause a lot of damage. Now that might happen if, say, an unfit person takes up running and does far too much of it in one go.
We definitely don’t want to get rid of free radicals completely from the body because they are signalling molecules. The production of free radicals when we exercise sends a signal to various systems like the muscles, the mitochondria, the lungs, the arteries and everything to sort of up their game, and it tunes them up so that they will be able to function better next time we exercise. |
The historical study of war begins with military history: battles and wars, generals and troops, tactics and strategy. Historians recognize that wars have been waged for many reasons, however, including dynastic ambition, religious sectarianism, and political ideology. To understand how war works, a broad range of methods must be brought into play. By looking at political history, we can see how domestic conflicts and constitutional debates have shaped the ways in which wars were fought, and explore the consequences—territorial, political, institutional—of victory and defeat. Social historians might study the everyday experiences of rank-and-file soldiers, or consider how life changed for the families whom soldiers left behind. Alternately, they might examine antiwar and resistance movements, or the ways in which ordinary people coped with the horrors of extreme violence. Cultural historians might consider war as a subject of epic poetry, triumphant sculpture, or martial music. They might also look at popular cultures of war, or at how new forms of communication (books, posters, films) have often permitted new varieties of propaganda. Historians of technology examine not only shifts in weaponry, from the spear to the drone, but also other transformations in material culture (canned food) and communication methods (the telegraph, social media). Yale historians study and teach about the causes, nature, and consequences of warfare in all corners of the world, from antiquity to the present. |
Lesson Plan of Expressions in Conversation-1
Students' Learning Outcomes
- Use appropriate expressions in conversation to:
- Express and respond to opinions.
- Offer and accept an apology.
Information for Teachers
- Study class 3 and 4 related plans for better understanding and progression of the SLO.
- An opinion is a judgment, thought, or belief about someone or something.
- An apology means admitting one's mistake and saying 'sorry'.
- Note: keep the charts displayed in the class for some months.
Material / resources
Writing board, chalk/marker, charts, textbook
- Write the following on the board and ask the class for their answers.
Explain that the answers/responses could be different opinions (go over the meaning of opinion once again). Obviously, all of us cannot have the same opinions.
Explain the following table to the students. Divide the class into groups of five. Each student must get a turn to pick any one way of asking the opinion from the table given. The students must ask about which thing he/she wants the opinion. Any other student from the group can pick a way of responding by adding his/her point of view about that.
Then the next student will ask the opinion about something else and any other student will reply. All the students should get a chance to both ask and answer in the group
Sum up/ Conclusion
- Ask how the words 'thank you', 'please', and 'sorry' make our conversation polite.
- Ask about the new expressions they learnt through this lesson.
- Ask students:
- Which expressions could they use for accepting an apology? Quote two to three examples.
- Which expressions could be used for giving opinions in favor of the topic? Give two to three examples.
- Which expressions could be used for giving opinions against the topic? Give two to three examples.
- Find the exercise related to the topic in the textbook. Students must do this exercise in their notebooks or in the textbook.
- Give students a situation about any topic of your choice or one of the topics given below:
- Your opinion about:
- Importance of Cleanliness
- Importance of Education
- Growing more trees
- Ask them to write five sentences giving opinions, using any five of the words and phrases given below:
- I am sorry to say but…….
- It is sad but…………….
- I think it is……………
- We all know how it is, however,
- I feel…….
- I would like to…………
- If you ask me
- I would love to say that…….
- Students will practice these common rules of courtesy with their friends, teachers and family.
- (Note: Family members do not need to be literate for this, but the students' practice will improve their own communication skills.)
- Ask them to write at least 10 sentences about how to receive and accept an apology, as the students have learnt and discussed in class.
A printable version of this information leaflet for parents and carers who have English as an additional language can be found in our download section.
What rights does my child have?
The law in the UK gives children rights which parents and schools have the responsibility to maintain.
- the right to a free school place between the ages of 5 and 16.
- the right to be in a safe environment where they can learn and to be protected from harm.
- Schools must help if they think a child is being harmed, abused or not looked after.
What does the law say about attending school?
From the age of 5, education is compulsory in the UK and your child must attend every day and on time unless they are ill. Taking your child on holiday during term time is not allowed. If you need to take your child out of school speak to the headteacher.
Will my child be treated differently to other children?
By law, schools must promote equality for everyone and encourage good relations between different groups of people; this is part of the Equality Act 2010.
The UK has also signed the UN Convention on the Rights of the Child which says everyone under 18 has the right to:
- Express their opinions and be listened to
- Freedom of thought, belief and religion.
- An education that enables children to fulfil their potential.
- Learn and use the language, customs and religion of their family even if they are different from the country where they live.
- Have the same rights as all other children in that country if they are refugees or seeking refuge.
How will I know what is happening in school?
Schools try to let parents know what is happening in different ways. The website is a good place to start. If you need information provided in a particular way, ask at the school office.
Will my child be taught about religion?
This will vary depending on the school, but they will have collective worship (sometimes called assembly) and Religious Education (RE) lessons. If you do not want your child to take part, talk to the headteacher.
What can I do to help my child in school?
Parents and carers are welcome to get involved in school life. This could be through:
- Attending parents’ meetings or consultations.
- Joining in celebrations, concerts, and assemblies
- Volunteering, for example helping with activities or school visits
- Joining the Parent Teacher Association.
- Becoming a Governor
What should I do if my child is unhappy?
If your child is unhappy, you need to tell the staff so they can help. Contact the school office or class teacher and ask for an interpreter if required.
How can I help my child learn?
There are lots of things you can do to help your child:
- Ask questions and talk about the topics studied in your first language.
- Use a bilingual dictionary or a translation app.
- Encourage them to write down any new words in English or first language to help them remember them.
- If they do not understand something encourage them to ask the teacher to explain it again
- Use pictures and objects to explain things.
- Play games which include learning.
- Use real life situations such as shopping trips.
- Give children time and praise them.
What homework will my child get?
Most schools give children work to do at home; this will be different for different ages. It is important to help and encourage your child to complete their homework.
Why do I need to share books with my child?
Reading at home is important. You can do this in your first language, talking about the pictures and what is happening in the story. The important thing is to enjoy reading together.
Will improving my English help my child?
There are many advantages to learning English as your child starts to learn it in school. It will help you to understand your child’s learning. You can also learn alongside your child and practice speaking, reading and writing English together. Improving your English will help you to communicate with staff and parents at your child’s school.
How can I improve my English?
If you want to improve your English your local college might offer classes for adults. There are also websites which will help.
- ESOL Nexus is a free website for people living in the UK to improve English and understand more about UK life and work
- Learn English is a free website with games, stories, listening activities and grammar exercises
- BBC Learning English is a free website where you can practice and improve your English. |
Alliteration is derived from the Latin littera, meaning "letter of the alphabet." It is a stylistic device in which several words with the same first consonant sound appear close together in a series.
Consider the following examples:
But better butter makes a better dough
A big bully hits a baby
Both sentences are alliterative because the same first sound (B) appears in words very close together and produces alliteration in the sentence. An important point to remember is that alliteration depends on sounds, not letters, so the phrase "not knotty" is alliterative, but "cigarette chasing" is not.
Common examples of alliteration
In our daily lives, we notice alliteration in the names of different companies, which makes brand names catchy and easy to memorize. Fictional characters and real people can also stand out prominently in our minds because of the effect of their alliterative names.
Examples of Alliteration in Literature
Example # 1
From Samuel Taylor Coleridge's "The Rime of the Ancient Mariner":
"The fair breeze blew, the white foam flew,
The furrow followed free;
We were the first that ever burst
Into that silent sea."
In these lines we see alliteration ("b," "f," and "s") in the phrases "breeze blew," "foam flew," "furrow followed," and "silent sea."
Example # 2
From" The Dead ”by James Joyce
“ His soul fainted slowly when he heard the snow fall faintly across the universe and fall faintly, like the descent of its last end, on all the living and the dead. ”
We observe several instances of alliteration in the aforementioned prose work by James Joyce. The alliterations are with "s" and "f" in the phrases "fainted slowly" and "falling weakly" .
Example # 3
From "I know why the caged bird sings" by Maya Angelou
"Up the hall, the moans and screams merged with the foul smell of black woolen clothing worn in summer weather and withered green leaves on yellow flowers. ”
Maya gives us a striking example of alliteration in the above excerpt with the letters 's' and "w". We note that alliterative words are interrupted by other non-alliterative words between them, but the effect of alliteration remains the same. We immediately notice the alliteration in the words "screams", "foul smell", "summer", "weather" and "wilting". ".
Example # 4
From William Shakespeare's Romeo and Juliet (Act 1, Prologue):
"From forth the fatal loins of these two foes
A pair of star-cross'd lovers take their life."
This is an example of alliteration with "f" and "l" in the words "forth," "fatal," "foes" and "loins," "lovers," "life."
Example # 5
"The Witch of the Atlas," by the English Romantic poet Percy Bysshe Shelley, is a famous poem full of examples of alliteration. Among them are "wings of winds" (line 175), "sick soul to happy sleep" (line 178), "crystalline silence cells" (line 156), "Wisdom's wizard … wind … will" (lines 195–197), "drained and dried" (line 227), "light lines" (line 245), "green and resplendent" (line 356), and "crisp cloudscape" (lines 482–483).
Function of alliteration
Alliteration plays a very important role in poetry and prose. It creates a musical effect in the text that increases the pleasure of reading a literary piece. It makes reading and reciting poems attractive and engaging, which also makes them easier to memorize. It adds flow and beauty to a piece of writing.
In the marketing industry, as we discussed earlier, alliteration makes brand names interesting and easier to remember. This literary device is helpful in customer acquisition and sales.
As you may know, a black hole is an object with an enormously destructive gravitational pull; black holes can swallow even a nearby star. That is also hard to believe, because the idea of a large star being swallowed by a black hole is unimaginably strange. It sounds like science fiction.

But astronomers say that during a tidal disruption event, a star really can be consumed by a black hole. A tidal disruption occurs when a star passes very close to a large black hole. There, the black hole's strong gravitational pull draws the star's mass into it, in a process known as spaghettification: the black hole's intense gravity stretches the star's material vertically while squeezing it horizontally, like a strand of spaghetti.
Stephen Hawking also said that if an astronaut crossed a black hole's event horizon, the astronaut would be dragged in and fall into the black hole. Scientists have now been able to see how a star falls into a supermassive black hole: a rare emission of light from a dying star trapped by a supermassive black hole has been captured by telescopes on Earth.

Investigators monitored the event, known as AT2019qiz, for six months, during which telescopes saw a flare of light suddenly brighten and then fade away. The observations were made at visible, ultraviolet, and X-ray wavelengths, using instruments including the European Southern Observatory's Very Large Telescope and New Technology Telescope.

It has been very difficult to see something like this before, because when a black hole swallows a star it rapidly throws out dust-like matter that blocks our view. But this newly studied event was spotted shortly before the star was torn to pieces by the black hole. It happened about 250 million light-years from Earth, making it the closest tidal disruption event to Earth observed so far.
The nearest star, Alpha Centauri, is about four light-years away. A light-year is the distance light travels in a year — approximately 10 trillion kilometers. So even light, fast as it is, takes about four years to reach Alpha Centauri. We are nowhere near traveling at the speed of light, so we cannot go anywhere near these black holes.
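As a rough check on that figure (taking the speed of light to be about 300,000 km/s and a year to be about 31.5 million seconds):

$$1\ \text{light-year} \approx \left(3.0 \times 10^{5}\ \tfrac{\text{km}}{\text{s}}\right) \times \left(3.15 \times 10^{7}\ \text{s}\right) \approx 9.5 \times 10^{12}\ \text{km} \approx 10\ \text{trillion km}$$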
Scientists observed that the star swallowed in this event was about as large as our Sun, and that it was destroyed by a black hole millions of times the Sun's mass. This special event, recorded by telescopes, will no doubt be of great use to scientists who want to study how matter behaves near supermassive black holes in the future.
Did you find this article useful? Leave your comments below. |
Dementia is a complex medical disorder that affects an individual's cognitive function and is characterized by forgetfulness and memory loss. For several decades, dementia was considered an exclusive condition of the elderly; however, the number of cases of dementia has recently increased around the world.

In addition, the majority of cases of dementia, roughly 90%, have been associated with Alzheimer's disease, a neurodegenerative disorder that commonly affects the elderly. Forgetfulness can be quite devastating not only to an affected individual, but also to his or her loved ones. Severe forms of memory loss may involve the inability to remember names, birthdays, phone numbers, and even places of residence.
Major Risk Factors of Dementia
The major risk factors associated with dementia and forgetfulness include advanced age, low level of education, stroke, diabetes, and genetics. However, a recent report has shown that the feeling of loneliness may also increase the risk of dementia in certain individuals. According to Dr. Holwerda, loneliness involves the excessive use of two regions of the brain, the hypothalamus and the pituitary gland, which are also the same regions associated with dementia. The association between loneliness and forgetfulness may therefore be mainly based on the tissue damages that have accumulated in these brain regions.
Recent Medical Report Dementia and Loneliness
According to the recent medical report, the concept of loneliness in relation to forgetfulness is not simply the act of living alone or in isolation. In their report, loneliness is defined as the lack of social interaction or engagement and thus, an individual living with a big family may also feel lonely if he or she does not interact well with co-inhabitants. At the same time, it may also be possible for people living alone to engage in social interactions outside the home and thus, these individuals do not feel any form of isolation and loneliness.
The researchers of the study describe loneliness as a behavioral response to not being accepted by family, spouse, friends, or co-workers. It is a natural response for a person to feel distressed if the people they are interacting with are dissatisfied with their company. Loneliness may thus lead to insufficient stimulation of the brain, facilitating the development of dementia. The report has also emphasized that loneliness may be defined in terms of the quantity and quality of interaction with other people. A lack of social interaction, as well as a negative outcome from an interaction with another individual, may thus impart a feeling of loneliness.
What Lifestyle Means for Aging and Dementia as a Whole
The report has also pointed out that current lifestyles around the world have influenced the incidence of loneliness and, indirectly, forgetfulness. The rapidly increasing size of the aging population has resulted in the establishment of elderly care facilities, in which aging parents are placed under the care of healthcare personnel. In other cases, the elderly continue to live in their homes and, in time, live alone when one spouse passes away. Adult children may also choose to move out of the city, state, or country, and thus elderly parents tend to live far from their loved ones.
If You Feel Lonely…and Experience Forgetfulness
The researchers of the study carefully explained that the feeling of being lonely, and not being alone, has been associated with a higher chance of dementia or memory loss among the elderly. In addition, the researchers also emphasized that this specific risk factor for forgetfulness is independent of other conditions such as diabetes and cardiovascular diseases. This report may thus help physicians and other healthcare personnel in understanding the connection between forgetfulness and loneliness and identify elderly individuals who are vulnerable to this medical condition. It may also be possible to design intervention programs for the elderly that would improve their quality of life during ageing and possibly prevent memory loss. |
Mastodons were large, proboscidean mammal species of the extinct genus Mammut that inhabited North and Central America during the late Miocene or late Pliocene up to their extinction at the end of the Pleistocene 11,000 years ago. The American mastodon is the most recent and best-known species of the genus.
While mastodons had a size and appearance similar to elephants and mammoths, they were not particularly closely related. Their teeth differ dramatically from those of members of the elephant family; they had blunt, conical, nipple-like projections on the crowns of their molars, which were more suited to chewing leaves than the high-crowned teeth mammoths used for grazing; the name mastodon (or mastodont) means “nipple teeth” and is also an obsolete name for their genus. Their skulls are larger and flatter than those of mammoths, while their skeleton is stockier and more robust.
The American mastodon (Mammut americanum), the most recent member of the genus, lived from about 3.7 million years ago until it became extinct about 10,000 years BCE. It is known from fossils found ranging from present-day Alaska and New England in the north, to Florida, southern California, and as far south as Honduras.
The American mastodon resembled a woolly mammoth in appearance, with a thick coat of shaggy hair. It had tusks that sometimes exceeded five meters in length; they curved upwards, but less dramatically than those of the woolly mammoth. Its main habitat was cold spruce woodlands, and it is believed to have browsed in herds.
They are generally reported as having disappeared from North America about 12,700 years ago, as part of a mass extinction of most of the Pleistocene megafauna, widely presumed to have been as a result of rapid climate change in North America, as well as the sophistication of stone tool weaponry used by the Clovis hunters. The latest Paleo-Indians entered the American continent and expanded to relatively large numbers 13,000 years ago, and their hunting may have caused a gradual attrition of the mastodon population. |
A healthy human brain begins its life from a starting point on an individual human being’s genetic map and is influenced by the unique, multi-layered environment into which it is born. It is programmed to learn; to soak up the world around it; to make sense of it all; and, to find its own place and personality. As it drives the physical growth of the body, it is reaching further out into the world, giving it access to more sensory treasures. It is a relentless effort to learn about the world and gain influence over it.
The human brain learns through a remarkable process of gathering information from an environment that includes the body in which it resides; the support it receives from the people who provide care and nourishment; and from the infinite but unique physical universe around it.
No two brains are the same and each is born at a unique juncture of the dimensions of space, time, and energy. How it functions, physiologically, is a process of collecting data from the stimuli in the universe, through all its sensory apparatus, and forming connections and pathways along its neural network. While science has learned much about how a brain forms those connections and pathways along its neural network, there is much more to learn. The brain remains one of the great mysteries of the universe.

The world’s scientists seem to agree about how we describe this process, based upon their collective observations, and have identified key developmental milestones that are common to every brain, whatever the time frame in which the milestones are reached—a lesson the drafters of academic standards would do well to learn. We must remember, however, that the observations of these scientists only describe the brain’s function; they do not define it. The brain, we might say, is its own architect. The brain functions at its own pace and rhythm within a world of incessant change.
Educators who assume responsibility for teaching the child in whom the brain resides must remind themselves they have no control over what the brain may have experienced before we became involved. We must begin our work at the unique point where we find it on its developmental path. It is not ours to command.
Whatever he or she has endured, the brain’s motivation to learn is intrinsic. If environmental factors impede the brain’s growth and development at any point along the way, there is a price to be paid, but the brain is also a remarkably resilient entity that can learn almost anything. We have seen how people, even at an advanced age, can recover from debilitating strokes and some injuries. The brain is, at once, fragile and virile. It helps if we remember that the brain does not unlearn things; rather, it keeps making new connections, gradually building on and/or replacing what was known before. Thus it is never too late to start anew.
The more stress and trauma a young brain may have endured, however, the more it needs our patient time, love, attention, and protection. Any challenges the child presents to his or her teachers reflect life experiences over which that child has had no control. Our purpose is to neither label nor pass judgment; neither should we keep score or assign grades. Our mission is to help the brain move down its development path and help the child become the best version of him or herself. Even after periods of deprivation the brain is ready to learn, again. As it learns, the pace of learning accelerates.
We must never give up on a child’s potential to learn, to catch up when they are behind, or to create something of value to the world; with a little help from us. |
In the 1960s, Costa Rica had one of the highest population growth rates in the world at almost 4 percent. This caused major concern among demographers. Through changes in policy and education, the rate has steadily dropped until today it is slightly below 1 percent, less than replacement level.
On another front, Costa Rica has similarly achieved a remarkable turnaround. In the 1940s, 75 percent of the country was covered in rainforest, cloud forest, and mangrove. Over the next 40 years, more than half of all trees were logged; the country had the highest deforestation rate in the American hemisphere in the ’70s and ’80s. Starting in the 1990s, a forest conservation and restoration program was initiated based on the strategy of valuing forests by paying for their services, known as Payment for Environmental Services (PES). By harnessing the forces of economics, PES establishes the forest essentially as a utility company with parties who use the resources and services of the forest, mostly companies, paying for what and how they use it.
Over the past two decades, the program has become the most successful forest management model on earth. Most of the lost forests have been replanted and regrown, and remaining forests have been conserved. Costa Rica is a global biodiversity hotspot, where an estimated 5-6 percent of all known species can be found, remarkably on only 0.03 percent of the earth’s surface. Half a million documented plant, animal, and insect species, including iconic ones like the sloth and great green macaw, are found in this small country. One third of Costa Rica’s land is national park or national reserve, created during the first phase of PES — the conservation phase.
PES focuses on four ecosystem services: carbon sequestration, hydrological services, sustainable biodiversity management, and conservation of natural landscapes for tourism. Ecotourism has become Costa Rica’s number one business, featuring pristine beaches, volcanoes, wetlands, caves, rainforests, mountains, rivers, and waterfalls. The PES program goes beyond preserving the environment and building the economy. It also builds equity by directly empowering minority groups—Indigenous peoples and women. It promotes employment while simultaneously building cultural and natural capital.
Over the years, PES has evolved from protection and restoration, the conservation phase, to today’s ecosystem phase, where the focus is on integration of ecosystem services — looking at connectivity, resilience, biodiversity, and climate change. A strong focus of PES is on privately owned land, where it is often challenging to change behavior and outcomes.
Placing a monetary value on the natural world may seem unsavory and perhaps unethical to some, but it incentivizes people to protect and rebuild the environment. For too long, the planet’s irreplaceable natural services have been treated as freebies by businesses and landowners. Fortunately, because of its stellar success, Costa Rica’s economic approach to protecting and rebuilding nature’s systems is being adopted by a number of other nations and is getting increasing traction.
- Original author(s): Dennis Ritchie (AT&T Bell Laboratories)
- Developer(s): Various open-source and commercial developers
- Initial release: June 12, 1972
- Operating system: Unix, Unix-like, Plan 9, Microsoft Windows
In Unix, Plan 9, and Unix-like operating systems, the strip program removes information from executable binary programs and object files that is not essential or required for normal and correct execution, thus potentially resulting in better performance and sometimes significantly less disk space usage.
The information removed may consist of debugging and symbol information; however, the standard leaves the scope of the changes to the binary up to the implementer of the stripping program.
Furthermore, the use of strip can improve the security of the binary against reverse engineering, as it is comparatively more difficult to analyze a binary without the extra information that would otherwise be removed.
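As a concrete illustration, here is a minimal sketch of the typical workflow, assuming a Unix-like system with a C compiler and the file and strip utilities available; the file name hello.c and the exact messages printed by file are only indicative and vary between systems:

```c
/*
 * hello.c - a throwaway program used only to illustrate strip.
 *
 * A typical session (commands shown as comments; output varies by system):
 *
 *   cc -g -o hello hello.c    # build with debugging symbols included
 *   file hello                # reports something like "... not stripped"
 *   nm hello                  # symbol table is present and can be listed
 *   strip hello               # remove symbol and debugging information
 *   file hello                # now reports "... stripped"
 *   ./hello                   # still prints "hello, world" as before
 *
 * Stripping changes none of the program's behavior; it only removes
 * metadata that debuggers and symbol tools would otherwise use.
 */
#include <stdio.h>

int main(void)
{
    puts("hello, world");
    return 0;
}
```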
- Stripped binary
- Executable compression
- List of Unix commands
- Strings (Unix)
- Debug symbol
- Symbol table
- "strip", The Single UNIX Specification, Version 2, The Open Group, 1997
When most of us hear the word “arsenic”, we immediately think “poison” – and for good reason.
Arsenic is a naturally occurring semi-metal element widely distributed in the earth’s crust. It is found in rocks, soil, water, air, plants, and animals.
Natural activities such as volcanic action, erosion of rocks, and forest fires can release arsenic into the environment.
Unfortunately, it also comes from human activity:
Approximately 90 percent of industrial arsenic in the U.S. is currently used as a wood preservative, but arsenic is also used in paints, dyes, metals, drugs, soaps and semi-conductors. High arsenic levels can also come from certain fertilizers and animal feeding operations. Industry practices such as copper smelting, mining and coal burning also contribute to arsenic in our environment. (source)
In the environment, arsenic is combined with oxygen, chlorine, and sulfur to form inorganic arsenic compounds. Arsenic in animals and plants combines with carbon and hydrogen to form organic arsenic compounds.
Arsenic is odorless and tasteless, and it enters drinking water supplies from natural deposits in the earth or from agricultural and industrial practices.
Higher levels of arsenic tend to be found more in ground water sources than in surface water sources (lakes and rivers) of drinking water. The demand on ground water from municipal systems and private drinking water wells may cause water levels to drop and release arsenic from rock formations.
According to the EPA, certain regions of the U.S. have higher levels of arsenic:
Compared to the rest of the United States, western states have more systems with arsenic levels greater than EPA’s standard of 10 parts per billion (ppb). Parts of the Midwest and New England have some systems whose current arsenic levels are greater than 10 ppb, but more systems with arsenic levels that range from 2-10 ppb. While many systems may not have detected arsenic in their drinking water above 10 ppb, there may be geographic “hot spots” with systems that may have higher levels of arsenic than the predicted occurrence for that area.
In their report titled How politics derailed EPA science on arsenic, endangering public health, The Center for Public Integrity covered the political motivation and interference with improved drinking water standards.
From that report:
Urine samples collected by the Centers for Disease Control and Prevention from volunteers reveal that most Americans regularly consume small amounts of arsenic. It’s not just in water; it’s also in some of the foods we eat and beverages we drink, such as rice, fruit juice, beer and wine.
The EPA has been prepared to say since 2008, based on its review of independent science, that arsenic is 17 times more potent as a carcinogen than the agency now reports. Women are especially vulnerable. Agency scientists calculated that if 100,000 women consumed the legal limit of arsenic every day, 730 of them would eventually get bladder or lung cancer from it.
After years of research and delays, the EPA was on the verge of making its findings official by 2012. Once the science was complete, the agency could review the drinking water standard.
The EPA was preparing to make their findings public in 2012, but an investigation by the Center for Public Integrity found that one member of Congress blocked the release of those findings and in turn, any new regulations:
Mining companies and rice producers, which could be hurt by the EPA’s findings, lobbied against them. But some of the most aggressive lobbying came from two pesticide companies that sell a weed killer containing arsenic.
The EPA had reached an agreement with those companies to ban most uses of their herbicide by the end of last year. But the agreement was conditioned on the EPA’s completing its scientific review. The delay by Congress caused the EPA to suspend its ban. The weed killer, called MSMA, remains on the market.
Turning to a powerful lawmaker for help is one tactic in an arsenal used by industry to virtually paralyze EPA scientists who evaluate toxic chemicals.
The following graph shows how many people out of 100,000 would eventually get cancer if they consumed the current EPA drinking water limit every day for these carcinogens. The risks from arsenic exposure are shockingly high compared to other known toxins.
Cancer isn’t the only health risk arsenic poses:
The immediate symptoms of acute arsenic poisoning include vomiting, abdominal pain and diarrhea. These are followed by numbness and tingling of the extremities, muscle cramping and death, in extreme cases.
The first symptoms of long-term exposure to high levels of inorganic arsenic (e.g. through drinking-water and food) are usually observed in the skin, and include pigmentation changes, skin lesions and hard patches on the palms and soles of the feet (hyperkeratosis). These occur after a minimum exposure of approximately five years and may be a precursor to skin cancer.
Other adverse health effects that may be associated with long-term ingestion of inorganic arsenic include developmental effects, neurotoxicity, diabetes and cardiovascular disease. (source)
Clearly, the government will continue to interfere with the implementation of higher standards for water quality.
To avoid or minimize exposure, there are things that we can do:
1. Limit rice consumption: Studies have shown that all rice, organic and conventional, has a high level of naturally occurring arsenic and can also be contaminated by arsenic-containing pesticides. Limit or avoid using brown rice syrup as a sweetener. Avoid consumption of rice milk: Consumer Reports tested two common brands for arsenic and found that all samples exceeded EPA’s drinking water limit of 10 parts per billion. The range in rice milk was 17 to 70 parts per billion.
2. Check your drinking water: To find out if your drinking water contains arsenic, check EWG’s Tap Water Database. If you drink well water, contact your local health department to get it tested.
If your water does contain arsenic, stop drinking it. Switch to reverse osmosis water instead – this can either be purchased in bottles or you can prepare it at home by using a reverse osmosis system. Simple water filters will NOT remove arsenic from water. If you opt to use a home reverse osmosis system, continue to check your water for arsenic periodically.
3. Limit fruit juice consumption: Arsenic-based pesticides were used on fruit orchards in the early 1900s, and soil contamination remains an ongoing source of arsenic in tree fruits and grapes. Testing shows that some samples of apple, grape, and pear juices and juice blends have moderate amounts of arsenic.
4. Buy organic chicken. Arsenic is sometimes administered in conventional chicken farming to promote growth, add pigment to flesh, and prevent disease among chickens kept in close quarters. The arsenic gets into the meat and also in the water supply near these industrial farms, which means there is more arsenic in your food and in the environment. The use of arsenic is prohibited in organic farming, so buying organic chicken helps reduce your exposure. (source)
5. Test the soil in your yard: If you have a deck, playset or garden beds made from pressure-treated wood sold before 2004, the soil in your yard could be contaminated with arsenic. Pressure-treated wood sold for home use prior to 2004 contained chromated copper arsenate, or CCA, which prevented insect damage and rot. It’s been banned since then because it was found that over time it released toxic arsenic into soil and even children’s hands as they played on the structures. (source) Arsenic soil test kits can be used to see if your yard is contaminated.
Don’t count on the government to protect you and your family from arsenic (or any other) toxic substances. Political interests and corruption interfere with research and the development of truly safe standards.
According to the Centers for Disease Control, (CDC), there were 3 million adults and 470,000 children with epilepsy in the United States in 2015. A diagnosis of this condition can be frightening, mostly because of misinformation about it. Epilepsy is often just considered a disease of seizures that are uncontrolled. People fear getting it, or being around people who have it because they could have a seizure at any time. After reading this article, you should have an understanding of what it is and is not, and hopefully, epilepsy will not seem so scary.
Information. Epilepsy is a disorder of the central nervous system in which the cells of the brain send abnormal signals to each other. This abnormal signaling causes odd behaviors, sudden changes in emotions, and seizures. Anyone can get epilepsy; it occurs equally among men and women and all races. Although it can occur at any age, most epileptics are diagnosed during childhood or after age 65. |
The Anatomy of a Revolution
How do ideas change society?
Standard 10.2 Students compare and contrast the Glorious Revolution of England, the American Revolution, and the French Revolution and their enduring effects worldwide on the political expectations of self-government and individual liberty.
The American Revolution and the French Revolution were sparked by new ideals of freedom, equality, and popular sovereignty that were first expressed by the philosophes of the Enlightenment era which came to fuel the cries for independence and revolt that would be heard around the world for the next two hundred years. The lessons on the revolutions follow lessons on the Enlightenment, the Scientific Revolution, and the Anatomy of a Revolution. In the study at hand, the students will be given the opportunity to develop an understanding of the concept “revolution” as they explore the conditions that lead to revolutions in various world nations and compare the course that those revolutions took.

Monday / Tuesday / Wednesday / Thursday / Friday
Anatomy of a Revolution PPT Notes / 20B / 21A: HW #1 DUE, French Revolution Video Questions, Project Assigned / 22B / 23A: HW #2 DUE, French Revolution Video Questions
26B / 27A: HW #3 DUE, Project Workday / 28B / 29A: PROJECT DUE, EXTRA CREDIT DUE, Presentations / 30B
NO SCHOOL – OCT BREAK / 4 / 5 / 6 / 7
10 / 11 / 12 / 13 / 14
HW assignments on the back ------
HW #1 (10 Points) DUE: Wednesday, September 21st
Read Section (pgs. 206-211) in Modern World History.
1) Why might Parliament want to restrict American colonial trade?
2) Why would taxation without representation seem unfair to Enlightenment thinkers?
3) Was the Declaration of Independence justified or was it treason?
4) Why would the states want to avoid a strong national government?
5) The delegates at the Constitutional Convention argued for months. What united and motivated them for so long?
6) Why might it be important to have a Bill of Rights that guarantees basic rights?
7) Summarize in two paragraphs the ideas of the American Revolution concerning separation of powers, basic rights of freedom and popular sovereignty.
Key terms: Use these in your answers.
Declaration of Independence, Thomas Jefferson, checks and balances, federal system, Bill of Rights, Articles of Confederation, Constitution
HW #2 (10 Points) DUE: Friday, September 23rd
Read (pgs. 217-227 in Modern World History
1) What did the clergy do for society that might justify their low tax rate?
2) What group within the 3rd Estate would suffer the most from the increase in the price of bread?
3) Why do you think Louis chose to raise taxes on the nobility?
4) Why did nobles expect each estate to have one vote?
5) What results would show that the National Assembly was a legitimate government?
6) After years of oppression, what finally caused the French people to revolt?
7) What can you infer about the power of Louis from his signing of the 1791 Constitution?
8) In what way was the National Convention that took office in September 1792 more radical than the National Assembly of September 1791?
9) What does the large number of executions among the urban poor and middle class suggest about support for the revolution?
10) What reasons did the members of the National Convention and the public have for opposing the Reign of Terror?
Key terms: Use in your answers.
Old Regime, estates, Louis XVI, Marie Antoinette, Estates-General, National Assembly, Legislative Assembly, émigré, sans-culotte, Jacobin, guillotine, Maximillian Robespierre, Reign of Terror
HW #3 (8 Points) DUE: Tuesday, September 27th
In your textbook read (pgs. 247 – 252) in Modern World History
1) How can people have such different philosophies?
2) How did nationalism blur the line between philosophies?
3) Why did leaders of powerful countries oppose revolution even when not directed at them?
4) How were the revolutions in Italy different from the revolutions in Greece, Belgium, and Poland?
5) How were the actions of the radicals contrary to their philosophy?
6) Was the election of Louis-Napoleon a victory for the radicals?
7) How did Russia’s defeat in the Crimean War push it towards political reform?
8) Were the peasants better off after the serfs were freed?
Key terms: Use these in your answers. Conservative, liberal, radical, nationalism, nation-state, the Balkans, Louis-Napoleon, Alexander II |
How often does a big rock drop on our planet from space? As we've gotten a better understanding of the impact that did in the dinosaurs, that knowledge has compelled people to take a serious look at how we might detect and divert asteroids that pose a similar threat of planetary extinction. But something even a tenth of the size of the dinosaur-killer could cause catastrophic damage, as you could easily determine by placing a 15km circle over your favorite metropolitan center.
So, what's the risk of having a collision of that nature? It's actually hard to say. The most direct approach is to look for past impact craters and try to work out the frequency of these impacts, but the Earth has a habit of erasing the evidence. So, instead, a group of scientists figured out a clever way of looking at the Moon, which should have a similar impact history. They found that the rate of impacts went up about 300 million years ago.
Some impact craters on Earth are pretty obvious, but erosion and infilling with sediments make others much harder to find. We wouldn't have noticed Chicxulub or the Chesapeake Bay Crater were there if we hadn't stumbled across them for other reasons. As we go back in time, plate tectonics can erase evidence of impacts from the sea floor, as the rock they reside in gets subducted back into the mantle. And then, about 550 million years ago, the Great Unconformity wipes off any evidence of impacts that might have been left on land.
So, while we've come up with some rough estimates of impact rates on Earth, we don't have a ton of confidence in them.
But there's a nearby object without all the messy issues of plate tectonics, sediment deposits, and erosion. The Moon obviously preserves a clearer record of its impacts and is close enough that it should have a similar impact history to Earth's. That's in part because whatever it is that knocks an object out of a stable orbit and sends it toward Earth tends to break it up, creating a collection of fragments that gradually find their way to the Earth-Moon system. It may take millions of years for them all to hit, but the hits on both bodies should show a similar increased risk of impacts.
So how do you identify when impacts on the Moon happened? This is where the research gets very clever. The Moon's surface is covered with a carpet of dust called regolith, formed as rocks are broken down by small impacts and charged particles. But a large enough impact will blast away the regolith and spray out chunks of solid rock, essentially resetting the process. Over time, this rock will gradually decay to regolith, and the extent of the decay should be proportional to the time since the impact.
Reading the heat
Unfortunately, we probably can't convince China to send rovers to every crater to figure out how much regolith is there. So the researchers figured out how to do it remotely, using an instrument on the Lunar Reconnaissance Orbiter. It turns out that, during the lunar day, both rock and regolith get heated up by the Sun. Once the Sun sets, though, that heat starts to escape back out into space, where it can be detected as infrared radiation by the Lunar Reconnaissance Orbiter. But it escapes much more quickly from regolith than solid rock, meaning that, in the deep of lunar night, there's still heat escaping from rocky features, while it has mostly gone from regolith-filled ones.
Starting with nine craters with known ages, the researchers confirmed that this is what we see on the Moon and calibrated the timing of the rocks' decay. They then used this scale to estimate the ages of craters more than 10 kilometers across. This, incidentally, indicated that the thermal emissions of a crater drop to the regolith background in about a billion years. So, while we can't get a complete history of the impact rate, we can cover the Great Unconformity, as well as any recent changes.
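To get a feel for how that sort of calibration might work, here is a minimal sketch in Python. The ages and "rockiness" values below are invented for illustration, and the simple exponential decay is an assumption; the actual study uses its own data and a more careful model. The sketch just fits a decay law to craters of known age and inverts it to date another crater from its rock abundance.

```python
import numpy as np

# Hypothetical calibration craters: independently known ages (Myr) and a
# "rockiness" measure derived from nighttime infrared emission.
ages_myr  = np.array([80, 150, 300, 450, 600, 720, 850, 930, 1000])
rockiness = np.array([0.62, 0.45, 0.28, 0.18, 0.11, 0.08, 0.05, 0.04, 0.03])

# Assume rock abundance decays roughly exponentially toward the regolith
# background: R(t) = R0 * exp(-t / tau).  Fit log(R) against t with a line.
slope, intercept = np.polyfit(ages_myr, np.log(rockiness), 1)
tau = -1.0 / slope          # characteristic decay time, in Myr
r0 = np.exp(intercept)      # inferred rockiness of a freshly formed crater

def estimate_age(r):
    """Invert the fitted decay law to date a crater from its rockiness."""
    return -tau * np.log(r / r0)

print(f"decay time ~{tau:.0f} Myr, fresh-crater rockiness ~{r0:.2f}")
print(f"a crater with rockiness 0.20 is roughly {estimate_age(0.20):.0f} Myr old")
```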
One thing that was clear is that the size of things hitting the Moon hasn't changed. The authors found that there was "no correlation between crater sizes and crater ages, meaning differently sized craters are randomly distributed in time." But the rate of impacts did show a shift at about 400 million years ago, at which point the rate roughly doubled. In other words, we face a much higher risk (though still extremely low) of seeing an impact than the trilobites did.
Unfortunately, the uncertainty range was really large. The increase could be anywhere from 1.4x up to 20x, and its timing was similarly broad. So, the researchers turned to the Earth, performing a similar analysis using the age of craters 20km across and above. As expected, the Earth also saw a change in rate, and it was of similar magnitude to the one seen on the Moon. When the two rates are combined to a single measure, the uncertainties go down: the most likely date is about 290 million years ago, and the rate probably went up by 2.6x.
The authors suggest that the increase could be the product of one or more large asteroids in the main belt, which could send smaller bodies out that create a small "wave" of arrivals lasting hundreds of millions of years. If that's the case, then there may only be a small population of bodies left in an orbit that puts Earth at risk of collision.
Again, the risks are small, and doubling a near-zero risk still leaves it at near zero. But the results make it clear that our Solar System is an active place even billions of years after its formation. |
Tutor profile: Allison A.
A ladder is placed 5 feet away from a billboard. If the top of the ladder meets the ledge of the billboard, which is 12 feet tall, how long is the ladder?
The first thing you need to realize is that the figure formed is a right triangle. When you are given two sides of a right triangle, you can always find the third side with the Pythagorean theorem, which is a^2 + b^2 = c^2. In this case, if we draw the situation, we see that the ladder is the hypotenuse, and the hypotenuse is c in the Pythagorean theorem. So all we have to do is plug in the given numbers and do the algebra. The opposite of squaring is taking the square root, so c = sqrt(a^2 + b^2) = sqrt(5^2 + 12^2) = sqrt(169) = 13.
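A couple of lines of Python can verify the arithmetic in the worked answer above:

```python
import math

# Legs of the right triangle: 5 ft from the billboard, 12 ft up the billboard.
a, b = 5, 12
c = math.sqrt(a**2 + b**2)   # Pythagorean theorem: c^2 = a^2 + b^2
print(c)                     # 13.0 -> the ladder is 13 feet long
```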
Balance the following equation: Fe+ H2SO4 ---> Fe2(SO4)3+ H2
The first thing in balancing an equation is knowing that the equation needs the same number of atoms of each element on each side, even if they appear in different compounds. First write down which elements you have on each side: Fe, H, S, O. Once written down, you can use that list as a table to help count each element. Without adding any coefficients, the counts (left vs. right) are: Fe 1 vs. 2, H 2 vs. 2, S 1 vs. 3, O 4 vs. 12. The balanced equation is 2 Fe + 3 H2SO4 ---> Fe2(SO4)3 + 3 H2, which gives Fe 2 vs. 2, H 6 vs. 6, S 3 vs. 3, O 12 vs. 12.
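As a sanity check, the atom tallies for the balanced equation can be compared programmatically; the counts below are entered by hand for this specific equation:

```python
# Atom counts for 2 Fe + 3 H2SO4 ---> Fe2(SO4)3 + 3 H2
left  = {"Fe": 2 * 1, "H": 3 * 2, "S": 3 * 1, "O": 3 * 4}   # reactants
right = {"Fe": 1 * 2, "H": 3 * 2, "S": 1 * 3, "O": 1 * 12}  # products
print(left == right)   # True -> the equation is balanced
```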
Solve for x. (2-x)^2=2x^2+8
4-4x+x^2=2x^2+8, 0=x^2+4x+4, 0=(x+2)(x+2), x=-2. You first need to FOIL out (2-x)^2. You do this by writing (2-x)(2-x) and distributing the first terms, then the outer terms, the inner terms, and then the last terms. Using letters to describe this, (a-b)(c-d) = ac + a(-d) + (-b)c + (-b)(-d). This result is then set equal to the rest of your equation. The next step is to subtract one side from the other so the equation equals zero, and combine like terms, which gives 0 = x^2 + 4x + 4. Now factor by finding two numbers that add to 4 and also multiply to 4; in our case both are 2, so it comes out to 0 = (x+2)^2. We then solve for x by noting that 0 = x + 2, and subtracting 2 from each side gives x = -2.
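The algebra can also be checked with a computer algebra system; for instance, this SymPy snippet (assuming SymPy is installed) solves the same equation:

```python
from sympy import symbols, Eq, solve

x = symbols('x')
print(solve(Eq((2 - x)**2, 2*x**2 + 8), x))   # [-2], a repeated root from (x + 2)^2
```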
Micrometer-scale light emitters that can be incorporated into electronic chips could enable faster computation and communication systems in compact devices. Researchers at KAUST have developed a simple technique for fabricating optically active semiconductors on a metal substrate, showing that the devices work at room temperature and do not overheat.
The success of the modern electronics industry rests on the ability to fabricate thousands of electronic components on a single silicon chip. Optical devices could benefit in the same way from such an integrated-circuit approach, enabling a cheaper platform for optical communications or portable optical sensors. However, silicon is not the ideal material for optical applications, so an alternative material is required.
Chao Zhao, TienKhee Ng, Boon Ooi and colleagues from the Computer, Electrical and Mathematical Science and Engineering Division used the semiconductor gallium nitride to develop a platform for high-power light emission [1].
“Nitride-based materials have been intensively studied for photonics applications such as solid-state lighting and displays because the alloys have direct bandgaps that cover the entire visible spectrum,” explained Zhao.
Gallium nitride-based structures have been created on silicon, sapphire and glass substrates. These materials are flawed, however, because they impair heat flow out of the device, causing a rise in temperature that eventually leads to malfunction.
The KAUST team fabricated gallium-nitride nanowires on a metal substrate instead; this substrate has better thermal properties. They started with a molybdenum substrate on which they laid down titanium and then titanium nitride using a technique called molecular beam epitaxy.
The light-emitting region itself was built up of alternating layers of gallium nitride and indium gallium nitride. This material self-assembled into vertical nanowires with diameters between 40 and 110 nanometers and 300 nanometers long. Each light-emitting diode incorporated many of these nanowires in a cylinder 200 micrometers across.
With better thermal properties than previously studied substrates, molybdenum can also act as the bottom electrical contact required to power the devices. This simplified the construction of the final devices.
The red light emitting diode worked at room temperature and exhibited no signs of overheating. For example, the researchers observed no drop in operation efficiency (known as thermal droop) as the current through the device increased. The emission color from the device also did not shift.
“Our work revolutionizes the semiconductor crystal growth technology and also realizes a practical platform for high-power nanowires light-emitters,” stated Zhao. “This uncovers new applications in high-power optoelectronics, high-speed and power electronics, display technology, energy conversion and green technologies.”
- Zhao, C., Ng, T. K., Wei, N., Prabaswara, A., Alias, M. S. et al. Facile formation of high-quality InGaN/GaN quantum-disks-in-nanowires on bulk-metal substrates for high-power light-emitters. Nano Letters 16, 1056−1063 (2016).
For millions of years, nine species of large, flightless birds known as moas (Dinornithiformes) thrived in New Zealand. Then, about 600 years ago, they abruptly went extinct. Their die-off coincided with the arrival of the first humans on the islands in the late 13th century, and scientists have long wondered what role hunting by Homo sapiens played in the moas’ decline. Did we alone drive the giant birds over the brink, or were they already on their way out thanks to disease and volcanic eruptions? Now, a new genetic study of moa fossils points to humankind as the sole perpetrator of the birds’ extinction. The study adds to an ongoing debate about whether past peoples lived and hunted animals in a sustainable manner or were largely to blame for the extermination of numerous species.
“The paper presents a very convincing case of extinction due to humans,” says Carles Lalueza-Fox, an evolutionary biologist at the Institute of Evolutionary Biology in Barcelona, Spain, who was not involved in the research. “It’s not because of a long, natural decline.”
Scientists have long argued about what caused the extinction of many species of megafauna—giant animals including mammoths, mastodons, and moas—beginning between 9000 and 13,000 years ago, when humans began to spread around the world. Often, the animals disappeared shortly after humans arrived in their habitats, leading some researchers to suggest that we exterminated them by overhunting. But other scientists have pointed to natural causes, including volcanic eruptions, disease, and climate change at the end of last Ice Age, as the key reasons for these species’ demise. The moas present a particularly interesting case, researchers say, because they were the last of the giant species to vanish, and they did so recently, when a changing climate was no longer a factor. But did other natural causes set them on a path to oblivion, as some scientists proposed in a recent paper?
Morten Allentoft, an evolutionary biologist at the University of Copenhagen, doubted this hypothesis. Archaeologists know that the Polynesians who first settled New Zealand ate moas of all ages, as well as the birds’ eggs. With moa species ranging in size from 12 to 250 kilograms, the birds—which had never seen a terrestrial mammal before people arrived—offered sizable meals. “You see heaps and heaps of the birds’ bones in archaeological sites,” Allentoft says. “If you hunt animals at all their life stages, they will never have a chance.”
Using ancient DNA from 281 individual moas from four different species, including Dinornis robustus (at 2 meters, the tallest moa, able to reach foliage 3.6 meters above the ground), and radiocarbon dating, Allentoft and his colleagues set out to determine the moas’ genetic and population history over the last 4000 years. The moa bones were collected from five fossil sites on New Zealand’s South Island, and ranged in age from 12,966 to 602 years old. The researchers analyzed mitochondrial and nuclear DNA from the bones and used it to examine the genetic diversity of the four species.
Usually, extinction events can be seen in a species’ genetic history; as the animals’ numbers dwindle, they lose their genetic diversity. But the team’s analysis failed to find any sign that the moas’ populations were on the verge of collapse. In fact, the scientists report that the opposite was true: The birds’ numbers were stable during the 4000 years prior to their extinction, they report online today in the Proceedings of the National Academy of Sciences. Populations of D. robustus even appear to have been slowly increasing when the Polynesians arrived. No more than 200 years later, the birds had vanished. “There is no trace of” their pending extinction in their genes, Allentoft says. “The moa are there, and then they are gone.”
The paper presents an “impressive amount of evidence” that humans alone drove the moa extinct, says Trevor Worthy, an evolutionary biologist and moa expert at Flinders University in Adelaide, Australia, who was not involved with the research. “The inescapable conclusion is these birds were not senescent, not in the old age of their lineage and about to exit from the world. Rather they were robust, healthy populations when humans encountered and terminated them.” Still, he doubts even Allentoft’s team’s “robust data set” will settle the debate about the role people played in the birds’ extinction, simply because “some have a belief that humans would not have” done such a thing.
As for Allentoft, he is not surprised that the Polynesian settlers killed off the moas; any other group of humans would have done the same, he suspects. “We like to think of indigenous people as living in harmony with nature,” he says. “But this is rarely the case. Humans everywhere will take what they need to survive. That’s how it works.” |
Being confronted with a pack of wolves is bad enough, but if you happened to be in Alaska some 12,000 years ago, things would be much, much worse. Back then, the icy forests were patrolled by a sort of super-wolf. Larger and stronger than the modern gray wolf, this beast had bigger teeth and more powerful jaws, built to kill very large prey.
This uber-wolf was discovered by Jennifer Leonard and colleagues from the University of California, Los Angeles. The group were studying the remains of ancient gray wolves, frozen in permafrost in eastern Beringia, a region that includes Alaska and northwest Canada. These freezer-like conditions preserved the bodies very well, and the team found themselves in a unique position. They could not only analyse the bones of an extinct species, but they could extract DNA from said bones, and study its genes too.
For their first surprise, they found that these ancient wolves were genetically distinct from modern ones. They analysed mitochondrial DNA from 20 ancient wolves and none of them was a match for over 400 modern individuals. Today’s wolves are clearly not descendants of these prehistoric ones, which must have died out completely. The two groups shared a common ancestor, but lie on two separate and diverging branches on the evolutionary tree.
The genes were not the only differences that Leonard found. When she analysed the skulls of the Beringian wolves, she found that their heads were shorter and broader. Their jaws were deeper than usual and were filled with very large carnassials, the large meat-shearing teeth that characterise dogs, cats and other carnivores (the group, not meat-eaters in general).
This was the skull of a hypercarnivore, adapted to eat only meat and to kill prey much larger than itself using bites of tremendous force. Leonard even suggests that the mighty mammoths may have been on their menu.
Once prey was dismembered, the wolves would have left no bones to waste. With its large jaws, it could crush the bones of recent kills, or scavenge in times between hunts. Today, spotted hyenas lead a similar lifestyle. The wolves’ teeth also suggest that bone-crushing was par for the course. The teeth of almost all the specimens showed significant wear and tear, and fractures were very common.
Their powerful jaws allowed the Beringian wolves to quickly gobble down carcasses, bones and all, before having to fend off the competition. And back then, the competition included many other fearsome and powerful hunters, including the American lion and the short-faced bear, the largest bear to have ever lived.
Leonard suggests that the ancestor of today’s gray wolf reached the New World by crossing the Bering land bridge from Asia to Alaska. There, it found a role as a middle-sized hunter, sandwiched between a smaller species, the coyote, and a larger one, the dire wolf. When the large dire wolves died out, the gray wolf split into two groups. One filled the evolutionary gap left behind by the large predators by evolving stronger skulls and teeth. The other carried on in the ‘slender and fast’ mold.
But in evolution, the price of specialisation is vulnerability to extinction. When its large prey animals vanished in the Ice Age, so too did the large bone-crushing gray wolf. Its smaller and more generalised cousin, with its more varied diet, lived to hunt another day.
Similar things happened in other groups of meat-eaters. The American lion and sabre-toothed cats went extinct, but the more adaptable puma and bobcat lived on. The massive short-faced bear disappeared, while the smaller and more opportunistic brown and black bears survived. Leonard’s findings suggest that the casualties of the last Ice Age extinction were more numerous than previously thought. What other predators still remain to be found in the permafrost?
Reference: Leonard, Vila, Fox-Dobbs, Koch. Wayne & van Valkenburgh. 2007. Megafaunal extinctions and the disappearance of a specialized wolf ecomorph. Curr Biol doi:10.1016/j.cub.2007.05.072
More on extinct genes and proteins:
- Dinosaur proteins, cells and blood vessels recovered from Bracyhlophosaurus
- Sequencing a mammoth genome |
1. Types of the disease
2. Development and causes of uveitis
3. Symptoms of uveitis
4. Diagnosis and conservative treatment of uveitis
5. Radical treatments
Uveitis is the common name for inflammatory processes in different parts of the vascular membrane of the eye (the iris, ciliary body, and choroid).
Among inflammatory diseases of the eye, uveitis (inflammation of the uveal tract) accounts for about 30-57% of cases, and 25-30% of cases of uveitis may lead to visual impairment and even blindness.
This article describes the causes of uveitis, the symptoms of the disease, and the methods of conservative and radical treatment.
The vascular (uveal) coat includes the iris, the ciliary body, and the choroid (which lies under the retina).
Depending on the predominant location of the inflammatory process, the following forms of uveitis are distinguished:
The high prevalence of uveitis is promoted by the fact that the eye's vascular system is highly branched while blood flow within it is rather slow. This leads to an accumulation of various microorganisms in the choroid, causing inflammation. Another feature of the vascular tract is the separate blood supply of its anterior section (iris and ciliary body) and posterior section (choroid): the anterior part is supplied by the anterior and long posterior ciliary arteries, and the posterior part by the short posterior ciliary arteries. As a result, the anterior and posterior sections of the vascular tract are usually affected separately, which contributes to the high incidence of uveitis.
Experts name many causes of uveitis, including the following.
1. Infections. The most common cause of uveitis (about 44% of cases). The causative agents of inflammation include streptococci, Mycobacterium tuberculosis, Treponema pallidum, Toxoplasma, herpes simplex virus, cytomegalovirus, and fungi. Infectious uveitis develops when an infection penetrates the bloodstream; it often occurs with sepsis, dental caries, tonsillitis, sinusitis, viral diseases, syphilis, and tuberculosis.
2. Allergic reactions. Uveitis of allergic etiology is caused by sensitivity to environmental factors: hay fever, food and drug allergies. Sometimes this form of the disease develops after the administration of certain vaccines and serums.
3. Syndromic and systemic diseases. Uveitis often appears in rheumatism, rheumatoid arthritis, psoriasis, spondylitis, sarcoidosis, multiple sclerosis, Reiter's syndrome, ulcerative colitis, and other diseases.
4. Injuries. Uveitis can be caused by eye burns, foreign bodies entering the eye, and contusion or penetrating damage to the eyeball.
5. Hormonal dysfunction and metabolic disorders. Many disease states can cause uveitis; most often these are diabetes, menopause in women, eye diseases, diseases of the blood system, and other pathologies.
Symptoms of uveitis depend on the localization of the inflammatory process, the severity of the disease, and the general state of the organism.
Anterior uveitis manifests as acute irritation, redness and pain of the eyeball, constriction of the pupil, photophobia, lacrimation, and deterioration of visual acuity. Intraocular pressure is often increased. The symptoms of the chronic form of uveitis are less pronounced; most often there is only slight redness of the eyes and flickering dots before the eyes.
In addition, a symptom of anterior uveitis is the formation of corneal precipitates: clusters of lymphocytes, plasma cells, macrophages, and pigment "dust" floating in the aqueous humor of the anterior chamber.
Symptoms of peripheral uveitis include a reduction of central vision and the appearance of haze before the eyes. This form of the disease typically involves both eyes.
Posterior uveitis has its own symptoms. Typically, the patient notes a decrease in visual acuity, "floating" spots in the field of vision, distortion of objects, and blurred vision. In some cases there is macular ischemia (a severe disturbance of blood supply to the central part of the retina), macular edema (swelling of the central part of the retina), retinal detachment, and other complications.
The most severe course is seen in iridocyclochorioiditis, inflammation of the entire vascular tract of the eye. This form of uveitis usually develops in sepsis and is often accompanied by panophthalmitis (purulent inflammation of the eyeball) and endophthalmitis (purulent inflammation of the internal tissues of the eyeball).
Diagnosis of uveitis includes an external eye examination by an ophthalmologist, testing of visual acuity and pupillary reactions, and determination of the boundaries of the visual fields. In addition, the physician measures intraocular pressure.
Using special equipment, the specialist performs biomicroscopy (microscopy of the ocular tissues with a slit lamp) and gonioscopy (examination of the anterior part of the eye).
For an accurate diagnosis of uveitis, laboratory methods are usually used as well, such as the RPR test and the determination of antibodies to major infections.
Comprehensive treatment of uveitis consists of conservative therapy, surgery, and physiotherapy.
In conservative treatment of uveitis, broad-spectrum antibacterial drugs are used most often; in the chronic course of the disease, antibiotics are given in periodic courses. In addition, the patient is prescribed corticosteroids (anti-inflammatory hormones). If corticosteroids cannot be used or prove ineffective, immunosuppressants (medications that suppress the immune response) are used.
Surgical treatment of uveitis is used in severe disease or when complications develop. Surgery usually involves treating vitreous opacities and cutting anterior or posterior adhesions of the iris (adhesions of the iris to the lens surface).
The physiotherapy techniques most frequently used to treat uveitis are laser irradiation of the blood and ultraviolet blood irradiation; these procedures significantly increase the bactericidal activity of the patient's blood. In treating the chorioretinal form of uveitis, laser coagulation is often used.
This article is provided exclusively for educational purposes and is not research material or professional medical advice.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
October 5, 1996
Explanation: 1500 light years away lies a nebula of quite peculiar shape. How did the dark dust cloud shown above come to be shaped like a horse's head? Nobody knows! Barnard 33, as this region is known to some, is surely a dark dust cloud absorbing the light from the bright red emission nebula behind it. The Horsehead Nebula is also thought to be a region where low-mass stars form. But the reason for gross shapes in the universe is frequently poorly understood. Perhaps there is no simple explanation in this case. Some stars are thought to be efficient creators of dust, while others are much better at destroying it. The Horsehead Nebula's dust distribution might just be the result of a specific irregular distribution of stars and gas in its vicinity.
Authors & editors:
NASA Technical Rep.: Sherri Calvo. Specific rights apply.
A service of: LHEA at NASA/ GSFC |
In 1906, British New Guinea became Papua, and administration of the region was taken over by newly independent Australia. With the outbreak of WWI, Australian troops promptly secured the German headquarters at Rabaul, subsequently taking control of German New Guinea. In 1920, the League of Nations officially handed it over to Australia as a mandated territory. During WWII the northern islands and most of the northern coast fell to the Japanese, who advanced southward until stalled by Allied forces.

By 1945 the mainland and Bougainville had been recaptured, but the Japanese positions in New Ireland, and especially Rabaul in New Britain, where they dug 500km of tunnels, proved impregnable. They surrendered these strongholds at the end of the war. Post-war, the eastern half of New Guinea reverted to Australia and became the Territory of Papua & New Guinea. Indonesia took control of Dutch New Guinea in 1963 (incorporating it into the Indonesian state as Irian Jaya). PNG was granted self-government in 1973, and full independence was achieved in 1975.

Papua New Guinea's most immediate concern after independence was its relations with its powerful neighbour Indonesia. After Indonesia's takeover of Irian Jaya, many West Papuans organised a guerrilla resistance movement - the Organisasi Papua Merdeka (OPM) - which fought Indonesian forces with limited success. Tensions decreased markedly after 1985, as the flow of refugees (estimated at over 10,000) between Irian Jaya (now called West Papua) and PNG slowed. There are still 7500 West Papuan refugees living in camps in Western Province - the largest expatriate group in the country.
English civil war
The Long Parliament
The disasters of the second Scottish war compelled a virtual surrender by the king to the opposition, and the Long Parliament was summoned (Nov., 1640). The parliamentarians quickly enacted a series of measures designed to sweep away what they regarded as the encroachments of despotic monarchy. Those imprisoned by the Star Chamber were freed. A Triennial Act provided that no more than three years should elapse between sessions of Parliament, while another act prohibited the dissolution of Parliament without its own consent. Ship money and tonnage and poundage without parliamentary authorization were abolished. Strafford was impeached, then attainted and executed (1641) for treason; Laud was impeached and imprisoned. Star Chamber and other prerogative and episcopal courts were swept away. However, discussions on church reform along Puritan lines produced considerable disagreement, especially between the Commons and Lords.
Despite the king's compliance to the will of the opposition thus far, he was not trusted by the parliamentary party. This distrust was given sharp focus by the outbreak (Oct., 1641) of a rebellion against English rule in Ireland; an army was needed to suppress the rebellion, but the parliamentarians feared that the king might use it against them. Led by John Pym, Parliament adopted the Grand Remonstrance, reciting the evils of Charles's reign and demanding church reform and parliamentary control over the army and over the appointment of royal ministers. The radicalism of these demands split the parliamentary party and drove many of the moderates to the royalist side. This encouraged Charles to assert himself, and in Jan., 1642, he attempted to arrest in person Pym and four other leaders of the opposition in Commons. His action made civil war inevitable.
In the lull that followed, both Parliament and the king sought to secure fortresses, arsenals, and popular support. In June, 1642, Parliament sent to the king a statement reiterating the demands of the Grand Remonstrance, but since the proposals amounted to a complete surrender of sovereignty by the crown to Parliament, the king did not even consider them as a basis for discussion. Armed forces (including many peers from the House of Lords and a sizable minority of Commons) gathered about him in the north. Parliament organized its own army and appointed Robert Devereux, 3d earl of Essex, to head it. On Aug. 22, 1642, Charles raised his standard at Nottingham.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Learning Strategy: Action Songs Vs Vocal Songs
Music and rhyme have a very powerful way of influencing learners. Anyone can learn any language, or even remember boring numbers and nonsensical terms, through music and song. For someone who wants to learn through song and lyrics, there are different ways to learn a song: writing it out, listening, singing along, and so on. To learn effectively, the learner should use as many senses as possible. The same concept is used when teaching children the basics of language and life lessons.
Pre-schoolers and toddlers find bright colours and songs very inviting when they learn. Children’s songs are also easy to learn and memorise because of the actions and dance moves they have. The same logic applies to adults who wish to learn English. Singing along to songs that have actions in them helps the learner to understand the words through actions. This helps in retaining the information as well.
Songs that don’t have actions in them can be memorised if the learner comes up with his own actions for the song. Translating the song into the native language of the learner, before coming up with actions for it will be helpful. This way the learner knows exactly what the words mean, and they can retain the information better as well.
Learning English through songs that are only sung is also effective; however, the chances of retention are better with action songs. If you do not wish to use an action song, you will need to write out and memorise the words and remember their usage, and the chances of remembering the words are lower in this case. Secondly, vocal-only songs are not very entertaining and do not involve the use of as many senses in learning English.
Think of the music videos that you see on TV. The popular song Gangnam Style is in a language that most of the world does not understand. However, because of the unique dancing in the song, even children can say the words of the song. So, unless there is a unique factor in the song that will help the learner, chances of retention are low.
People who learn English from action songs also pay attention to the song better. When you see a video of a person dancing and using their hands and feet to communicate the meaning of the song, it makes more sense and has appeal. Apart from learning English, learners also find that they can develop some flexibility and add an element of fun, when they learn through action songs.
Repetitive learning through action songs ensures that the concepts of the words in the lyrics are reinforced. The learner can retain more information because of the repetitive nature of the lyrics. Learners are also more confident when they learn through action songs. They are confident in their movements because they know what the words mean, and they are able to communicate at least partially when they converse with others.
Ophthalmic laboratory technicians -- also known as manufacturing opticians, optical mechanics, or optical goods workers -- make prescription eyeglass or contact lenses. Prescription lenses are curved in such a way that light is correctly focused onto the retina of the patient's eye, improving his or her vision. Some ophthalmic laboratory technicians manufacture lenses for other optical instruments, such as telescopes and binoculars.

Ophthalmic laboratory technicians cut, grind, edge, and finish lenses according to specifications provided by dispensing opticians, optometrists, or ophthalmologists and may insert lenses into frames to produce finished glasses. Although some lenses still are produced by hand, technicians are increasingly using automated equipment to make lenses.

Ophthalmic laboratory technicians should not be confused with workers in other vision care occupations. Optometrists are "eye doctors" who examine eyes, diagnose and treat vision problems, and prescribe corrective lenses. Ophthalmologists are physicians who also perform eye surgery. Dispensing opticians, who also may do the work of ophthalmic laboratory technicians, help patients select frames and lenses, and adjust finished eyeglasses.

Ophthalmic laboratory technicians read prescription specifications, select standard glass or plastic lens blanks, and then mark them to indicate where the curves specified on the prescription should be ground. They place the lens in the lens grinder, set the dials for the prescribed curvature, and start the machine. After a minute or so, the lens is ready to be "finished" by a machine that rotates it against a fine abrasive, to grind it and smooth out rough edges. The lens is then placed in a polishing machine with an even finer abrasive, to polish it to a smooth, bright finish.

Next, the technician examines the lens through a lensometer, an instrument similar in shape to a microscope, to make sure that the degree and placement of the curve are correct. The technician then cuts the lenses and bevels the edges to fit the frame, dips each lens into dye if the prescription calls for tinted or coated lenses, polishes the edges, and assembles the lenses and frame parts into a finished pair of glasses.

In small laboratories, technicians usually handle every phase of the operation. In large ones, in which virtually every phase of the operation is automated, technicians may be responsible for operating computerized equipment. Technicians also inspect the final product for quality and accuracy.

Note: Some resources in this section are provided by the US Department of Labor, Bureau of Labor Statistics.
Britain and the Beginning of Scotland
British Academy: Sir John Rhŷs Lecture, 5 December (2013)
Until recently it was generally held that Scotland first began to take shape with a union of Picts and Scots under Cinaed mac Ailpín, who died in 858. For example, Edward James in his Britain in the First Millennium, published in 2001, referred to how ‘a king of Dál Riata, Cinaed mac Ailpín (Kenneth mac Alpine), definitively united the Picts and the Scots into a new kingdom’, so that ‘in the middle of the ninth century the kingdom of Scotland is unified, under Cinaed mac Ailpín (840/2–858), a Gaelic rather than a Pictish king’.
Cinaed was the common ancestor in the male line of kings of Scots from around 890 until 1034. This alone could explain how he came to be regarded in the tenth century as one of the kingdom’s founding figures. If so, he would only have gained this status retrospectively. Be this as it may, there is no longer a consensus about his role, or about whether he was a Gael or a Pict. Some have abandoned the notion of Cinaed as founder but have still retained the idea that a new, united kingdom emerged in the end of the ninth century—‘a homologated kingship of Picts and Scots’, to quote Archie Duncan in 2002. |
Ordering Decimals Worksheets
This webpage encompasses a combination of worksheets based on ordering decimals, with a view to enhancing a student's knowledge of decimals and their place values. The worksheets are stacked with a variety of exercises, including ordering decimals in place value boxes, using the number line, and using the greater than and less than symbols. Riddle worksheets require you to order decimals to decode riddles that are sure to tickle your funny bone! Click on the various download options to access the entire gamut of worksheets under this topic.
This assortment of 70+ worksheets consists of captivating exercises and activities on comparing decimals using greater than, lesser than and equal to symbols.
- Comparing Decimals Worksheets (75 Worksheets)
Ordering decimals: Place value boxes
Keenly observe each set of decimals and fill them in the correct place value boxes provided. Order the decimals from the least to the greatest and vice versa.
Ordering decimals: Greater than and less than symbols
Decimal numbers are given in random order. Set them in the correct order in accordance with the greater than and less than symbols provided. There are seven problems in each worksheet.
Ordering decimals: Standard
Order each set of decimals in either increasing or decreasing order. Levels 1, 2 and 3 contain decimals up to hundredths, thousandths, and ten thousandths respectively.
Ordering decimals: Riddles
Read each decimal number displayed on these vivid theme-based worksheets. Order them in either increasing or decreasing order to decode the rib-tickling riddles!
Ordering decimals: MCQs
Identify the correct sequence of decimals in either increasing or decreasing order with this set of MCQs. This activity forms a perfect tool in evaluating a child's analytical and logical skills.
Ordering decimals: Number lines
Read the number line. Arrange each set of decimals in either increasing or decreasing order as specified. Rule: Decimals to the right of the number line will always be greater than the decimals to the left of it. |
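For teachers checking answer keys, a couple of lines of Python will order any set of decimals either way; the numbers here are just an example:

```python
decimals = [0.42, 0.045, 0.4, 0.405]

print(sorted(decimals))                 # increasing: [0.045, 0.4, 0.405, 0.42]
print(sorted(decimals, reverse=True))   # decreasing: [0.42, 0.405, 0.4, 0.045]
```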
This chapter is an introduction to how computer simulation can be used in business to improve the decisions that business managers make. In the scenarios that this chapter addresses, improved decisions result in increased profit, reduced cost, and better service to customers. The chapter first describes basic business concepts such as profit, customer service, capacity, and demand. Understanding these terms is important because one purpose that simulation can serve is to help managers make decisions about how much investment should be made in the capacity to make goods and provide services in response to customer demand, so that profit and customer service are improved. A familiar setting is used as an example in this chapter, specifically, a fast food restaurant. The lessons in this chapter lead to a description of how a computer simulation model might be created to mimic the operations of the restaurant to help a manager understand how different decisions can lead to different levels of profit and customer service. Essential background information and skills are provided throughout the chapter, such as the concept of probability distributions and Excel spreadsheet functions that are required to create the simulation model.
This chapter seeks to:
- Introduce business terminology that is generally applicable to all businesses.
- Define what demand, capacity, inventory, and customer service mean in business.
- Provide a basic understanding of variation and uncertainty.
- Describe how variation and uncertainty make business decisions more difficult.
- Explain how computer simulation can take uncertainty into account to enable better decisions.
- Provide a tutorial on how to build a simple spreadsheet simulation in Microsoft Excel.
As such, this chapter introduces simulation to students in the hope that they will become excited about the topic and decide to study it further, along with related topics such as probability and statistics. Those who find this subject matter interesting might want to consider jobs in the field of industrial engineering or in businesses where the kind of analysis that is described in this chapter is studied and used. The chapter also suggests that, while being a business manager is not usually considered to be a scientific job, scientific methods like simulation can be used in businesses to improve profit. |
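Although the chapter itself builds its model in an Excel spreadsheet, the underlying idea can be sketched in a few lines of Python. All of the numbers below (profit per customer, hourly wage, service rate, and the demand distribution) are invented for illustration and are not taken from the chapter; the point is only to show how uncertain demand and a capacity decision combine into an average profit.

```python
import random

PROFIT_PER_CUSTOMER = 2.50        # dollars of profit per customer served (assumed)
COST_PER_SERVER_HR = 12.00        # hourly wage per server (assumed)
CUSTOMERS_PER_SERVER = 30         # customers one server can handle per hour (assumed)
MEAN_DEMAND, SD_DEMAND = 100, 20  # uncertain hourly demand (assumed)

def average_hourly_profit(n_servers, trials=10_000):
    """Estimate average profit per hour for a given staffing level."""
    total = 0.0
    for _ in range(trials):
        demand = random.gauss(MEAN_DEMAND, SD_DEMAND)   # random demand this hour
        capacity = n_servers * CUSTOMERS_PER_SERVER     # the capacity decision
        served = min(max(demand, 0), capacity)          # can't serve beyond capacity
        total += served * PROFIT_PER_CUSTOMER - n_servers * COST_PER_SERVER_HR
    return total / trials

for servers in range(2, 7):
    print(servers, "servers -> average profit per hour:",
          round(average_hourly_profit(servers), 2))
```

In a spreadsheet, the same logic is typically expressed by letting a random-number function feed a demand cell, computing served customers and profit with ordinary formulas, and repeating the calculation over many rows to average out the uncertainty.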
Probably the grandest naval battle of World War II took place in 1941 and ended with the sinking of the German battleship Bismarck. She was a massive ship, about 17 stories tall and 300 yards (275 meters) long, and was one of the fastest (30 knots) and most heavily armored warships of her time. On her maiden voyage, she ended up in an epic, eight-day cat-and-mouse chase across the Atlantic involving two German ships and at least six British ones.
In the first major confrontation at Denmark Strait, the Bismarck and her sister ship destroyed the British ship HMS Hood (the largest battle cruiser in the world at the time). Their 203-mm shells struck the Hood and blew up the explosives on deck, and their 380-mm shells blew right through the deck and reached the ammunition below. The Hood was a total loss, and almost 1,400 sailors died. Also at Denmark Strait, the German ships crippled the HMS Prince of Wales, but the Prince of Wales hit Bismarck a few times, too, which would haunt the German ship and ultimately be her downfall. The Prince of Wales struck her engine room, taking out two boilers, and ruptured the fuel tanks in the bow. The Bismarck no longer had enough fuel to get back to Germany, and her top speed had dropped to 28 knots. Bismarck's Admiral Lutjens changed course and headed to the nearest German-occupied French port.
On her way to safe harbor for repairs, the Bismarck took air fire from British torpedo planes, but none of the torpedoes got through her armor. After some evasive maneuvering, Bismarck escaped the British ships, and Admiral Lutjens then took the chance of sending a message to Berlin about the battle and her status. But the message was too long -- almost 30 minutes. British ships picked it up and pinpointed Bismarck's location. They resumed pursuit and caught up with her off the coast of Ireland.
Bismarck took heavy fire from both air and sea in what would turn out to be her final battle. In total, she took 400 hits from British battleships and at least 12 hits by torpedo planes. She was crippled, but she still wasn't sinking. It was the German sailors onboard who sunk the ship when it was clear they could no longer fight or escape. Bismarck sunk to the bottom of the Atlantic with most of her crew, including Admiral Lutjens. Of the 2,200 German sailors onboard, only 115 survived the wreck. In 1989, a crew led by Dr. Robert Ballard, who also discovered the wreck of the Titanic, found the Bismarck under 15,000 feet of water just south of Cork, Ireland. |
Liaoning province in northeast China is famous for having some of the best preserved fossil dinosaurs in the world. It was a series of discoveries in Liaoning of feathered dinosaurs that cemented the idea that birds evolved from theropod dinosaurs of the course of millions of years. The newest find in Liaoning isn’t just a feathered dinosaur — it’s a feathered dinosaur with four wings.
The animal, dubbed Changyuraptor yangi, doesn’t have extra limbs. There are still only four, but all four of them appear to have evolved into wings with long flight feathers. The length of the flight feathers is actually remarkable as well. No other feathered dinosaur yet discovered had flight feathers anywhere near as long. The feathers were nearly a foot long, and the creature itself was only about four feet in length.
Changyuraptor yangi would have been able to glide a considerable distance with its quad-wing design, but the key to flying for this dino was the long fan-like tail. Studded with feathers, the tail would have been used to steady the animal in flight and decrease its speed when landing. It would essentially have to slow down with the tail and pitch its nose up, landing much like a plane does. There is still debate on whether it was only a glider, or if it could flap its wings to take off at will. Lead researcher Luis Chiappe of the Natural History Museum of Los Angeles has his money on flapping because he believes the animal wasn’t built to climb trees.
This discovery is an interesting illustration of evolution in action. Flight as we know it now was still developing 125 million years ago, and there were bound to be things that didn’t work out. The birds of today have two wings, but if Changyuraptor yangi had beaten the evolutionary odds, maybe four wings would be the dominant arrangement. |
The tree-like structures in this scanning electron microscope image of a cross section of a butterfly wing are on the undersides of the Morpho's wing scale ridges. These microribs reflect light to create iridescent colors.
This series of museum labels is designed for general use in your museum or institution to highlight existing connections to nanoscale science, engineering, or technology. NISE Net partners are already coming up with creative ways to use these labels to showcase nano. For example, you can make a scavenger hunt or special tour to encourage visitors to find all the connections! Additional templates (.doc and .indd) are also provided so that you can create your own signage and content.
Mr. O talks about iridescence and Blue Morpho butterflies in another "O Wow" moment at the Children's Museum of Houston.
These 'Do It Yourself' Nano activities and experiments allow families to experience and learn about nanoscale science, engineering, and technology at home or on the go! They are are designed to be done in the comfort of your own home. Each activity includes lists of widely available, inexpensive materials, step-by-step instructions, and detailed explanations. Go ahead, give 'em a try!
"Exploring Structures - Butterfly" is a hands-on activity in which visitors investigate how some butterfly wings get their color. They learn that some wings get their color from the nanoscale structures on the wings instead of pigments.
Visitors will engage in activities showing various natural phenomena that scientists and engineers have emulated to address human problems. Visitors view peacock feathers at different angles to see iridescence, apply drops of water to observe the color changes, and look at other examples of iridescence in nature, such as a blue Morpho butterfly, tropical beetle wings, and abalone shells. Visitors also explore the Lotus Effect by applying drops of water onto Lotusan paint and stain resistant fabrics, two technologies that mimic the Lotus effect. |
Auroral mystery solved: Sudden bursts caused by swirling charged particles
Japan — Auroras are dimly present throughout the night in polar regions, but sometimes these lights explode in brightness. Now Japanese scientists have unlocked the mystery behind this spectacle, known as auroral breakup.
For years, scientists have contemplated what triggers the formation of auroral substorms and the sudden bursts of brightness. Appearing in the Journal of Geophysical Research, the current study overthrows existing theories about the mechanism behind this phenomenon.
The Kyoto-Kyushu research team has revealed that hot charged particles, or plasmas, gather in near-Earth space — just above the upper atmosphere of the polar region — when magnetic field lines reconnect in space. This makes the plasma rotate, creating a sudden electrical current above the polar regions. Furthermore, an electric current overflows near the bright aurora in the upper atmosphere, making the plasma rotate and discharge the extra electricity. This gives rise to the "surge", the very bright sparks of light that characterize substorms.
"This isn't like anything that us space physicists had in mind," said study author Yusuke Ebihara of Kyoto University.
Ebihara based the study on a supercomputer simulation program developed by Takashi Tanaka, professor emeritus at Kyushu University.
Auroras originate from plasma from the sun, known as the solar wind. In the 1970s, scientists discovered that when this plasma approaches the Earth together with magnetic fields, it triggers a change in the Earth's magnetic field lines on the dayside, and then on the night side. This information alone couldn't explain how the fluttering lights emerge in the sky, however.
Scientists had come up with theories for separate parts of the process. Some suggested that acceleration of plasma from the reconnection of magnetic field lines caused auroral breakup. Others argued that the electrical current running near the Earth diverts a part of the electrical current into the ionosphere for some unknown reason, triggering the bright bursts of light. This theory was widely accepted because it offered an explanation for why upward-flowing currents emerged out of our planet. But the pieces of the puzzle didn't quite fit well together.
Tanaka's supercomputer simulation program, on the other hand, offers a logical explanation from start to finish.
"Previous theories tried to explain individual mechanisms like the reconnection of the magnetic field lines and the diversion of electrical currents, but there were contradictions when trying to explain the phenomena in its entirety," said Ebihara. "What we needed all along was to look at the bigger picture."
The current paper builds on earlier work by Ebihara and Tanaka about how the bursts emerge; it explores the succeeding processes, namely how they expand into a large-scale breakup.
The research also has the potential to alleviate hazardous problems associated with auroral breakups that can seriously disrupt satellites and power grids.
The paper "Substorm simulation: Formation of westward traveling surge" will appear 21 December 2015 in the Journal of Geophysical Research, with doi: 10.1002/2015JA021697
Kyoto University is one of Japan and Asia's premier research institutions, founded in 1897 and responsible for producing numerous Nobel laureates and winners of other prestigious international prizes. A broad curriculum across the arts and sciences at both undergraduate and graduate levels is complemented by numerous research centers, as well as facilities and offices around Japan and the world. For more information please see: http://www.kyoto-u.ac.jp/en
Kyushu University is the premier research university in west Japan, a region rich in cultural and economic exchange with East Asia. The university has been engaged since 1911 in education, research, and medical activities at the highest levels since its establishment as the fourth imperial university. By responding to changing times, Kyushu University has helped nurture numerous outstanding individuals, and excels in areas such as engineering, chemistry, medicine, and energy. |
Solving Formulas - Concept
Solving formulas for a variable is a critical skill in the Geometry area unit because many problems will give the area of a polygon and ask for a side, height, or some other dimension. In these cases, simply substituting and typing into a calculator will not yield the correct answer. The successful Geometry student must be capable of substituting into a formula and then solving formulas for the one remaining variable.
When you are in the area unit, you have to be adept at solving formulas. So what is a formula? Well, a formula is an equation where you have multiple variables.
So this equation says the area of a rectangle is equal to its base times its height. So you have to be able to look at this equation and solve for different variables. So I'm going to solve this once for b and once for h just to show you different ways of manipulating a formula. Right now it's isolated for a.
So if I'm trying to solve for b, first thing I ask myself is what is happening to that variable that I'm solving for. Am I adding something to it? Am I dividing something? Well, it looks like I'm multiplying it by h. Remember when you have two variables next to each other, the implied operation is multiplication. So how do you undo multiplication?
Well, the opposite of multiplying is dividing, so I divide by h. Since I'm trying to isolate b, and because we have an equal sign, I have to do the same operation to both sides, so I divide the other side by h as well. h divided by h is 1, so I'm going to write a big 1 there. So we just have b on one side, and on the other side a and h don't divide evenly, so it's just going to be a divided by h. Notice that we have isolated b in one step.
Same equation. Let's say we're solving for h. So, if I look at h, it's being multiplied by b. So I'm going to undo multiplying by dividing. So b divided by b is 1, so we find that h is a divided by b.
If we move on to something that has division and multiplication, the triangle formula. Let's say we want to solve this for b. So if I'm looking at b, I'm being multiplied by h and I'm being divided by 2.
So you could do this in whatever order that you want. I think it's easiest to start off by eliminating your fraction. So I'm going to multiply both sides by 2. Since we're dividing by 2, I said to myself that the opposite dividing is multiplying.
So we have 2 times a is equal to b times h. I have not finished solving for b because I have to divide by h. So base is equal to 2 times the area divided by the height. Okay?
Same equation, let's say I want to solve it for h. Well, I'm going to do the same first step which is multiplying by 2 thereby eliminating our fraction. So we have 2a equals b times h, and since I'm solving now for h, I'm going to get rid of the b. So I'm going to divide both sides by b and h is equal to 2 times a divided by b.
The key to solving formulas is always identifying your variable and then asking yourself what is being done to that variable so you can undo it.
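If you want to double-check a rearrangement like the ones above, a computer algebra system can do the isolation for you. Here is a minimal sketch using Python's sympy library (the library choice is just an assumption for illustration), with the same symbols a, b, and h used in the lesson:

```python
# A minimal sketch using sympy to verify the rearrangements above.
# Assumes sympy is installed (pip install sympy).
from sympy import symbols, Eq, solve, Rational

a, b, h = symbols("a b h", positive=True)

# Rectangle: a = b * h
rectangle = Eq(a, b * h)
print(solve(rectangle, b))   # [a/h]   -> b = a / h
print(solve(rectangle, h))   # [a/b]   -> h = a / b

# Triangle: a = (1/2) * b * h
triangle = Eq(a, Rational(1, 2) * b * h)
print(solve(triangle, b))    # [2*a/h] -> b = 2a / h
print(solve(triangle, h))    # [2*a/b] -> h = 2a / b
```

Each call isolates one variable in a single step, mirroring the "undo what is being done" approach described above.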
A number of 50 digits has all its digits equal to 1 except the 26th digit. If the number is divisible by 13, then find the digit in the 26th place.
Using the following approach, I work out what must be added in the 26th place of a number made entirely of 1s to make it divisible by 13:
A 50-digit integer, N, with all digits equal to 1, can be obtained as follows:
N = (10^50)/9 - 1/9
This can be converted to a 50-digit integer, M, that differs only in the 26th position (counting from the right), by addition:

M = N + x*10^25

where x is chosen to make M divisible by 13. Since the digit already in that position is 1, the resulting digit is 1 + x.
If we divide (10^50)/9 by 13 we get a repeating pattern of the six digits 854700, beginning in the 48th digit to the left of the decimal. This means that the decimal part of the resulting number is 0.854700854700...
If we divide 1/9 by 13 we get this same repeating pattern as follows: 0.008547008547...
Taking the difference (i.e., calculating N/13), we get for the decimal part: 0.8461538462, with the last decimal rounded.
We want an x such that (x*10^25)/13 will have a decimal part that, when added to N/13, gives all zeros to the right of the decimal (i.e., makes M an integer). The integer 8 meets this condition, because 8/13 gives a repeating pattern of the six digits 615384. Calculating (8*10^25)/13 leads to the following decimal portion: 0.1538461538 to 10 decimal places. The leading 6 in the series ends up to the left of the decimal. Now we add the decimal portions of N/13 and (8*10^25)/13:
0.8461538462 + 0.1538461538 = 1.0000000000
Thus, M is divisible by 13 when x = 8, which means the digit in the 26th place, counted from the right, is 1 + 8 = 9. (If the 26th place is instead counted from the left, it sits in the 10^24 position; since 10^24 leaves a remainder of 1 on division by 13, the same reasoning gives a digit of 3.)
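As a sanity check (not part of the original solution), the result is easy to confirm by brute force; the short Python sketch below tries every possible digit in the 26th place under both counting conventions:

```python
# Brute-force check: which digit in the 26th place makes the number divisible by 13?
repunit = int("1" * 50)  # the 50-digit number with all digits equal to 1

for label, power in [("26th digit from the right", 25),
                     ("26th digit from the left", 50 - 26)]:
    for d in range(10):
        # Replace the existing 1 in that place with the digit d.
        candidate = repunit + (d - 1) * 10 ** power
        if candidate % 13 == 0:
            print(f"{label}: {d}")
```

Running this prints 9 for the 26th place counted from the right and 3 for the 26th place counted from the left, matching the modular reasoning above.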
Posted by NK
on 2004-02-10 10:32:01 |
Planetary protection is a series of measures designed to protect the Earth (see back
contamination) and other bodies in the solar system (see forward
contamination) from cross contamination. The need for planetary protection
was first considered in 1958 (see Committee
on Contamination by Extraterrestrial Exploration), was expressed in
the Outer Space Treaty of 1967, and has
been reconsidered and refined in the light of more recent developments.
For many years, the Space Science Board has served as NASA's primary adviser on planetary protection and quarantine.
Image: The Apollo 11 astronauts leave the recovery helicopter and walk across the open deck before entering their quarantine chamber.
Although NASA has long had procedures in place for handling and treating
spacecraft that will land on other planetary bodies and, in some cases,
return with samples to Earth, these procedures have not always been followed.
The classic example of planetary protection measures not being followed
or implemented well was Apollo 11 –
the first manned vessel to return from the surface of the Moon.
The remote possibility existed at the time that there might be organisms
in the lunar soil which could be transferred to Earth with possibly catastrophic
consequences. Although quarantine procedures had been put in place for the
three returning astronauts, the Apollo capsule, and the lunar samples, these
were either not adequate or were breached in several instances. First, the capsule splashed down in the sea, affording the opportunity for any extremophilic organisms on the outside of the spacecraft to enter the ocean. Second, the crane on the recovery ship which was supposed to winch the capsule and its crew aboard was not strong enough, so the astronauts had to leave the capsule at sea, fly to the ship by helicopter, and walk across its open deck to the quarantine chamber. Third, the collected lunar dust was much rougher than expected and had compromised the seals on the containers the crew had brought back, so the dust was scattered in the lunar module and capsule, making that open walk even riskier.
Landers and sample return missions
Great care was taken to sterilize the Viking landers that touched down on Mars in 1976.
Each spacecraft was baked for four hours before launch to kill any terrestrial microbes adhering to it. However, more recent landers and rovers, including Pathfinder, Spirit, and Opportunity, have not been so rigorously
cleansed. The discovery of the harsh conditions on Mars – its dryness,
coldness, thin carbon dioxide atmosphere, and exposure to lethal ultraviolet
– led to the requirement for absolute sterility being dropped. Also,
the delicate coatings on the high resolution cameras of these later probes
would have been destroyed by the heat of sterilization. The oven treatment
will only be applied in future to spacecraft components, such as the Phoenix
spacecraft's digging arm, that dig into the Martian soil. Questions still
remain unanswered about the ability of terrestrial extremophiles to survive
current planetary protection protocols. It has been shown, in the light
of what happened with Surveyor 3, that even ordinary Earth bacteria can survive for long periods under hostile conditions.
A re-analysis of a 50,000 year old Neanderthal skull shows that, in addition to enduring multiple injuries and debilitations, this male individual was also profoundly deaf. Yet he lived well into his 40s, which is quite old by Paleolithic standards. It’s an achievement that could have only been possible with the help of others, according to new research.
When the remains of this older Neanderthal were discovered at Shanidar Cave in Iraqi Kurdistan in 1957, his many physical injuries and disabilities were immediately apparent. Analysis of his skull showed that he suffered a crushing blow to the head near his eye socket when he was young, likely causing some visual impairment. His right hand and forearm were missing, the result of an amputation. He likely walked with a pronounced limp, and he suffered from diffuse idiopathic skeletal hyperostosis (DISH), which is associated with muscular pain and reduced mobility along the spine.
But a new analysis of this specimen, known as Shanidar 1, shows he had another major disability—one not noticed during earlier examinations. New research published in PLOS One reveals that the bony growths found in this Neanderthal’s ear canals would have resulted in serious hearing loss. So this Paleolithic-era hunter-gatherer, according to the updated analysis conducted by anthropologists Erik Trinkaus from Washington University in St. Louis and Sébastien Villotte of the French National Centre for Scientific Research, was profoundly deaf.
“It would have been essentially impossible for Shanidar 1 to maintain a sufficiently clear canal for adequate sound transmission,” noted the authors in the study. “He would therefore have been effectively deaf in his right ear, and he likely had at least partial CHL [conductive hearing loss] in the left ear.” Trinkaus and Villotte say it was “a serious sensory deprivation for a Pleistocene hunter-gatherer.”
Yet despite his deafness and his other physical setbacks, Shanidar 1 died between 40 and 50 years of age (based on dental analysis). By Paleolithic standards, he was an old man. The only way he could have lived to such a ripe old age is by receiving considerable help from others. “More than his loss of a forearm, bad limp and other injuries, his deafness would have made him easy prey for the ubiquitous carnivores in his environment and dependent on other members of his social group for survival,” said Trinkaus in a statement.
His inability to hear would have resulted in reduced communication and diminished social activities requiring coordination, thus making him less effective as a hunter and a forager. It would have been difficult for Shanidar 1 to learn how to fashion tools and use them, and as noted, he would have been more vulnerable to medium and large carnivores (e.g. wolves, large cats, bears).
“[A]n individual with advanced CHL would have been highly vulnerable alone in a Pleistocene foraging context,” write the researchers in their study. “For Shanidar 1, the CHL was associated with loss of function in other aspects of his biology, all of which would have compounded his need for support, even if some of the individual deficiencies by themselves would not have required such assistance.”
Trinkaus and Villotte say it's not surprising that his fellow Neanderthals were able and willing to provide this level of social support. Notably, these extinct humans buried their dead, a funerary act that anthropologists say is indicative of social cohesion, social roles, and mutual support. What's more, Neanderthals used pigments and feathers to modify their appearance, which the authors say is "a reflection of social identity manipulation and social cohesion." To say Neanderthals cared for the physically impaired is therefore not a stretch.
Importantly, other examples of prehistoric social support exist in the scientific literature. A study from 2014 revealed a Neanderthal from Spain who suffered from similar hearing loss, and the remains of a five-year-old archaic human with a severe brain deformity who wasn't rejected at birth.
Our conceptions of Neanderthals, as this new study shows, have now moved well beyond the outdated notion that they were brutish proto-humans who cowered in caves. As we're learning, the behavioral differences between Neanderthals and modern humans are, in the words of the researchers, "modest" at best.
Zimbabwe - Poverty headcount ratio
Poverty headcount ratio at national poverty line (% of population)
Definition: National poverty rate is the percentage of the population living below the national poverty line. National estimates are based on population-weighted subgroup estimates from household surveys.
Source: World Bank, Global Poverty Working Group. Data are based on World Bank's country poverty assessments and country Poverty Reduction Strategies.
Topic: Poverty Indicators
Sub-Topic: Poverty rates |
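To illustrate what a population-weighted national estimate means in practice, here is a minimal sketch in Python with pandas; the subgroup figures are invented placeholders for demonstration, not actual Zimbabwe survey data:

```python
# Minimal sketch: aggregate subgroup poverty rates into a national headcount ratio
# by weighting each subgroup's rate by its share of the total population.
# All numbers below are illustrative placeholders, not real survey results.
import pandas as pd

survey = pd.DataFrame({
    "subgroup":     ["urban", "rural"],
    "population":   [4_000_000, 9_000_000],  # hypothetical subgroup sizes
    "poverty_rate": [0.25, 0.60],             # share of each subgroup below the poverty line
})

weights = survey["population"] / survey["population"].sum()
national_rate = (weights * survey["poverty_rate"]).sum()
print(f"National poverty headcount ratio: {national_rate:.1%}")
```

The national figure is simply the sum of each subgroup's poverty rate multiplied by its population share.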
posted on Jan, 20 2010 @ 10:01 PM
Radiometric dating supposedly proves that the Earth is billions of years old. The theory behind radiometric dating sounds very convincing. But does it
actually work in practice? When someone tells us that a certain rock is a billion years old, how can we confirm this? No one was there to see it.
A recent letter-writer says that radiometric dating is proven because many different methods all give the same results. This would be interesting if
true, but it simply isn’t. Many different methods have been proposed to estimate the age of the earth, and they give results ranging from billions
of years (e.g. radiometric methods), to a million or so (e.g. influx of salts into the oceans), to thousands (e.g. decay of the Earth's magnetic field).
One researcher, Dr. David Plaisted, searched the technical journals for studies that compared the results of different dating methods on specific
samples. He found only one such study, comparing Potassium-Argon to Rubidium-Strontium, and, he writes, “the results were not good”. He cautiously
concludes, “[A]n assumption of agreement appears to be without support so far.”
There are many examples of disagreement.
Potassium-Argon tests on a lava flow from Rangitoto volcano in New Zealand dated it at 400,000 years. Buried in the lava flow are tree trunks, which
were carbon-14 dated to 225 years.
Five samples from a lava flow in Washington state were dated by Potassium-Argon, giving ages ranging from 340,000 to 2.8 million years. That’s quite
a range! Another dating method gave an even younger age: Eyewitnesses watched that lava flow being formed when Mt. St. Helens erupted in 1980.
Lava flows from Hualalai Volcano in Hawaii were dated at 140 million to 2.96 billion years. In fact Hualalai erupted in 1801.
In some cases the evolutionists offer explanations of what went wrong. They say the lava from Hualalai was under water for many years, which caused
certain chemical and physical effects that contaminated the sample. Maybe so. But are they then telling us that all the other sites that have been
dated to such long ages were never, ever, in all those supposed billions of years, ever under water or otherwise contaminated?
If, in the cases where you CAN corroborate the evidence, someone is repeatedly proven to be wrong, perhaps you should be cautious about taking their word for it in
cases where there is no way to test their claims. |