A piece of language is said to be coherent (therefore discourse) if it has a discernible, unified meaning.
A piece of discourse is said to be cohesive if its components (i.e., sentences/phrases/words) are bound together through linguistic and non-linguistic features to form a unified whole.
The linguistic features used to link one word/phrase/sentence to another are called formal links.
Some common formal links are:
The most obvious example of a formal link is third-person pronouns.
In a piece of language, cohesion is achieved by using these referring expressions that direct the hearer/reader to look elsewhere for their interpretation.
Reference expressions can be:
Endophoric references are linguistic references to something within the same text.
There are two types of endophoric references:
Anaphoric references refer back to another unit that was mentioned before.
Aiminaibee asked Thakuru to buy her a diamond ring.
Cataphoric references refer ahead to another unit that is mentioned later.
Waving at him happily, Thakuru saw Aiminaibee come out.
Exophoric references refer to entities outside the text, in the context of the utterance or speaker.
That is where Aiminaibee first saw the Foolhudhiguhandi.
(said while pointing to the place)
Repetition of a key term or phrase in the text helps to focus your ideas and to keep your reader/listener on track.
The problem with modern art is that it is not easily understood by most people. Modern art is deliberately abstract, and that means it often leaves the viewer wondering what she is looking at.
Lexical chains are also a form of repetition but without repeating the exact same phrase/word.
i.e., use different words that are lexically related (e.g., hypernyms)
Myths are an important part of a country’s heritage. Such traditional narratives are, in short, a set of beliefs that are a very real force in the lives of the people who tell them.
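The idea of a lexical chain can be sketched in code. Below is a minimal Python illustration; the stopword list and the tiny "chain" table mapping related words to a shared label are invented for the example, not taken from a real lexical resource.

```python
# A minimal illustration of how repetition and simple lexical chains bind
# sentences together. The synonym table is an illustrative assumption,
# not a real lexical resource.

STOPWORDS = {"the", "a", "an", "of", "in", "is", "are", "that", "it", "and"}
# Map lexically related words onto a shared "chain" label (e.g. hypernyms).
CHAINS = {"myths": "narrative", "narratives": "narrative", "beliefs": "narrative"}

def content_words(sentence: str) -> set:
    words = [w.strip(".,").lower() for w in sentence.split()]
    return {CHAINS.get(w, w) for w in words if w and w not in STOPWORDS}

s1 = "Myths are an important part of a country's heritage."
s2 = "Such traditional narratives are a set of beliefs."

# Overlapping items indicate a lexical chain linking the two sentences.
print(content_words(s1) & content_words(s2))   # {'narrative'}
```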
Cohesion is often achieved by substituting special words for ones that have already been used.
The most common substitutes used in English are
Each of these is used to substitute for a different type of unit:
‘one’ is used to substitute for nouns / noun phrases:
I left the school and went to the one in Thuraakunu.
I left Hithadhoo secondary school and went to the Thuraakunu one.
I left the Hithadhoo secondary school with many students and went to the one with few students.
I left the Hithadhoo secondary school with few students and went to the Thuraakunu one with few students.
Verbs are substituted with ‘do’.
Since 'do' is a verb (and an irregular one at that), it also has the forms does, did, done and doing.
I have not finished yet; when I do, you can start.
I like coffee and so does he.
The word 'so' is often used to substitute for a whole clause:
Thakuru: “We’ll be watching you close, smart guy.”
Haadi: “I hope so. You might learn something.”
Thakuru: “I think we have got rid of him for good.”
Aiminaibee: “You really think so?”
In certain contexts it is possible to leave out a word/phrase rather than repeat it.
This device is called ellipsis.
A child learns to speak almost ‘by chance’. He imitates his parents without knowing why < >.
Students continue to wear faded jeans to class even after being told not to < >.
Connectives are words/phrases used to indicate a specific connection between different parts of a text.
Various kinds of words and phrases can function as connectives:
It posed several problems for me, but it was all worthwhile.
It posed several problems for me; nevertheless, it was all worthwhile.
In spite of the severe problems it posed for me, it was all worthwhile.
There are 4 basic types of connectives:
Addition connectives (AC)
adds on to the idea presented before (also, and)
Opposition connectives (OC)
contrasts with the idea presented before (but, nevertheless )
Cause connectives (CC)
shows a causal connection with the ideas presented before (therefore, since)
Time connectives (TC)
shows a sequence or simultaneous actions (first, finally) |
Vitamin A was one of the earliest vitamins to be discovered – hence its top rank in the alphabetical vitamin nomenclature. Vitamin A is a family of fat-soluble compounds that play an important role in vision, bone growth, reproduction, and immune system regulation. Most people associate vitamin A with carrots, and for good reason: the common orange veggie has high amounts of beta-carotene, which is actually a vitamin A precursor and also the reason carrots got their name. But vitamin A is actually a group of chemicals that are similar in structure and include retinol (the most biologically active form of vitamin A), retinal, and retinoic acid.
β-carotene is slightly different in that it is cleaved in the intestinal mucosa by an enzyme to form retinol. Other carotenoids, such as lycopene and lutein, are similar to vitamin A but are not actually vitamin A in the truest sense. One distinction is that excessive amounts of vitamin A from over-supplementation can cause toxicity (although deficiency is much more common). On the other hand, β-carotene does not cause vitamin A toxicity because a regulatory mechanism limits vitamin A production from beta-carotene when high levels are ingested.
A large number of physiological systems may be affected by vitamin A deficiency which is most often associated with strict dietary restrictions and excess alcohol intake. Patients with Celiac disease, Crohn’s disease and pancreatic disorders are particularly susceptible due to malabsorption. Vitamin A is also essential for the developing skeletal system and deficiency can result in growth retardation or abnormal bone formation.
The functions of vitamin A are very diverse:
- Eyesight: Vitamin A forms retinal, which combines with a protein (rhodopsin) to create the light-absorbing cells in the eye. This explains why a common clinical manifestation of deficiency is night blindness and poor vision.
- Skin: In addition to promoting healthy skin function and integrity, vitamin A regulates the growth of epithelial surfaces in the eyes and respiratory, intestinal, and urinary tracts. Deficiency impairs epithelial regeneration, which can manifest as skin hyperkeratization, infertility, or increased susceptibility to respiratory infections.
- Anemia: Vitamin A helps transfer iron to red blood cells for incorporation into hemoglobin; thus, a vitamin A deficiency will exacerbate an iron deficiency.
- Weight management: Vitamin A reduces the size of fat cells, regulates the genetic expression of leptin (a hormone that suppresses appetite), and enhances the expression of genes that reduce a person’s tendency to store food as fat.
- Cancer prevention: Vitamin A deficiency impairs the body’s ability to launch cell-mediated immune responses to cancer cells. Vitamin A inhibits squamous metaplasia (an abnormal, potentially precancerous change in epithelial cells) and inhibits breast cancer cell growth.
- Fertility: Vitamin A plays a key role in the synthesis of sperm.
- Autism: Vitamin A is part of the retinoid receptor protein (G-alpha protein), which is critical for language processing, attention, and sensory perception. Some autistics have a defect in this protein that vitamin A supplementation can modulate.
- Sleep: Vitamin A deficiency alters brain waves in non-REM sleep, causing sleep to be less restorative.
Vitamin A also interacts with other micronutrients. For example, zinc is required to transport vitamin A into tissues, so a zinc deficiency will limit retinol-binding protein (RBP) synthesis and thus limit the body’s ability to use vitamin A stores in the liver. Oleic acid, a fatty acid found in olive oil, facilitates the absorption of vitamin A in the gut.
Find out if you have a vitamin A deficiency, and take steps to correct it, by ordering a micronutrient test today. |
Why is New Hampshire Called the Granite State?
New Hampshire is a small state in the New England region of the northeastern United States; it is popularly known as the Granite State. Why the Granite State? The real significance of this name emerges when we examine the land, people, and history of New Hampshire.
English colonists arrived in the early 1620s and built some of the earliest European settlements in the region; the name New Hampshire was derived from the county of Hampshire in southern England. New Hampshire was one of the thirteen colonies under British rule. The American War of Independence broke out in 1775 when the thirteen British colonies sought freedom from Great Britain.
New Hampshire’s state motto, ‘Live Free or Die,’ is a tribute to General John Stark, who led American forces during the American Revolutionary War.
New Hampshire’s traditional rock is granite; as early as the 1800s New Hampshire was known for its abundance of granite, with many small quarries scattered across the south; its largest quarry is at Rattlesnake Mountain in Concord. The Library of Congress in Washington D.C. and Boston’s Quincy Market were built from New Hampshire granite. New Hampshire is identified by its rocky soil; its bedrock is rich in granite and its landscape filled with granite outcroppings. The White Mountains in the northern region cover a quarter of New Hampshire and are also rich in granite. A famous natural landmark in the United States, the ‘Old Man of the Mountain’ (also called ‘the Great Stone Face’), can be found in the White Mountains; this is an amazing rock formation of a man’s profile made of five granite ledges.
Granite is a hard and tough rock, and the nickname ‘Granite State’ characterizes the people of New Hampshire, with their history of strength and resilience during the American Revolutionary War. Granite, which is so often identified with New Hampshire, shaped the land and the early industry of the United States. |
Climate Variability: Why Can’t We Talk About It?
The Earth’s climate comprises one of the most complex natural systems ever known. After decades of study, we continue to learn new and important information. Unfortunately, climate change has become such a partisan issue that recognizing any reason for this change besides a human influence is vilified as anti-science. Raising reasonable, fact-based inquiries should not be met with hostility. Such questions should be welcomed by scientists whose responsibility is to consider all sides of the scientific debate so that their research can influence policy decisions.
For instance, legitimate questions still remain about the Earth’s natural climate processes and cycles, also known as “climate variability.” There are a number of factors to consider that have an effect on Earth’s climate. Some of these causes of climate change include:
- solar radiation;
- atmospheric optical properties;
- cloud cover;
- vertical and horizontal wind;
- surface roughness;
- ocean currents;
- soil moisture;
- precipitable water vapor;
- surface vapor;
- ice area;
- and albedo.
Much more needs to be known about the impact of these natural phenomena.
Earth-based variations, such as multi-year weather patterns like El Niño, affect climate. These natural cycles between the ocean and our atmosphere can alter weather patterns around the world for years. El Niño events are often associated with global warm temperature trends, as seen during and after the 1997 El Niño and the 2015 El Niño. The warm temperatures linked with these cycles are often used as evidence of climate change, yet legitimate questions remain about the influence of long-term weather patterns like El Niño on climate change.
Likewise, external variations, such as the variable amount of radiation from the sun, also impact Earth’s climate. Changes in the solar intensity of the sun over long periods of time probably cause ice ages. Yet these natural and non-human variations are routinely downplayed by the media and kept from the public. This is not to say that greenhouse gases from human activity do not play a role; they do. But all of these components have an influence and deserve scrutiny from the scientific community.
Also, we should recognize limitations to what we know about natural variability. One such limitation is that we lack the scientific knowledge needed to determine how the Earth responds to different amounts of carbon dioxide in the atmosphere. For example, when science could not explain the global warming hiatus from 1998 to 2013, some scientists speculated that the oceans were absorbing more heat than previously believed.
Likewise, much debate remains when predicting extreme weather events. Many alarmists claim that the frequency of extreme weather events will escalate in the future due to climate change. However, these claims are not confirmed by weather data. A majority of extreme weather events such as hurricanes, tornadoes, droughts, and floods have not increased, which proves that some scientists have a hard time accepting the facts.
It is clear there is still more to learn about Earth’s climate. Instead of only focusing on the effects of human actions, we would be better served by continuing to research the full scope of issues impacting Earth’s climate. Evaluating and analyzing natural cycles will better inform how we respond and what actions might be taken. Scientists should not limit their understanding by only considering causes of climate change that fit their slanted worldview. To begin to understand the scope of climate science, scientists must investigate all reasonable, science-based approaches. This is the only way policymakers will have the information they need to make good decisions on climate change.
Congressman Lamar Smith represents the 21st district of Texas in the House of Representatives and is the Chairman of the House Science, Space, and Technology Committee. |
The Evolution Deceit
On 1 January 1924, Britain’s Anna Mitchell-Hedges discovered a crystal skull beneath an altar in a pyramid temple in the lost Mayan city of Lubaantun (meaning “the city of the fallen stones” in the Mayan language).
The skull is the same size as a genuine human skull and consists entirely of transparent quartz.
Since the crystal contained no carbon, the skull was subjected to a range of tests by scientists from the world-renowned company Hewlett-Packard. The results stunned the scientists. One of them described these unbelievable results in the words, “This skull should never have existed!”
The results that revealed that the crystal skull could only have been made using advanced technology are as follows:
1. A team of scientists revealed that the skull had been made out of a form of quartz known as piezoelectric silicon dioxide, which is used in the current telecommunications sector and has a higher memory capacity than other materials. The latest micro-processors are made from the same substance. Even more striking, however, is the fact that this form of crystal was only discovered in the 19th century.
2. This crystal, piezoelectric silicon dioxide, is both negatively and positively polarized. This means that, as with batteries, it is able to produce its own electricity.
3. Scientists used a series of polarized test lights to establish that the cranium and the lower jaw of the skull, in the form of two separate components, were made from the same block of crystal rock. Considering that quartz crystal is softer and more brittle than diamond, the fact the skull was carved from a single piece of crystal, which is almost impossible, amazed scientists.
4. Under the microscope, scientists found no trace suggesting that modern automatic equipment or mechanical devices had been used on the skull. Scientists concluded that it was impossible to produce such a delicate and fine component as the lower jaw from a single piece of crystal, even using modern diamond-tipped electrical equipment, without shattering it.
5. Scientists calculated that the crystal skull could have been made without any equipment, abrading it with a piece of diamond, but that this would have needed several generations over a period of some 300 years.
6. Present-day crystals are carved along their axes, because crystals have a molecular symmetry. In order not to break the crystal, it has to be cut in line with that natural structure, its molecular symmetry, in other words. Even if lasers or high-tech cutting techniques are used, crystals will still shatter if not cut along their natural axes. But even though this crystal skull was cut in a manner totally independent of its axis, no fracturing or cracking arose, in complete violation of the laws of physics.
7. Scientists were also astonished by the skull’s optical features. As a result of the Hewlett-Packard tests, scientists realized that the skull had interesting optical properties. Light applied from beneath the skull should normally be refracted in all directions, but in this skull it formed a channel focusing on the eye sockets and emerging from these.
8. Another startling optical feature is the prism located at the lower rear section of the skull. All light rays striking the eye sockets are reflected from this prism. Therefore, when you look directly into the sockets you can see the whole room inside the eyes of the crystal skull.
Structures that present-day technology still struggles to account for, such as Stonehenge in England, constructed 8,000 years ago, the Egyptian pyramids, the T-shaped animal motifs carved 11,000 years ago at Gobekli Tepe in Urfa, Turkey, and the 10-ton Sun Gate carved from a single rock, prove that people in ancient times were not primitive and lacking any comprehension of art, science and technology, as is sometimes claimed. Evolutionists have attempted to apply the same perverse, evolutionary logic they sought to apply to such branches of science as biology, paleontology, and zoology to archeology as well. But as with this crystal skull, artifacts left behind by people who lived in the past scientifically refute the evolutionist claim that ape-like beings gradually developed into today’s man. |
SCIENCE BEHIND ZIP LINING
One question that every student has wondered at some point in their schooling is, “When am I ever going to actually use this stuff?” The Science Behind Zip Lining website serves as one answer to that question. Its purpose is to demonstrate how the Science, Technology, Engineering, and Mathematics (STEM) principles that you have learned apply to real-world situations.
Design of a Zip Line
Every zip line consists of a trolley attached to a steel cable that is typically covered with a vinyl coating. For safety, the rider should be wearing a helmet, gloves, and a harness which is used to keep the rider attached to the trolley. Gravity propels the rider from start to finish.
The first step when designing a zip line is to identify the parameters involved. Some of the questions that need to be answered are:
- How long of a distance will the zip line span?
- How high should the start and end points be?
- What is the topography of the ground relative to the zip line cable?
- What will the start and end points be attached to and how can they be secured?
- How much tension should there be to give the zip line an appropriate slope?
- What will the weight limitations be?
* WARNING: Always consult with an industry professional before attempting to design or build your own zip line.
The zip line illustrated below covers a horizontal distance of 255 meters (m) and has a vertical drop of 16 m to the lowest point. The start point is 13 m above the ground and the end point is 11 m above the ground. Both points have been securely anchored using trees. Notice that the topography of the ground has a slightly downward grade from the start point to the end point. There is an elevation drop of 12 m. This helps achieve a sufficient slope, approximately 4 degrees in this example, without causing the cable to be extremely high off the ground at the start point. For safety reasons, a weight range of 70 to 250 pounds is acceptable for a zip line this size. |
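As a rough check of the figures above, the slope can be estimated with a little trigonometry. The sketch below uses the distances from the example; treating the ride as a straight line from the start point down to the lowest point is a simplifying assumption, since a real cable sags and curves under the rider's weight.

```python
import math

# Quick check of the slope quoted in the example above.
horizontal_distance_m = 255.0   # horizontal span of the zip line
vertical_drop_m = 16.0          # drop from the start point to the lowest point

slope_radians = math.atan(vertical_drop_m / horizontal_distance_m)
slope_degrees = math.degrees(slope_radians)

print(round(slope_degrees, 1))  # ~3.6 degrees, i.e. "approximately 4 degrees"
```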
The bloodiest four years in American history began when Confederate shore batteries under General P.G.T. Beauregard opened fire on Union-held Fort Sumter in South Carolina’s Charleston Bay. During the next 34 hours, 50 Confederate guns and mortars launched more than 4,000 rounds at the poorly supplied fort. On April 13, U.S. Major Robert Anderson surrendered the fort. Two days later, U.S. President Abraham Lincoln issued a proclamation calling for 75,000 volunteer soldiers to quell the Southern “insurrection.”
As early as 1858, the ongoing conflict between North and South over the issue of slavery had led Southern leadership to discuss a unified separation from the United States. By 1860, the majority of the slave states were publicly threatening secession if the Republicans, the anti-slavery party, won the presidency. Following Republican Abraham Lincoln’s victory over the divided Democratic Party in November 1860, South Carolina immediately initiated secession proceedings. On December 20, the South Carolina legislature passed the “Ordinance of Secession,” which declared that “the Union now subsisting between South Carolina and other states, under the name of the United States of America, is hereby dissolved.” After the declaration, South Carolina set about seizing forts, arsenals, and other strategic locations within the state. Within six weeks, five more Southern states–Mississippi, Florida, Alabama, Georgia, and Louisiana–had followed South Carolina’s lead.
In February 1861, delegates from those states convened to establish a unified government. Jefferson Davis of Mississippi was subsequently elected the first president of the Confederate States of America. When Abraham Lincoln was inaugurated on March 4, 1861, a total of seven states (Texas had joined the pack) had seceded from the Union, and federal troops held only Fort Sumter in South Carolina, Fort Pickens off the Florida coast, and a handful of minor outposts in the South. Four years after the Confederate attack on Fort Sumter, the Confederacy was defeated at the total cost of 620,000 Union and Confederate soldiers dead. |
This material may be copied only for noncommercial classroom teaching purposes, and only if this source is clearly cited.
FINDING THE AGES OF ROCKS & FOSSILS
by Larry Flammer
Students are taken through a combination of some background information and interactive experiences, and checked frequently by questions to confirm understanding. The narrative includes concepts of isotopes, radioactive decay, half-life, mineral formation, age analyses, Fair Test questions, and isochrons. The lesson can be used as a one-day team activity, individually in class, or as a self-teaching homework assignment. It is intended to either stand by itself, or to serve as a useful introduction to the very effective online interactive Virtual Age Dating Tutorial. This lesson would be helpful in Biology, Earth Science, Physical Science, Physics, Chemistry, or Geology classes.
1. Several independent lines of evidence confirm that the Earth is billions of years old.
2. A Fair Test analysis confirms that the Earth is billions of years old.
3. Half-life is a fundamental property of radioactive material, enabling accurate age-dating (see the sketch after this list).
4. Methods exist for age-dating which are internally self-checking.
5. Efforts to overturn a massive body of work must be equally compelling.
6. Anything that is presented as scientific yet clearly ignores the rules of science is pseudoscience.
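For reference, the arithmetic behind half-life dating can be sketched in a few lines. The isotope, half-life, and measured fraction below are illustrative values only, not figures from the lesson's activity sheets.

```python
import math

def age_from_fraction_remaining(fraction_remaining: float, half_life_years: float) -> float:
    """Age implied by the fraction of the original parent isotope still present,
    assuming a closed system since the mineral formed."""
    # N/N0 = (1/2)**(t / T_half)  =>  t = T_half * log2(N0 / N)
    return half_life_years * math.log2(1.0 / fraction_remaining)

# Example: potassium-40 has a half-life of about 1.25 billion years.
# A mineral retaining 1/8 of its original K-40 is about three half-lives old.
print(age_from_fraction_remaining(0.125, 1.25e9))  # ~3.75e9 years
```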
Some of those preferring the "young Earth" idea have attempted to seek "proof" of their position by presenting "scientific studies" that seem to undercut the established scientific conclusions. In addition, this position is widely publicized, falsely suggesting that there is widespread uncertainty in the science community about the validity of those ancient ages. As a result, the general public, even those not particularly committed to the Young Earth position, are often not aware of the clear status of geological ages, and the diverse body of evidence pointing to such deep time.
It turns out, upon closer scrutiny, that the "scientific evidence" for a young Earth is nothing more than selective reporting of a handful of samples which were either poorly analyzed, or notable exceptions to the huge body of data pointing to a very old Earth. Such exceptions can be variously explained in ways that do not require a young Earth solution.
This lesson should effectively and accurately inform students about the high level of confidence we have in the geological ages of an old Earth. At the same time, it should reveal an example of pseudoscience which should be part of any effort to improve science literacy and critical thinking.
STRATEGY AND PREPARATION:
5. Run off enough copies of the Deep Time Activity #15 Cutouts sheet so that you (or a lab assistant, or students in class) can cut apart the right-hand column of 17 strips, shuffle them, and place them along with the intact left-hand column in an envelope or small zip-lock bag, so there will be one set (bag) for each team of 4 (or team of 2, if preferred) in your largest class. Use the intact sheet to make an overhead showing these as "All Known Original Radioisotopes With Half-Lives of 1 Million Years or More," properly sequenced.
8. When each team has completed activity #15, and answered questions 16-19, collect that team's envelope.
12. Try to get class consensus that the studies of Deep Time are very reliable and well-established, based on an overwhelming amount of quality science and very compelling work. The few discrepancies found can easily be attributed to factors which do not destroy the general picture, and in any case are generally very minor. Also point out that any statement to the contrary would have to explain away this huge body of evidence. You might also point out (especially if you can reference some of the alleged "science" which attempts to "disprove" the ancient Earth consensus), how either poor science was used (e.g. selecting favored data, and ignoring all the other data), or they are examples of pseudoscience.
EPILOG AND COMMENTS:
2. Encourage students (especially those who find it difficult to let go of the young Earth idea) to explore the issue further. Suggest they carefully read the material suggested in the references (some of it online), and perhaps prepare a report of their careful comparison of the ideas. Be sure they include a fair balance of "Young Earth" "evidence" and the ancient Earth evidence, what each side criticizes about the other, and what the counter-arguments are. Encourage the use of Fair Test questions wherever possible, along with the answers to those questions, and how they affect the conclusions.
BETA TESTING: Since this lesson has not been extensively classroom tested, if you like the idea, try it as a "Beta Tester", and please get back to us. Let us know how it goes, any problems, questions, suggestions for improving it, etc. We will share your experiences with other teachers. Contact us through the webmaster.
EXTENSIONS & VARIATIONS:
2. In addition, be sure to install a geological timeline in your classroom, something students can view throughout the year, and to which you can often point when talking about or showing something in prehistoric time. Some excellent ideas for this can be found in the "Time Machine" lesson.
3. To help students gain a more realistic personal sense of deep time (especially in middle school life science or earth science), try our Patterns in Time lesson. In that lesson, students also come to realize that the different vertebrate classes emerged separately over several 100s of millions of years, and did not exist prior to their emergence (as revealed in the fossil record). That lesson also demonstrates the accumulation of modified traits on top of the accumulated traits found in the previously emerged group, showing gradual, additive and mosaic changes over time. All of this provides a strong implication that each group descended from the earlier antecedents through gradual change over time.
4. We now have (2004) a nice lesson which provides a simulated rock-dating experience. Try it: "Date-a-Rock!"
Some of the ideas in this lesson may have been adapted from earlier, unacknowledged sources without our knowledge. If the reader believes this to be the case, please let us know, and appropriate corrections will be made. Thanks.
|Lesson created by: Larry Flammer, September 2002. Based mainly on material presented in Miller's Finding Darwin's God, chapter 3. This lesson was intended primarily to serve as an introduction to the interactive online Virtual Age Dating Tutorial, but can also be used as a stand-alone lesson.| |
Albert Einstein was a renowned physicist and remains one of the most famous scientists in the world to this day. His findings, especially his General Theory of Relativity, completely re-shaped the way the world views the universe.
Einstein was born on March 14, 1879 in Ulm, Germany. Except for mathematics, he hated school and dropped out at the age of 15. Einstein left his German citizenship behind and moved to Switzerland to avoid the military draft. Here, he attempted formal schooling again and studied mathematics and physics at Zurich Polytechnic, where he graduated in 1900. Einstein applied to several universities for advanced study, but wasn't accepted anywhere.
In 1905, Einstein submitted five influential papers to the German physics journal Annalen der Physik (Annals of Physics). The first paper explained the “photoelectric effect” and eventually earned him the Nobel Prize in 1921. The second paper, for which Einstein was awarded a doctorate from the University of Zurich, addressed how to measure molecules. His third paper explained Brownian motion, or the movement of tiny particles suspended in liquid. Einstein’s fourth paper, which covered his Special Theory of Relativity, stated that time and space are relative to the observer. His fifth paper posited mass as a form of energy.
Einstein wrote four of these papers in under one year while working a low-paying job as a patent office clerk. Two years later, he quit his job when he was offered a professorship at Zurich Polytechnic. By 1913, Einstein was employed at Berlin University, where he conducted his own research.
It was Einstein's fifth paper that brought him widespread fame. In this paper, he stated that mass is energy in a different form, which he illustrated with the equation E = mc². This equation, energy equals mass times the speed of light squared, revolutionized global thinking about how radiation works.
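To give a sense of scale, here is a quick numerical illustration of the equation; the one-gram figure is a textbook-style example, not a number from the article.

```python
# A quick numerical illustration of E = mc^2.
c = 299_792_458           # speed of light in m/s
mass_kg = 0.001           # one gram of mass

energy_joules = mass_kg * c ** 2
print(f"{energy_joules:.2e} J")   # about 9 x 10^13 joules
```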
In 1915, 10 years after he published his paper on the Special Theory of Relativity, Einstein published his General Theory of Relativity. According to this theory, an object with significant mass causes distortion in space-time. The larger the object, the greater the distortion. For example, the reason planets orbit the sun isn’t because they are affected by its force; rather, it's because the sun’s mass curves space-time.
On May 29, 1919 an astronomer named Sir Arthur Eddington observed a solar eclipse that showed the stars and sun positioned just as Einstein’s Theory of Relativity had predicted. And in 1929, Einstein's General Theory of Relativity helped scientist Edwin Hubble prove that the universe was expanding.
Despite his many brilliant theories, Einstein's Unified Field Theory, which attempted to explain all the laws of the universe in one framework, never proved workable. In it, he left out an important assumption: the “Uncertainty Principle.” Einstein never accepted the Uncertainty Principle, which states that math cannot predict exactly where a particle is but can make a close prediction. The first version of his Unified Field Theory came out in 1929 and received mostly negative attention; many scientists thought Einstein’s refusal to accept quantum theory’s Uncertainty Principle hindered his credibility.
Following this criticism, Einstein created a second version of his Unified Field Theory in 1950, but it was still ignored by many theoretical physicists. Five years after he published the second version of this theory, in April of 1955 at the age of 76, he passed away.
Even though many—if not most—people don’t completely understand Einstein’s theories, he is still considered one of the most brilliant thinkers in history.
[Source: The Great Scientist] |
The Goldilocks Zone refers to the habitable zone around a star where the temperature is just right - not too hot and not too cold - for liquid water to exist on a planet.
Liquid water is essential for life as we know it. Where we find liquid water on Earth we also find life.
"The only life we know about is our carbon-based life, and water plays a crucial part in our own existence, and so it's only natural that we direct our attention to planets in locations capable of having liquid water," Professor John Webb of the University of New South Wales said.
"There's plenty of life on Earth and there's plenty of water, but we've yet to find life on other planets even in our own solar system."
Looking for planets in the Goldilocks Zone is a way for scientists to focus their search for Earth-like planets that could contain life.
Basically, the assumption is that if it's possible there may be liquid water on the planet, then it's also possible that the planet may be habitable.
Goldilocks Zones in other star systems
"The location of a Goldilocks Zone around another star depends on the type of star," Professor Webb said.
Bigger hotter stars have their Goldilocks Zones further out, while smaller cooler stars such as M-type red dwarf stars have habitable zones much closer in.
Red dwarfs are the most common type of star in the Milky Way galaxy, and have very long life expectancies.
"This means life should have lots of time to evolve and develop around such as star," he said.
Observations by the European Southern Observatory's High Accuracy Radial velocity Planet Searcher (HARPS) indicate that about 40 per cent of red dwarfs have super-Earth class planets orbiting in their habitable zone.
Alternatively, NASA's planet hunting Kepler space telescope searches for planets orbiting in the habitable zones of Sun-like stars by looking for planets with an average 365-day orbit.
More than just temperature
Just because a planet or moon is in the Goldilocks Zone of a star, doesn't mean it's going to have life or even liquid water.
After all, Earth isn't the only planet in the Sun's Goldilocks Zone - Venus and Mars are also in this habitable zone, but aren't currently habitable.
"Venus is Earth's sister planet, both are about the same size and in the same region of the solar system, and Venus once also had water," Professor Webb said.
"However, Venus now has a runaway greenhouse effect going on, with a surface temperature of over 460 degrees Celsius, which has boiled away all its liquid water."
At the other end of the Sun's Goldilocks Zone is Mars which also once had liquid water flowing across its surface in rivers, lakes and oceans.
"However, the Red Planet is now a freeze-dried desert, with a thin carbon dioxide atmosphere, and only one 99th the atmospheric pressure of sea level on Earth," Professor Webb said.
"The lack of both a significant atmosphere and a global magnetic field - thanks to its mostly solidified core - means the Martian surface is constantly being irradiated by the Sun.
"Any water still on Mars, which hasn't degassed into space and been blown away by the solar wind, or irradiated into hydroxyls on the surface, is either frozen in the planet's ice caps and permafrost, or quickly subducts directly from ice to gas during the local Martian summer."
While there is some evidence pointing to the possible existence of subsurface salt water brines which can seep to the surface, we're yet to find any life on the Red Planet.
"Finally, and much closer to home, we have a third terrestrial world, the Moon, which has virtually no atmosphere, just the hint of a dusty exosphere above its surface, and with the only water being either locked up as ice on the shaded floors of deep craters, or as hydroxyls on the irradiated lunar surface, and definitely no life," Professor Webb said.
Getting it right is hard
"If you want to calculate the average temperature that some exoplanet has, given its distance from its host star, you actually need to know a lot about that exoplanet, including the kind of atmosphere it has, the reflectivity of its clouds, and whether it has any kind of greenhouse effect," he said.
"And the trouble is you actually don't know those things, so the calculations can give you the wrong answers."
Professor Webb said Earth and Venus could be good examples of getting it wrong.
"If you perform a simple calculation for Earth, taking into account the apparent reflectivity of our clouds, that is the sunlight that's being reflected back into space without heating the surface, and you ignore the effect of greenhouse gases, you can actually get the wrong answer and conclude that Earth is not habitable," he said.
"And if you calculate the mean surface temperature of Venus based only on the reflectivity of its cloud cover, then one would expect a surface temperature of minus 10 Celsius, over 470 Celsius less than its actual surface temperature.
"There are at least a dozen or so potentially habitable exoplanets, planets which are in varying degrees similar to Earth," Professor Webb said.
For these reasons, he said we should relax our definition of the Goldilocks or the habitable zone around stars somewhat, or we could miss a major discovery.
"Ultimately when the technology and methodology improves, we will be able to measure any atmosphere around these planets, and that might give us some clue to what's really going on there, but right now these things are very hard to do." |
An inhibitory postsynaptic potential (IPSP) is a signal sent from the synapse of one neuron, or nerve cell, to the dendrites of another. The inhibitory postsynaptic potential changes the charge of the neuron to make it more negatively charged. This makes the neuron less likely to send a signal to other cells.
When a neuron is at rest, or not affected by any signals, it has a negative electrical charge. An inhibitory postsynaptic potential hyperpolarizes the neuron, making its charge even more negative, or further from zero. An excitatory postsynaptic potential depolarizes the neuron, which makes its overall charge more positive, or closer to zero.
Changes in the electrical charge of the neuron are caused when neurotransmitters, chemicals that nerve cells use for signaling, are released from a nearby cell and bind to the neuron. These neurotransmitters cause gated ion channels to open, allowing electrically charged molecules to flow in or out of the cell. An inhibitory postsynaptic potential is caused by either positively charged ions leaving the cell or negatively charged ions entering it.
A neuron is shaped like a tree, with a cell body at the top from which dendrites extend like the branches on a tree. At the other side of the neuron, a long trunk or axon extends toward other neurons. The axon ends in the axon terminals or synapses, which send chemical signals across a space called the synaptic cleft. These chemical signals bond to the dendrites of other neurons and cause excitatory or inhibitory postsynaptic potentials.
A single neuron may receive many signals from other neurons, some excitatory and some inhibitory. These signals are summed spatially and temporally at the axon hillock, a small hill at the beginning of the axon. The farther a signal has to travel to reach the axon hillock, the less effect it will have. Also, the longer the excitatory or inhibitory postsynaptic potential lasts, the greater effect it will have when it reaches the axon hillock.
If there are enough excitatory postsynaptic potentials to make the neuron much more positively charged, it will fire an action potential. An action potential is an electrical signal sent down the axon of the neuron. It causes the synapses at the end of the axon to release neurotransmitters, which send signals to other neurons. Too many inhibitory postsynaptic potentials can cancel out the effect of excitatory potentials, however, and prevent an action potential.
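A minimal sketch of this summation is shown below; the resting potential, threshold, and postsynaptic potential sizes are made-up illustrative numbers, not physiological measurements.

```python
# Excitatory and inhibitory postsynaptic potentials summed at the axon hillock.
RESTING_MV = -70.0
THRESHOLD_MV = -55.0

def fires_action_potential(psps_mv: list) -> bool:
    """Sum EPSPs (positive mV changes) and IPSPs (negative mV changes)
    and compare the resulting membrane potential with threshold."""
    membrane_potential = RESTING_MV + sum(psps_mv)
    return membrane_potential >= THRESHOLD_MV

print(fires_action_potential([+6, +5, +4, +3]))        # True: enough excitation
print(fires_action_potential([+6, +5, +4, +3, -8]))    # False: an IPSP cancels it out
```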
From age two through 8th grade, Stevens students actively engage with scientific methods in a curriculum that encompasses biology, chemistry and physics. Through the consistent practice of inquiry, research and experimentation in each grade, students become increasingly adept at formulating hypotheses, drawing evidence-based conclusions and presenting findings.
Active engagement in science builds other competencies as well. Whether it is testing the melting point of snow in the three-year-olds' classroom or engineering a simple machine in 4th grade, persevering through setbacks and working through a scientific problem to its conclusion helps students become more resilient, innovative thinkers.
Each campus has a dedicated science lab that is utilized for experiments and projects such as the annual Science Expo, a showcase of scientific studies conducted by middle school students. These are some recent hypotheses formulated, investigated and presented by Stevens students at the 2016 Science Expo:
●How does titration help determine the concentration of a solution?
●Which wavelength of light is best for growing Wisconsin fast plants?
●Which chemical reaction is most effective in producing carbon dioxide gas for a car airbag?
●How does temperature affect the formation of a Painted Lady butterfly chrysalis?
●What percent of the population has an abnormal pupillary reflex? |
Forget finding Earth 2.0. Some astronomers aim for more exotic fare, searching for exoplanets around white dwarfs and pulsars.
In roughly 5 billion years, our normal, yellow Sun will puff up into a red giant. Our star will engulf the orbits of some of the inner, rocky planets (whether Earth will survive or not is debated) and lose half its mass in winds before it collapses into a white dwarf remnant even smaller than Earth. Even the outer planets won’t be safe — the Sun’s mass loss could send previously stable planets into wild orbits, perhaps careening into the Kuiper Belt on long, elliptical paths. The prospects for life’s survival after all of that are not great, but let’s say we endured. (Maybe humanity camped out on Saturn’s moon, Titan.) What would we see?
White Dwarfs, Planets, and Asteroids
Finding planetary systems around white dwarfs isn’t easy. Astronomers can’t use the usual planet-hunting methods: white dwarfs are too dim for current telescopes to catch the dip caused by a transiting planet.
Instead, astronomers have to look at the white dwarfs themselves. When a white dwarf forms, it pulls all heavier elements inside its core, leaving an atmosphere of pure hydrogen and helium. Yet heavy elements “pollute” at least a quarter of white dwarf atmospheres. The only way for those elements to get there is if they’re accreted after the white dwarf has already formed. And that requires planets.
A gravitational bump from a planet can send smaller objects, such as asteroids, hurtling inward. When they pass too close to the white dwarf, they shred into dust particles, coalesce into a disk around the stellar remnant, and eventually accrete onto its surface.
Sometimes that disk is visible, and in these cases the connection between pollution and perturbed asteroids is clear-cut. But the disk is only visible in 1% of white dwarfs, and without a disk, the connection between pollution and planets is less clear.
So John Debes (Space Telescope Science Institute) organized a spectroscopic survey of polluted white dwarfs using the Magellan telescope at Las Campanas in Chile. Based on the statistical analysis of 30 white dwarfs, he found that the polluting material has to come from a disk around the stellar cinder. The disk circles the white dwarf tightly enough that all of it lies within the tidal disruption radius, which suggests shredded asteroids are the disk’s source material. Debes’ observations make a connection between pollution and planets in all cases, not just when the disk is visible.
Ben Zuckerman (University of California, Los Angeles) used Keck to find polluted (and therefore likely planet-hosting) white dwarfs in the rich Hyades star cluster in Taurus. Such tightly packed environs are not ideal for planet formation because the close passage of other stars might disrupt planetary systems. Nevertheless, one of the 10 targets (named LP 475-242) showed the pollution Zuckerman was looking for, suggesting the presence of planets. This is the first potential planetary system found in the Hyades cluster.
“I think it’s amazing that we’re seeing planets surviving stellar evolution to show up as planetary systems around these dead stars,” Debes said about Zuckerman’s work. Zuckerman and Debes presented their work yesterday at the American Astronomical Society meeting in Long Beach, California.
Pulsar Planets
White dwarfs aren’t the only stellar cinders to host planets. Pulsars, the spinning compact cores remaining after massive stars explode, host planets too. Pulsar jets act like searchlights from a lighthouse, passing our way at what should be very regular intervals; if the pulses are timed accurately enough, tiny irregularities can reveal the presence of Earth-size planets, and even moon-size objects. Some of these objects survived the stellar explosion, but in other systems, the planets might have formed afterward out of the explosion debris.
Only three pulsar planetary systems have been found so far, and every one has broken a record. The first confirmed exoplanets were found orbiting pulsar PSR B1257+12 in Virgo. The first circumbinary planet was found circling a pulsar and a white dwarf. And last year, in the strangest case of all, pulsar PSR J1719-1438 was found hosting a “diamond” planet, which turned out to be a white dwarf star stripped of its outer layers until it weighed as little as Jupiter.
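The scale of those "tiny irregularities" can be estimated with a short calculation. The planet mass, orbital radius, and pulsar mass below are illustrative values of roughly the same order as the PSR B1257+12 planets, not published measurements.

```python
# Rough sketch of why precise pulse timing can reveal low-mass companions.
C = 2.998e8          # speed of light, m/s
AU = 1.496e11        # astronomical unit, m
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

def timing_residual_amplitude(planet_mass_kg, orbit_radius_m, pulsar_mass_kg):
    """Approximate amplitude (s) of the periodic delay in pulse arrival times
    caused by the pulsar's reflex motion about the common centre of mass
    (an edge-on orbit is assumed)."""
    reflex_radius = orbit_radius_m * planet_mass_kg / pulsar_mass_kg
    return reflex_radius / C

# A ~4 Earth-mass planet at ~0.4 AU around a 1.4 solar-mass pulsar shifts
# pulse arrival times by roughly a millisecond, which is tiny but measurable.
print(timing_residual_amplitude(4 * M_EARTH, 0.4 * AU, 1.4 * M_SUN))
```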
While Kepler stays busy announcing thousands of planet candidates, one might wonder why only three planetary systems have been found around pulsars. Aleksander Wolszczan (Penn State University) explains that’s probably because astronomers haven’t been looking in the right way. It requires very sensitive measurements made very frequently by large telescopes, a combination difficult to achieve in practice. |
2. The changing coastlines of Indonesia
Although there has been geomorphological research on several parts of the Indonesian coastline, the coastal features of Indonesia have not yet been well documented. The following account - based on studies of maps and charts, air photographs (including satellite photographs), reviews of the published literature, and our own traverses during recent years - is a necessary basis for dealing with environmental changes on the coasts of Indonesia. Coastal features will be described in a counter-clockwise sequence around Sumatra, Java, Kalimantan, Sulawesi, Bali and the eastern islands, and Irian Jaya. Inevitably, the account is more detailed for the coasts of Java and Sumatra, which are better mapped and have been more thoroughly documented than other parts of Indonesia. In the course of description, reference is made to evidence of changes that have taken place, or are still in progress.
Measurements of shoreline advance or retreat have been recorded by various authors, and summarized and tabulated by Tjia et al. (1968). Particular attention has been given to changes on deltaic coasts, especially in northern Java (e.g., Hollerwoger 1964), but there is very little information on rates of recession of cliffed coasts. Measurements are generally reported in terms of linear advance or retreat at selected localities, either over stated periods of time or as annual averages, but these can be misleading because of lateral variations along the coast and because of fluctuations in the extent of change from year to year.
Our preference is for areal measurements of land gained or lost or, better still, sequential maps showing the patterns of coastal change over specified periods. We have collected and collated sequential maps of selected sites and brought them up-to-date where possible.
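A small numerical illustration of why annual averages can mislead is given below; the retreat figures are invented for illustration, not measurements from the studies cited.

```python
# Two hypothetical shoreline sectors surveyed over the same five years.
retreat_m_site_a = [2.0, 2.0, 2.0, 2.0, 2.0]     # steady retreat
retreat_m_site_b = [0.0, 0.0, 0.0, 0.0, 10.0]    # a single storm-driven episode

def mean_annual_rate(series_m):
    return sum(series_m) / len(series_m)

# Both sites report the same "2 m per year" average retreat, even though the
# processes at work are quite different.
print(mean_annual_rate(retreat_m_site_a), mean_annual_rate(retreat_m_site_b))
```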
Coastal changes can be measured with reference to the alignments of earlier shoreline features, such as beach ridges or old cliff lines stranded inland behind coastal plains. In Sumatra, beach ridges are found up to 150 kilometres inland. The longest time scale of practical value is the past 6,000 years, the period since the Holocene marine transgression brought the sea up to its present level. Radiocarbon dating can establish the age of shoreline features that developed within this period, and changes during the past few centuries can be traced from historical evidence on maps and nautical charts of various dates.
These have become increasingly reliable over the past century, and can be supplemented by outlines shown on air photographs taken at various times since 1940. Some sectors have shown a consistent advance, and others a consistent retreat; some have alternated. A shoreline sector should only be termed "advancing" if there is evidence of continuing gains by deposition and/or emergence, and "retreating" if erosion and/or submergence are still demonstrably in progress (Fig. 4).
Coastal changes may be natural, or they may be due, at least in part, to the direct or indirect effects of Man's activities in the coastal zone and in the hinterland. Direct effects include the building of sea walls, groynes, and breakwaters, the advancement of the shoreline artificially by land reclamation, and the removal of beach material or coral from the coastline. Indirect effects include changes in water and sediment yield from river systems following the clearance of vegetation or a modification of land use within the catchments, or the construction of dams to impound reservoirs that intercept some of the sediment flow. There are many examples of such man-induced changes on the coasts of Indonesia.
Reference will also be made to ecological changes that accompany gains or losses of coastal terrain, and to some associated features that result from man's responses to changes in the coastal environment.
Incidental references to some of the coastal features of Sumatra were included in Verstappen's (1973) geomorphological reconnaissance, but there has been no systematic study of this coastline. Verstappen's geomorphological map (1:2,500,000) gives only a generalized portrayal of coastal features: it does not distinguish cliffed and steep coasts, the extent of modern beaches, fringing reefs, or mangrove areas, but it does indicate several sectors where Holocene beach ridge plains occur.
Sumatra is 1,650 kilometres long and up to 350 kilometres wide, with an anticlinal mountain chain and associated volcanoes bordered to the east by a broad depositional lowland with extensive swamp areas along the Straits of Malacca. Off the west coast the Mentawai Islands constitute a "non-volcanic arc," consisting of uplifted and tilted Tertiary formations, their outer shores being generally cliffed -facing the predominant south-westerly swell transmitted across the Indonesian Ocean-while the inner shores are typically lower and more indented, with embayments fringed by mangroves. There are emerged coral reefs and beach ridges, especially on the outer shores, and the possibility of continued tilting is supported by the disappearance of islets off the coast of Simalur even within the present century (according to Craandijk 1908: quoted by Verstappen 1973). There are, however, contrasts between the islands, the relatively high island of Nias (summit 886 metres) being encircled by emerged reef terraces suggestive of uplift with an absence of tilting, while Enggano is tabular, steep-sided, and reef-fringed. Much more detailed work is needed to establish the evolution of these island coasts, and the effects of recurrent earthquakes and tsunami. At this stage, no information is available on rates and patterns of shoreline changes taking place here.
The south-west coast of mainland Sumatra is partly steep along the fringes of mountainous spurs, and partly low-lying, consisting of depositional coastal plains. Swell from the Indonesian Ocean is interrupted by the Mentawai Islands and arrives on the mainland coast in attenuated form. It is stronger to the north of Calang, where there are surf beaches bordering the blunted delta of the Tuenom River, and south-east of Seblat, where there are steep promontories between gently curving sandy shorelines backed by beach ridges and low dunes, interrupted by such blunted deltas as the Mana, the Seblat, and the Ketuan.
Coral reefs are rare along the central part of the south-west coast of Sumatra because of the large sediment yield from rivers draining the high hinterland, but to the south there are reef-fringed rocky promontories. Pleistocene and Holocene raised beaches and emerged coral reefs are also extensive, especially on headlands near Krui and Bengkulu, where reefs raised 30 metres above the present sea level have been truncated by the recession of steep cliffs. Farther south the coast shows the effects of vulcanicity on the slopes of Rajabasa. The Krakatau explosion of 1883 generated a tsunami that swept large coral boulders onshore and produced a fallout of volcanic ash that blanketed coastal features and augmented shore deposits. Near Cape Cina the steep coasts of Semangka Bay and Tabuan Island are related to en echelon fault scarps that run north-west to south-east, and the termination of the coastal plain near Bengkulu may also result from tectonic displacement transverse to this coastline. Farther north, the Indrapura River turns parallel to the coast to follow a swale behind beach ridges before finding an eventual outlet to the sea with the Batang River.
Padang is built on beach ridges at the southern end of a coastal plain that stretches to beyond Pariaman. The extensive shoreline progradation that occurred here in the past has evidently come to an end, for there are sectors of rapid shoreline erosion in Padang Bay, where groynes and sea walls have been built in an attempt to conserve the dwindling beach. North of Pariaman the cliffed coast intersects the tuffs deposited from the Manindjau volcano, and farther north there is another broad swampy coastal plain, with associated beach ridges built by wave action reworking fluvially supplied sediment derived from the andesite cones, Ophir and Malintang, in the hinterland. Towards Sirbangis this plain is interrupted by reef-fringed headlands of andesite on the margins of a dissected Pleistocene volcano. Beach erosion has become prevalent in the intervening embayments between here and Natal, and Verstappen (1973) suggested that the swampy nature of the coastal plain here could be due to recent subsidence, which might also explain the present recession of the coast. Broader beach ridge plains occur farther north, interrupted by Tapanuli Bay, which runs back to the steep hinterland at Sibolga. Musala Island, offshore, is another dissected volcano. Next comes the broad lowland on either side of the swampy delta of the Simpan Kanang, in the lee of Banyak Island, and beyond this the coast is dominated by sandy surf beaches, backed in some sectors by dune topography, especially in the long, low sector that extends past Meulaboh.
At the northern end of Sumatra the mountain ranges break up into steep islands with narrow straits scoured by strong tidal currents. Weh Island is of old volcanic rocks, terraced and tilted, with emerged coral reefs up to 100 metres above sea level. Uplifted reefs are also seen on some of the promontories of the northern Sumatran mainland. At Kutaraja the Aceh River has filled an intermontane trough, but the deltaic shoreline has been smoothed by waves from the north-west, coming down the Bengalem Passage between Weh and Peunasu islands, so that the mouths of distributary channels have been deflected behind sand spits and small barrier islands. Beach ridges built of fluvially supplied sediment form intersecting sequences near Cape Intem, where successive depositional plains have been built and then truncated, and there is an eastward drift of beach material along the coast towards Lhokseumawe.
Within this sector Verstappen (1964a) examined the coastal plain near the mouth of the Peusangan River. He concluded that a delta had been built out north of Bireuen, only to be eroded after the Peusangan was diverted by river capture 8 kilometres to the south (Fig. 5). Following this capture, the enlarged river has built a new delta to the east. Patterns of truncated beach ridges on the coastal plain commemorate the shorelines of the earlier delta, which also retains traces of abandoned distributary channels and levees on either side of a residual creek, the Djuli. At the point of capture the Peusangan valley has since been incised about 20 metres, but the old delta was clearly built with the sea at its present level, and so piracy must have taken place within the past 6,000 years, after the Holocene marine transgression had brought the sea up to this level. The new delta has developed in two stages (A, B in Fig. 5), the first indicated by converging beach ridges on either side of an abandoned river channel, the second farther east, around the present mouth. Dating of these beach ridges could establish rates of coastal advance and retreat in this area, and show when the river piracy took place.
South-east from Cape Diamant the low-lying swampy shores of the Straits of Malacca have sectors of narrow sandy beach interspersed with mudflats backed by mangroves, which also fringe the tidal creek systems to the rear. As the Straits narrow the tide ranges increase, and river mouths become larger, funnel-shaped estuaries bordered by extensive swamps instead of true deltas. The widest estuary is that of the Kampar River, where the tide range is sufficient to generate tidal bores that move rapidly upstream. The river channels are fringed by natural levees, and patterns of abandoned levees may be traced throughout the swamps. Locally there has been tectonic subsidence, marked by the formation of lakes amid the swamps, as on either side of the Siak Kecil River and south of the meandering Rokan estuary, where lakes that formed along an abandoned river channel as it was enlarged by subsidence are now shrinking as a result of swamp encroachment.
FIG. 5 Changes near the mouth of the Peusangan River, northern Sumatra, following its diversion by capture. Beach-ridge patterns indicate the trend of an old delta, now eroded, north of Bireuen and two stages in development of a new delta to the east: at A a lobe that has been truncated by erosion, and at B a developing modern delta (based on Verstappen 1973)
In the narrower part of the Straits of Malacca there are elongated shoal and channel systems, and some of the shoals have developed into swampy islands, as on either side of the broad estuary of the Mampar. Verstappen (1973) suggested that the Bagansiapiapi Peninsula and the islands of Rupat, Bengkalis, and Tebingtinggi may be due to recent tectonic uplift, and the Rupat and Pajang Straits to alignments of corridor subsidence. The islands have extensive swamps, but their northern and western coasts are fringed by beach ridges possibly derived from sandy material on the sea floor during the shallowing that accompanied emergence. Farther south the Indragiri and Batanghari estuaries traverse broad swamp lands, in which they have deposited large quantities of sediment derived from the erosion of tuffs from volcanoes in their headwater regions. These very broad swamp areas have developed in Holocene times with the sea at, or close to, its present level. The rapidity of their progradation may be related to several factors: an abundance of fluvial sediment yield derived from the high hinterland by runoff under perennially warm and wet conditions; the luxuriance of swamp vegetation, which has spread rapidly forward to stabilize accreting sediment, and has also generated the extensive associated peat deposits; and the presence of a broad, shallow, shelf sea, on which progradation may have been aided by tectonic uplift.
In eastern Sumatra, progradation appears to have been very rapid within historical times, but there is not yet sufficient information to permit detailed reconstruction and dating of the shoreline sequences. Studies of early maps, the accuracy of which is uncertain, and interpretations of descriptions by Chinese, Arab, and European travellers led Obdeijn (1941) to suggest that there had been progradation of up to 125 kilometres on the Kuantan delta since about 1600 AD. In further papers, Obdeijn (1942a, 1942b, 1943, 1944) found supporting evidence for extensive shoreline progradation along the Straits of Malacca and in southern Sumatra. In the fifteenth century Palembang, Djambi, and Indragiri were ports close to the open sea or a short distance up estuarine inlets (Van Bemmelen 1949). More recently, the shoreline of the Djambi delta prograded up to 7.5 kilometres between 1821 and 1922, while on the east coast the fishing harbour of Bagansiapiapi has silted up, and the old Sri Vijayan ports are now stranded well inland (Verstappen 1960, 1964b).
Witkamp (1920) described hillocks up to 4 metres high occupied by kitchen middens containing marine shell debris and now located over 10 kilometres inland near Serdang, but these have not been dated. Tjia et al. (1968) quoted various reports of beach ridges up to 150 kilometres inland at Air Melik and Indragiri, marking former shorelines, but such features are sparser on these swampy lowlands than on the deltaic plains of northern Java. Commenting on the rarity of beach ridges, Verstappen (1973) suggested that the sandy loads of the rivers are largely deposited upstream, so that only finer sediment reaches the coast to be deposited in the advancing swamp lands. Some beach ridges were derived from sediment eroded from the margins of drier "red soil" (talang), particularly around former islands now encircled by swamps, as in the Mesuji district. If progradation has been aided by emergence one would expect beach ridges to be preserved as surface features, for where progradation has been accompanied by subsidence (as on most large deltas) the older beach ridges are found buried as sand lenses within the inner delta stratigraphy. The Holocene evolution of the lowlands of eastern Sumatra still requires more detailed investigation, using stratigraphic as well as geomorphological evidence.
Patterns of active erosion and deposition alongside the estuaries north of Palembang have been mapped by Chambers and Sobur (1975). The changes are due partly to estuarine meandering, with undercutting of the outer banks on meander curves as the inner banks are built up. Towards the sea there has been swamp encroachment, for example along the Musi-Banjuasin estuary, which is bordered by low natural levees breached by orthogonal tributary creeks. The shoreline on the peninsula north of Sungsang is advancing seawards, and there is active progradation along much of the southern coast of Bangka Strait.
Bangka Island rises to a steep-sided plateau with a granite interior: like the Riau and Lingga islands to the north it is geologically a part of the Malaysian Peninsula. Pleistocene terraces occur up to 30 metres above present sea level on Bangka, and its northern and eastern shores have coral-fringed promontories and bays backed by sandy beach ridges, but the southern shores, bordering Bangka Strait, are low and swampy, with mangrove-fringed channels opening on to shoaly seas. Belitung is morphologically similar, but has more exposed coasts, with sandy beach-ridge plains extensive south of Manggar on the east coast, facing the south-easterly waves from the Java Sea. Both islands have tin-bearing alluvial deposits in river valleys and out beneath the sea floor, where such valleys were extended and incised during glacial low sea-level phases and were submerged and infilled as the sea subsequently rose.
South of Bangka the east-facing coast of Sumatra consists of beach ridges backed by swamps and traversed by estuaries. Lobate salients such as Cape Menjangan and Cape Serdang are beach-fringed swamps rather than deltas, but beach ridges curve inland behind swamps on either side of the Tulangbawang River, where progradation has filled an estuarine gulf. At Telukbetung the lowlands come to an end as mountain ranges intersect the coast in steep promontories bordering Sunda Strait.
In 1883 the explosion of Krakatau, an island volcano in Sunda Strait (Fig. 6), led to the ejection of about 18 cubic kilometres of pumice and ash, leaving behind a collapsed caldera of irregular outline, more than 300 metres deep and about 7 kilometres in diameter (Fig. 7).
FIG. 6 Krakatau, an island volcano in Sunda Strait which exploded in 1883, leaving three residual islands around a deeper submerged crater, within which a new volcano, Anak Krakatau, has formed
FIG. 7 Krakatau and adjacent areas before and immediately after the explosive eruption in August 1883
The collapse caused a tsunami up to 30 metres high on the shores of Sunda Strait and surges of lesser amplitude around much of Java and Sumatra (Verbeek 1886). Marine erosion has cut back the cliffs produced by the explosive eruption: at Black Point on Pulau Krakatau-Ketjil, cliffs cut in pumice deposited during the 1883 eruption had receded up to 1.5 kilometres by 1928 (Umbgrove 1947). Since 1927 a new volcanic island, Anak Krakatau, has been growing in the centre of the caldera, with phases of rapid enlargement and outward progradation in the 1940s and early 1960s (Zen 1969).
Sunda Strait is bordered by volcanoes, the coast consisting of high volcanic slopes, with sectors of coral reef, some of which have developed rapidly in the century since the Krakatau explosion destroyed or displaced their predecessors. Panaitan Island consists of strongly folded Tertiary sediments, with associated volcanic rocks, and has a sandy depositional fringe around much of its shoreline. Similar rocks form the higher western part (Mount Payung) of the peninsula of Ujong Kulon, the rest consisting of a plateau of Mio-Pliocene sedimentary rocks. This peninsula is a former island, attached to the mainland of Java by a depositional isthmus (Verstappen 1956). It is cliffed on its south-western shores, but the southern coast has beaches backed by parallel dune ridges up to 10 metres high, covered by dense Pandanus scrub, the beach curving out to attach a coral island at Tereleng as a tombolo. The northwest coast has cliffs up to 20 metres high, passing into bluffs behind a coral reef that lines the shore past Cape Alang and into Welkomst Bay. Volcanic ash and negro heads on and behind this reef date from the Krakatau explosion, when a tsunami washed over this coast. Verstappen (1956) found that notches up to 35 centimetres deep had been excavated by solution processes and surf swash on the coral boulders thrown up onto this shore in 1883. This is rapid compared with solution notching measured by Hodgkin (1970) at about 1 millimetre/year on tropical limestone coasts. Within Welkomst Bay there are mangrove sectors, prograding rapidly on the coast in the lee of the Handeuleum reef islands. The geomorphological features of Sunda Strait deserve closer investigation, with particular reference to forms that were initiated by catastrophic events almost a century ago (cf. Symons 1888).
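The contrast between these two rates can be made explicit with a short calculation. The Python sketch below restates only the figures quoted above; the assumption that the notches were measured at about the time of Verstappen's 1956 survey is mine and is not stated in the text.

```python
# Rough comparison of notch-deepening rates on the boulders thrown ashore in 1883
# with the solution rate quoted from Hodgkin (1970) for tropical limestone coasts.
# Assumption: the 35 cm notches were observed around the time of Verstappen's
# 1956 survey, i.e. roughly 73 years after the 1883 tsunami.

notch_depth_mm = 350                  # deepest notches, in millimetres
years_elapsed = 1956 - 1883           # assumed interval between emplacement and observation

boulder_rate = notch_depth_mm / years_elapsed    # mm per year on the 1883 boulders
hodgkin_rate = 1.0                               # mm per year (Hodgkin 1970)

print(f"Notch deepening on 1883 boulders: {boulder_rate:.1f} mm/yr")
print(f"Hodgkin (1970), tropical limestone: {hodgkin_rate:.1f} mm/yr")
print(f"Ratio: roughly {boulder_rate / hodgkin_rate:.0f} times faster")
```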
An island about 1,000 kilometres long and up to 250 kilometres wide, Java is threaded by a mountain range which includes several active volcanoes. To the north are broad deltaic plains on the shores of the Java Sea; to the south steeper coasts, interrupted by sectors of depositional lowland, face ocean waters.
FIG. 8 The coastal outline of north-western Java as shown on 1883-1885 topographic maps (above) and on 1976 Landsat imagery (below)
The west coast of Java is generally steep, except for the Bay of Pulau Liwungan, where the Ciliman River enters by way of a beach-ridge plain. Near Merak the coast is dominated by the steep slopes of the Karang volcano, which descend to beach-fringed shores. Panjang and Tunda islands, offshore, are of Miocene limestone, but the shores of Banten Bay are low-lying and swampy, with some beach ridges, widening to a deltaic plain of the Ciujung River. This marks the beginning of the extensive delta coastline built by the silt-laden rivers of northern Java. There are protruding lobes of deposition around river mouths and intervening sectors of erosion, especially where a natural or artificial diversion of the river has abandoned earlier deltaic lobes, or sediment yield has been reduced by dam construction. A patchy mangrove fringe persists, although mangroves have been widely removed in the course of constructing tambak (brackish-water fishponds), and in places these are being eroded. Some sectors are beach-fringed, and the prevalence of north-easterly wave action generates a westward drifting of shore sediment. Fig. 8 shows the pattern of change on the north coast of West Java detected from comparisons of maps drawn between 1883 and 1885 and 1976 Landsat imagery: there has been seaward growth of land in the vicinity of river mouths, and smoothing and recession of the shoreline in intervening sectors.
There was rapid progradation of the Ciujung delta after the diversion of its lower course for irrigation and flood-control purposes. Growth of the new delta led to the joining of Dua, a former island, to the Javanese mainland, and this has raised problems of wildlife management, for the island had been declared a bird sanctuary in 1973, before it became so readily accessible from the land. Immediately to the west there have been similar changes on the Cidurian delta since 1927, when an irrigation canal was cut, and a new outlet established 4.5 kilometres west of the old natural river mouth. Comparison of outlines on air photographs showed that over an 18-year period the new delta built up to 2.5 kilometres seawards at the mouth of the artificial outlet, while the old delta lobe to the east was cut back by wave action which removed the mangrove fringe and eroded fishponds to the rear (Verstappen 1953a).
Changes have also taken place on the large and complex delta built by the Cisadane River. Natural breaching of levees by floodwaters led to the development of a new outlet channel, and when delta growth began at the new outlet the delta previously built around the old river mouth began to erode, the irregular deltaic shoreline being smoothed as it was cut back (Verstappen 1953a).
Numerous coral reefs and coralline islands (the Thousand Islands) lie off Jakarta Bay, and many of these have shown changes in configuration during the past century. As a sequel to the studies by Umbgrove (1928, 1929a, 1929b), Zaneveld and Verstappen (1952) traced changes with reference to maps made in 1875, 1927, and 1950.
Some of the islands, such as Haarlem, have grown larger as the result of accretion on sand cays and shingle ramparts, but there are also sectors where there has been erosion or lateral displacement of such features on island shorelines. In general the shingle ramparts have developed around the northern and eastern margins, exposed to relatively strong wave action, while the sand cays lie to the south-west, in more sheltered positions. Verstappen (1954) found changes in the position of shingle ramparts on these islands before and after 1926, which he related to climatic variations. In the years 1917-1926 easterly winds predominated, with the ITC in a relatively northerly position because the Asian anticyclone was weak, and wave action built ramparts on the northern and eastern shores; after 1926 westerly winds became dominant, with the ITC farther south because of stronger Asian anticyclonicity, and waves built new ramparts of shingle on the western shores (Verstappen 1968).
There is evidence of subsidence on some of the coral islands, such as Pulau Pugak, where nineteenth-century bench-marks have now sunk beneath the sea, while others have emerged: Alkmaar Island, for example, has a reef above sea level undergoing dissection. Some of the islands have been modified by the quarrying of coral limestone for use in road-making and building in Jakarta. This quarrying augmented the supply of gravel to shingle ramparts, but several islands that were quarried have subsequently been reduced by erosion: Umbgrove (1947) quoted the example of Schiedam, a large low-wooded island on a 1753 chart, reduced to a small sand cay by the 1930s.
The features of Jakarta Bay were described in a detailed study by Verstappen (1953a). The shores are low-lying, consisting of deltaic plains with a mangrove fringe interrupted by river mouths and some sectors of sandy beach. Between the periods 1869-1874 and 1936-1940 as much as 26 square kilometres of land was added to the bay shores by deltaic progradation, mainly on the eastern shores (Fig. 9). Detailed comparisons of maps made between 1625 and 1977 show the pattern of advance at Sunda Kelapa, Jakarta (Fig. 10). Inland, patterns of beach ridges mark earlier alignments of the coast during its irregular progradation, the variability of which has been related to fluctuations in the position of river mouths delivering sediment (Fig. 11). The beach ridges diverge from an old cuspate foreland at Tanjung Priok, across the deltaic plains of the Bekasi-Cikarang and Citarum rivers to the east.
FIG. 9 The extent of accretion and abrasion on the shores of Jakarta Bay between the periods 1869-1874 and 1936-1940 (based on Verstappen 1953a)
FIG. 10 The pattern of coastal advance at Sunda Kelapa, Jakarta, between 1625 and 1977
Pardjaman (1977) published a map based on a comparison of nautical charts made in 1951 and 1976 which showed substantial accretion along the eastern shores of the bay, especially alongside the mouths of the Bekasi and Citarum rivers. This was accompanied by shallowing off the mouths of these rivers. Along the southern shores at Jakarta a fringe of new land a kilometre wide has been created artificially for recreational use by reclaiming the mangrove zone and adjacent mudflats. On the other hand, removal of lorry-loads of sand from Cilincing Beach resulted in accelerated shoreline erosion. In the 65 years between 1873 and 1938 the shoreline retreated about 50 metres, but in the 24 years between 1951 and 1975, with sand extraction active, it went back a further 600 metres (Pardjaman 1977).
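These figures translate into markedly different average retreat rates. The short sketch below simply converts the distances and intervals quoted from Pardjaman (1977) into metres per year; no values beyond those given above are assumed.

```python
# Mean shoreline retreat rates at Cilincing Beach, before and after large-scale
# sand extraction, using only the distances and dates quoted from Pardjaman (1977).

periods = [
    ("1873-1938 (before extraction)", 50.0, 1938 - 1873),    # metres retreated, years
    ("1951-1975 (extraction active)", 600.0, 1975 - 1951),
]

for label, retreat_m, years in periods:
    print(f"{label}: {retreat_m / years:.1f} m/yr over {years} years")
```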
East of Jakarta the Citarum River drains an area of about 5,700 square kilometres, including mountainous uplands, plateau country, foothills, and a wide coastal plain, with beach ridges up to 12 kilometres inland. It has built a large delta (Fig. 12), which in recent decades has grown north-westwards at Tanjung Karawang, with subsidiary growth northwards and southwards at the mouths of the Bungin and Blacan distributaries. At present the river heads north from Karawang, and swings to the north-west at Pingsambo, but at an earlier stage it maintained a northward course to build a delta in the Sedari sector. This has since been cut back, leaving only a rounded salient, along the shores of which erosion is continuing (Plate 1). The shores are partly beach-fringed, the beaches showing the effects of westward longshore drifting, which builds spits that deflect creek mouths in that direction. Eroding patches of mangrove persist locally and, north of Sungaibuntu, there is erosion and dissection of fishponds (Plate 2).
FIG. 11 The beach-ridge pattern in the hinterland to the south of Jakarta Bay (data from Verstappen 1953a and the Geological Survey of Indonesia 1970)
FIG. 12 The Citarum delta, showing former courses of the river and the pattern of beach ridges indicative of earlier shorelines (based on Verstappen 1953a)
According to Verstappen (1953a) the Citarum delta prograded by up to 3 kilometres between 1873 and 1938, although sectors of the eastern shore of Jakarta Bay retreated by up to 145 metres. After the completion of the Jatiluhur Dam upstream in 1970 a marked slackening of the rate of progradation of the deltaic shoreline was noted at the mouth of the Citarum. By contrast, growth on the neighbouring Bekasi delta accelerated after 1970. It was concluded that dam construction had diminished the rate of sediment flow down the Citarum River, because of interception of silt in the impounded reservoir, whereas the sediment yield from the undammed Bekasi River had increased. Such reduction of the rate of progradation has been widely recognized on many deltaic shorelines following dam construction within their catchments, and the onset of delta shoreline erosion is a phenomenon that has also been documented widely around the world's coastlines (Bird 1976). There is little doubt that the rate and extent of delta shoreline progradation will diminish, and that shoreline erosion will accelerate and become more extensive, as further dams are built in the catchments of the rivers of northern Java. This will be accompanied by increasing penetration of brackish water into the river distributaries and the gradual spread of soil salinization into deltaic lands.
East of the Citarum delta are the extensive depositional plains built up by the Cipunegara River (Fig. 13). The Cipunegara has a catchment of about 1,450 square kilometres, with mountainous headwater regions, carrying relics of a natural deciduous rain forest and extensive tea plantations; a hilly central catchment with teak forest, rubber plantations, and cultivated land; and a broad coastal plain bearing irrigated ricefields. The river meanders across this plain, branching near Tegallurung, where the main stream runs northwards and a major distributary, the Pancer, flows to the north-east. An 1865 map shows the Cipunegara opening through a large lobate delta, the Pancer having a smaller delta to the east, but when topographical maps were made in 1939 the Pancer had developed two large delta lobes extending 3 to 4 kilometres out into the Java Sea while the Cipunegara delta had been truncated, with shoreline recession of up to 1.5 kilometres. Aerial photographs taken in 1946 showed further advance on the Pancer delta, and continued smoothing of the former delta lobe to the west (Hollerwoger 1964). Tjia et al. (1968) confirmed this sequence with reference to the pattern of beach ridges truncated on the eastern shores of Ciasem Bay and the 1976 Landsat pictures show that a new delta has been built out to the north-east (Fig. 14). Along the coast the mangrove fringe (mainly Rhizophora) has persisted on advancing sectors but elsewhere has been eroded or displaced by the construction of fishponds.
FIG. 13 Stages in the evolution of the Cipunegara delta since 1865 (including data from Hollerwoger 1964)
FIG. 14 The deltaic coastline east and west of the Cipunegara showing the pattern of beach ridges indicative of stages in shoreline evolution (based on Tjia et al. 1968)
Third in the sequence of major deltas east of Jakarta is that built by the Cimanuk River (Fig. 15). The Cimanuk and its tributaries drain a catchment of about 3,650 square kilometres, the headstreams rising on the slopes of the Priangan mountain and the Careme volcano, which carry rain forest and plantations. There has been extensive soil erosion in hilly areas of the central catchment following clearance of the forest and the introduction of grazing and cultivation, particularly in the area drained by the Cilutung tributary (Van Dijk and Vogelzang 1948). The Cimanuk thus carries massive loads of silty sediment down to the coast: of the order of 5 million tonnes a year (Tjia et al. 1968). The broad coastal plain bears extensive rice-fields, with fishponds and some residual mangrove fringes along the shoreline to the north. The river meanders across this plain, the distributary Rambatan diverging north-westwards near Plumbon.
Hollerwoger (1964) traced changes on the delta shoreline with reference to maps made in 1857, 1917, and 1935, and air photographs taken in 1946. Examination of beach-ridge patterns, marking successive shorelines, shows that before 1857 the Cimanuk took a more northerly course and built a delta lobe (Fig. 16). By 1857 this was in course of truncation, and the Cimanuk mouth had migrated westwards to initiate a new deltaic protrusion. Between 1857 and 1917 large delta lobes were built by the Cimanuk and the Rambatan, but an irrigation channel, the Anyar Canal, had been cut from Losarang to the coast, diminishing the flow to the Rambatan, and a new delta began to grow at the canal mouth, out into the embayment between the Cimanuk and Rambatan deltas. By 1935 this embayment had been filled, the shoreline having advanced about 6 kilometres in 17 years, while erosion had cut back the adjacent Rambatan delta. Continued growth occurred at the mouths of the Anyar Canal and the Cimanuk between 1935 and 1946, by which time the Rambatan delta shoreline had retreated up to 300 metres.
During a major flood in 1947 the Cimanuk established a new course north-east of Indramayu, and a complex modern delta has since grown here (Plate 3). Stages in the evolution of this modern delta are shown in Fig. 17. At first there was only a single channel, but three main distributaries (the Pancer Balok, Pancer Payang, and Pancer Song) have developed as the result of levee crevassing, and each of these shows further bifurcations resulting from channel-mouth shoal formation, as well as the cutting of artificial lateral outlet channels (Tjia 1965; Hehanussa et al. 1975; Hehanussa and Hehuwat 1979). Since 1974 the Pancer Balok has replaced the Pancer Payang as the main outlet. Erosion has continued on the northern lobe, where the present coastline shows an enlargement of tidal creeks, probably the result of compaction subsidence.
On the east coast, south of Pancer Song, there has been erosion in recent decades. Sand drifting northwards has been intercepted by the oil terminal jetty at Balongan, and the shoreline north of the jetty is retreating rapidly. According to Purbohadiwidjojo (1964), Cape Ujung, to the south, was an ancient delta lobe, but there is no evidence that any channel led this way. Tjia (1965) suggested that it might be related to a buried reef structure, but there is no evidence of this either. In fact, the cuspate promontory is situated where one of the earlier beach ridges has been truncated by the present shoreline. Patterns on the 1976 Landsat picture suggest that the cape is at the point of convergence of two current systems in the adjacent sea area, but it is not clear whether the pattern is a cause or a consequence of the present coastal configuration.
FIG. 15 Evolution of the Cimanuk delta between 1857 and 1974. The modern delta (Fig. 17) is north-east of Indramayu (based on Hehanussa et al. 1975).
FIG. 16 Earlier shorelines of the Cimanuk delta as indicated by beach-ridge alignments
FIG. 17 Stages in the growth of the modern Cimanuk delta between 1947 and 1976 (based on Hehanussa and Hehuwat 1979)
FIG.18 Growth of the Bangkaderes delta since 1853 (including data from Hollerwoger 1964)
FIG. 19 Growth of the Sanggarung and Bosok deltas since 1857 (including data from Hollerwoger 1964)
FIG. 20 Growth of the Pemali delta since 1865 (including data from Hollerwoger 1964)
Although it has only a relatively small catchment (250 square kilometres) the Bangkaderes has built a substantial delta (Fig. 18) on the coast south-east of Cirebon. This is because of its large annual sediment load, derived from a hilly catchment where severe soil erosion has followed forest clearance and the introduction of farming. An 1853 map showed a small lobate delta but by 1922 two distributary lobes had been built, advancing the shoreline by up to 2.7 kilometres. Air photographs taken in 1946 show continued enlargement of the eastern branch, extended by up to 1.8 kilometres seawards, and erosion of the western branch, which no longer carried outflow (Hollerwöger 1964).
A few kilometres to the east are the Sanggarung and Bosok deltas (Fig. 19). The Sanggarung has a catchment of 940 square kilometres, and rises on the slopes of the volcanic Mt. Careme. The headwater regions are steep, with forest and partly farmed land, and the coastal plain consists largely of rice-fields, with fishponds to seaward and some mangrove fringes. An 1857 survey showed a delta built out north-eastwards along the Bosok distributary; between 1857 and 1946 deposition filled in the embayment to the east, on either side of the Sebrongan estuary, and there was minor growth on the Bosok delta, while to the north-west the Sanggarung built out a major deltaic feature, with several distributaries leading to cuspate outlets. The coastal lowland here has thus shown continuing progradation of a confluent delta plain, without the alternations that occur as the result of natural or artificial diversion of river mouths (Hollerwoger 1964).
The Pemali delta (Fig. 20) also showed consistent growth between an 1865 survey, 1920 mapping, and 1946 air photography (Hollerwöger 1964). The river drains a catchment of about 1,200 square kilometres, with forested mountainous headwater regions and extensive hilly country behind the swampy coastal plain. The delta grew more rapidly between 1920 and 1946 than it had over the 56 years preceding the 1920 survey, possibly because of accelerated soil erosion in hilly country as the result of more intensive farming.
The growth of the Comal delta to the east has shown fluctuations (Fig. 21). When it was mapped in 1870 the Comal (catchment area of about 710 square kilometres) was building a lobate delta to the north-west, but by 1920 growth along a more northerly distributary had taken place. The river then developed an outlet towards the north-east, leading to the growth of a new delta in this direction by the time air photographs were taken in 1946. The earlier lobes to the west had by then been truncated. In this, as in the other north Java deltas, growth accelerated after 1920, probably as a result of increasing soil erosion due to intensification of farming within the hilly hinterland (Hollerwöger 1964).
The Bodri delta (Fig. 22) is the next in sequence. The Bodri River rises on the slopes of the Prahu volcano, and drains a catchment of 640 square kilometres. Again the mountainous headwater region backs a hilly area, with a depositional coastal plain, mainly under rice cultivation. An 1864 survey shows the Bodri opening to the sea through a broad lobate delta, which had grown northwards to Tanjung Korowelang at the mouths of two distributaries when it was remapped in 1910. Thereafter a new course developed, probably as the result of canal-cutting to the north-east, and by 1946, when air photographs were taken, a major new delta had formed here, prograding the shoreline by up to 4.2 kilometres. Meanwhile, the earlier delta at Tanjung Korowelang had been truncated and the shoreline smoothed by erosion (Hollerwoger 1964).
East of Semarang large-scale progradation is thought to have taken place in recent centuries. Demak, a sixteenth-century coastal port, is now about 12.5 kilometres inland behind a prograded deltaic shoreline. Continuing progradation is indicated by the small delta growing at the mouth of a canal cut from the River Anyar to the sea, but otherwise the coastline north to Jepara is almost straight at the fringe of a broad depositional plain. According to Niermeyer (1913: quoted by Van Bemmelen 1949) the Muria volcano north-east of Demak was still an island in the eighteenth century, when seagoing vessels sailed through the strait that separated it from the Remang Hills, a strait now occupied by marshy alluvium. This inference, however, needs to be checked by geomorphological and stratigraphical investigations.
FIG. 21 Growth of the Comal delta since 1870 (including data from Hollerwoger 1964)
FIG. 22 Growth of the Bodri delta since 1864 (including data from Hollerwöger 1964)
FIG. 23 Shoreline changes south of Jepara between 1911 and 1972, showing the evolution of the delta at the mouth of Wulan canal
The shoreline of the Serang delta, south of Jepara, changed after the construction of the Wulan Canal in 1892, which diverted the sediment yield from the Kedung River to a new outlet, around which a substantial new delta has been formed. In 1911 this was of cuspate form, but by 1944 it was elongated, and by 1972 it had extended in a curved outline northwards, branching into three distributaries (Fig. 23). Between 1911 and 1944 the new delta gained 297 hectares, and from 1944 to 1972 a further 385 hectares, including beach-ridge systems and a seaward margin adapted for brackish-water fishponds.
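Expressed as average rates, these gains suggest a modest acceleration of accretion at the canal mouth. The sketch below converts the quoted areas and survey intervals into hectares per year, using no values beyond those given above.

```python
# Mean areal growth rates of the delta at the mouth of the Wulan canal,
# from the gains in area quoted in the text.

intervals = [
    ("1911-1944", 297.0, 1944 - 1911),   # hectares gained, years
    ("1944-1972", 385.0, 1972 - 1944),
]

for label, hectares, years in intervals:
    print(f"{label}: {hectares / years:.1f} ha/yr")
```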
Beyond Jepara the coast steepens on the flanks of Muria, but the shores are beach-fringed rather than cliffed. To the east the Juwana River opens on to the widening deltaic plain behind Rembang Bay, but at Awarawar the coast consists of bluffs cut in Pliocene limestone. Tuban has beaches and low dunes of quartzose sand, supplied by rivers draining sandstones in the hinterland, but otherwise the beaches on northern Java are mainly of sediments derived from volcanic or marine sources. Hilly country continues eastwards until the protrusion of the Solo River delta.
The modern Solo delta (Fig. 24) has been built out rapidly from the coast at Pangkah since a new artificial outlet from this river was cut at the beginning of the present century (Verstappen 1977). Comparisons of the outlines of the Solo delta shown on 1:50,000 topographical maps made in 1915 and 1936 and on air photographs taken in 1943 and 1970 indicated seaward growth of 3,600 metres between 1915 and 1936, a further 800 metres between 1936 and 1943, and 3,100 metres between 1943 and 1970; in areal terms the delta increased by 8 square kilometres in the first period, 1 square kilometre in the second, and a further 4 square kilometres in the third (Verstappen 1964a, 1977). The rate of progradation of such a delta depends partly on the configuration of the sea floor, for as the water deepens offshore a greater volume of sediment is required to produce the same increase in surface area. It also depends on the rate of fluvial sediment yield, which has here increased following deforestation and intensified land use within the catchment, so that larger quantities of silt and clay have been derived from the intensely weathered volcanic and marly outcrops in the hinterland: the average suspended sediment load is 2.75 kilograms per cubic metre. Much of the silt has been deposited to form levees, while the finer sediment accumulates in bordering swamps.
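The figures quoted above can be reduced to average rates, and the dependence on sea-floor configuration can be illustrated with a simple volume estimate. In the sketch below the survey distances, areas, and dates are those given in the text, while the two offshore water depths are illustrative assumptions only.

```python
# Seaward growth of the Solo delta per survey interval (distances and areas as
# quoted in the text), followed by a crude illustration of the sea-floor effect:
# the same gain in area requires a larger volume of sediment in deeper water.

intervals = [
    ("1915-1936", 3600, 8.0, 1936 - 1915),   # metres of advance, km2 gained, years
    ("1936-1943",  800, 1.0, 1943 - 1936),
    ("1943-1970", 3100, 4.0, 1970 - 1943),
]

for label, advance_m, area_km2, years in intervals:
    print(f"{label}: {advance_m / years:.0f} m/yr advance, {area_km2 / years:.2f} km2/yr gained")

# Volume of sediment needed to convert 1 km2 of sea floor into land,
# approximated as area x mean water depth; the depths are hypothetical.
area_m2 = 1.0e6                       # one square kilometre
for depth_m in (2.0, 10.0):           # assumed shallow and deeper offshore profiles
    print(f"At {depth_m:.0f} m depth: about {area_m2 * depth_m:.1e} m3 of sediment per km2")
```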
The features of this delta include a relatively smooth eastern shoreline backed by parallel beach ridges and fronted by sand bars, the outlines determined by northeasterly wave action during the winter months. As this is also the dry season, there has been a tendency for distributaries and creeks formed on the eastern side of the Solo to be blocked off by wave deposition and silted up, the outcome being that the channels opening north-westwards have persisted to carry the bulk of the discharge and sediment yield from the Solo in the wet season, so that the delta has grown more rapidly in this direction. Mangroves (mainly Rhizophora spp.) are patchy and eroded on the eastern shore, but broad and spreading seawards between the distributary mouths on the more sheltered western shore. The tide range is small (less than 1 metre), but at low tide the mudflats exposed on the western shores are up to 200 metres wide. The rapid growth of such a long, narrow delta, protruding more than 20 kilometres seawards, is related partly to the shallowness of the adjacent sea and the consequent low-wave energy conditions and partly to the predominance of clay in the deltaic sediment, which is sufficiently cohesive to form persistent natural levees projecting out into the Java Sea.
FIG. 24 The evolution of the Solo delta between 1915 and 1970 (from Verstappen 1977)
Between 1915 and 1936 there was some lateral migration of the Solo River, marked by undercutting of banks on the outer curves of meanders, and a new outlet channel (3 in Fig. 24) was initiated, probably as the result of flood overflow and levee crevassing on the meander curve. A small delta formed here, but by 1970 it had been largely eroded, leaving only a minor protuberance on an otherwise smoothly prograded eastern coast. The effects of canal construction are well illustrated where a channel, cut between 1936 and 1943 from a distributary (2 in Fig. 24) to irrigate rice-fields, increased drainage into an adjacent creek (5 in Fig. 24), which then developed levees that grew out seawards. However, by 1970 this, too, had been cut back. A similar development farther south (4 in Fig. 24) converted a creek into a minor distributary of the Solo, with its own sub-delta lobe by 1943, but progradation of mangrove swamps (largely replaced by fishponds) has proceeded rapidly on this part of the western coastline, and by 1970 the distributary, although lengthened, protruded only slightly seawards. In the course of its growth, the Solo delta has incorporated the former island of Mangari, which consists of Pliocene limestone (Verstappen 1977).
East of the broad funnel-shaped entrance to Surabaya Strait the Bangkalan coast of north-west Madura Island shows several small mangrove-fringed deltas on a muddy shoreline. The north coast of the island of Madura is remarkably straight, with terraces that show intermittent emergence as the result of tectonic uplift. The hinterland is steep, with areas of Pliocene limestone, but the shore is generally beach-fringed, with some minor dunes to the east. The southern coast of the island is depositional, with beaches of grey volcanic sand that culminate in a recurved spit at Padelegan. Coastal waters are muddy, but outlying islands, such as Kambing, have fringing reefs and derived beaches of pale coralline sand. Tide range increases westwards, and the Baliga River enters the sea by way of a broad, mangrove-fringed tidal estuary, bordered by swampy terrain, with a narrow beach to the west.
Surabaya Strait shows tidal mudflats, scoured channels, and estuarine inlets indicative of relatively strong current action, and there has been extensive reclamation for fishponds along the mangrove-fringed coast to the south. In the fourteenth century ships could reach Mojokerto, now 50 kilometres inland on the Brantas delta, which continues to prograde around its distributary mouths. The southern shores of Madura Strait are beach-fringed, the hinterland rising steeply to the volcanoes of Bromo and Argapura. Beach sediments are grey near the mouth of rivers draining the volcanic hinterland, pale or cream near fringing coral reefs, and white in the Jangkar sector, where quartzose sands are found.
The east coast of Java is steep, with streams radiating from the Ijen volcano, but to the south a coastal plain develops and broadens. This consists of low beach ridges built mainly of volcanic materials derived from the Ringgit upland. The Sampean delta is fan-shaped, accreting on its western shores as erosion cuts back the eastern margin. The Blambangan Peninsula is of Miocene limestone, and has extensive fringing reefs backed by coralline beaches, with evidence of longshore drifting on the northern side, into the Straits of Bali.
The south coast of Java is dominated by wave action from the Indonesian Ocean, and receives a relatively gentle south-westerly swell of distant origin and stronger locally generated south-easterly waves that move shore sediments and deflect river outlets westwards, especially in the dry winter season. It is quite different from the north coast of Java, being dominated by steep and cliffed sectors and long, sandy beaches rather than protruding deltas. There is very little information on the extent of shoreline changes in historical times, and we cannot accept the statement of Tjia et al. (1968, p. 26) that abrasion rates along the south coast must have been much higher than those on the deltaic northern shoreline because of the more powerful wave action from the Indonesian Ocean: changes on this rocky and sandy coast will have been relatively slow.
The Bay of Grajagan is backed by a sandy barrier enclosing a river-fed estuarine lagoon system with an outlet to the sea at the western end, alongside the old volcanic promontory of Capil. Farther west the coast becomes indented, with cliffed headlands of Miocene sedimentary rock and irregular embayments, some with beaches and beach ridges around river mouths. Nusa Barung is a large island of Miocene limestone with a karstic topography and a cliffed and isletted southern coast; its outlines are related to joint patterns and southward tilting (Tjia 1962). It modifies oceanic wave patterns on the sandy shores of the broad embayment to the north in such a way as to generate longshore drifting from west to east, so that the Bondoyudo River has been deflected several kilometres eastwards to an outlet behind a barrier spit leading to a cuspate foreland with multiple beach ridges (Fig. 25).
FIG. 25 Longshore drifting and the evolution of a cuspate foreland in the lee of Nusa Barung, a limestone island off the south coast of Java
The coastal plain then narrows westwards and gives place to a steep indented coast on Miocene sedimentary formations, including the limestones of Kendeng, with bolder promontories of andesite near Tasikmadu. At Puger and Meleman there are beach-ridge systems surmounted by dunes up to 15 metres high, with a thick vegetation cover, in sequence parallel to the shoreline. These interrupt the predominantly karstic limestone coast (Plates 4, 5, and 6), with cliffed sectors and some fringing reefs, that continues westwards to Parangtritis. Near Baron the limestone cliffs are fronted by shore platforms exposed at low tide, and flat-floored notches, cut in the base of cliffs and stacks, testify to the importance of solution processes in the shaping of these features (Plate 4). Locally, beaches of calcareous sand and gravel occupy coves, and where these occur an abrasion ramp may be seen at the rear of the shore platform. At Baron a river issues from the base of a cliff and meanders across a beach of black sand that has evidently been washed into the valley-mouth inlet by ocean waves (Plate 5), the sand having come from sea-floor deposits supplied by other rivers draining the volcanic hinterland.
At Parangtritis the cliffs end, and the broad depositional plain of central south Java begins. The Opak and Progo rivers, draining the southern slopes of the Merapi volcano, are heavily laden with grey sands and gravel derived from pyroclastic materials. During floods these are carried into the sea to be reworked by wave action and built into beaches with a westward drift (Plates 7 and 8). The coastal plain has prograded, with the formation of several beach ridges separated by swampy swales. No measurements of historical changes are available, but our reconnaissance in November 1979 found evidence of sequences of localized progradation at the river mouths followed by westward distribution of part of the prograded material. It appears that the alignment of the shore is being maintained, or even advanced seawards, as the result of successive increments of fluvial sand supply. Finer sediment, silt and clay, is deposited in bordering marshes and swales, or carried into the sea and dispersed by strong wave action.
On some sectors, especially near Parangtritis, the beach is backed by dune topography, typically in the form of ridges parallel to the shoreline and bearing a sparse scrub cover (Plate 9). At Parangtritis there are mobile dunes up to 30 metres high, driven inland by the south-easterly winds (Plate 10). The presence of mobile dunes, unusual in this humid tropical environment, may be due to a reduction of their former vegetation cover by sheep and goat grazing, and by the harvesting of firewood (Verstappen 1957).
Whereas the present beach and dune systems consist of incoherent grey sand, readily mobilized by wind action in unvegetated areas, the older beach-ridge systems farther inland are of more coherent silty sand which can be used for dry-land cultivation. The silt fraction may be derived from airborne (e.g., volcanic dust) or flood-borne accessions of fine sediment, or it may be the outcome of in situ weathering of some of the minerals in the originally incoherent sand deposits.
At Karangtawang the depositional lowland is interrupted by a high rocky promontory of andesite and limestone, the Karangboto Peninsula. There are extensive sand shoals off the estuary of the Centang River, which washes the margins of the rocky upland, and there appears to have been rapid progradation of the beach to the east, and also in the bay to the west, where sand has built up in front of a former sea cave which used to be accessible only by means of ropes and ladders when men descended the cliff to collect birds' nests. Rapid accretion may have been stimulated here by the catastrophic discharge of water and sediment that followed the collapse of the Sempor Dam in the hinterland in 1966.
The sandy and swampy coastal plain resumes to the west of the Karangboto Peninsula, and extends past the mouth of the Serayu River. In this sector it has been disturbed by the extraction of magnetite and titanium oxide sands; in places, the beach ridges have been changed into irregular drifting dunes, while dredged areas persist as shallow lagoons.
On either side of the mouth of the Serayu River the coastal plain has prograded by the addition of successive sandy beach ridges separated by marshy swales. The sediments are of fluvial origin, reworked and emplaced by wave action, and progradation has enclosed a former island as a sandstone hill among the beach ridges. According to Zuidam et al. (1977) the coastal plain shows a landward slope at a number of places, where the streamlets flow landwards instead of seawards, and this is presumed to be due to very recent differential tectonic movements.
The geomorphological contrast between the irregular deltaic coast of northern Java and the smooth outlines of depositional sectors on the south Java coast is largely due to contrasts in wave-energy regimes and sea-floor topography. The sediment loads of rivers flowing northwards and southwards from the mountainous watershed are similar, but the finer silt and clay, deposited to form deltas in the low-energy environments of the north coast, are dispersed by high wave energy on the south coast. The coarser sand fraction seen in beach ridges associated with the north coast deltas is thus concentrated in more substantial beach and dune formations on the south coast. The contrast is emphasized by the shallowness of coastal waters off the north coast, which reduces wave energy, as opposed to the more steeply shelving sea floor off the south coast, which allows larger waves to reach the shoreline. Nevertheless, silt and clay carried in floodwaters settle in the swales between successively built beach-ridge systems along the southern coast, and in such embayments as Segara Anakan, and, as we have noted, they may also have been added to the sandy deposits of older beach ridges inland.
The nature and rate of sediment yield from rivers draining to the south coast vary with the size and steepness of the catchment, with geological features such as catchment lithology, and with vegetation cover. In the Serayu River basin, deforestation has accelerated sediment yield and increased the incidence of flooding in recent years. Meijerink (1977) found that the annual sediment yield from catchments dominated by sedimentary rocks was ten times that of catchments with similar vegetation and land use on volcanic formations, the contrast being reflected in the nature and scale of depositional features developed at the river mouth.
West of the Serayu River the sandy shoreline, backed by beach ridges, curves southwards to Cilacap, in the lee of Nusakambangan, a high ridge of limestone and conglomerate, with precipitous cliffs along its southern coastline. Extensive mangrove swamps threaded by channels and tidal creeks border the shallow estuarine embayment of Segara Anakan (Fig. 26), which receives large quantities of silty sediment from the Citanduy River. At the eastern end, strong tidal currents maintain a navigable inlet for the port of Cilacap, which stands on the sandy barrier behind a shoaly bay. A meandering channel persists westwards, leading through the mangroves to Segara Anakan, which has a larger outlet through a steep-sided strait to Penandjung Bay. Changes in the configuration of Segara Anakan between 1900 and 1964 were traced by Hadisumarno (1964), who found evidence for rapid advance of mangroves into the accreting intertidal zone. He reported surveys made in 1924, when the average depth (ignoring deeper tidal channels) was 0.5 to 0.6 metres, and 1961, when it had shallowed to 0.1 to 0.2 metres, the tidal channels having deepened. Mangrove advance is exceptionally rapid here, and much of the shallow lagoon is expected to disappear as mangroves encroach further in the next two decades. The features and dynamics of Segara Anakan are being studied in Phase II of the UN University-LIPI Indonesian coastal resources management project in 1980-81.
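Hadisumarno's soundings imply an average shallowing rate of roughly a centimetre per year, consistent with the expectation that much of the lagoon will be lost within a couple of decades. The sketch below uses only the depths and survey dates quoted above; the linear extrapolation at the end is a deliberately naive assumption.

```python
# Mean shallowing rate of Segara Anakan between the 1924 and 1961 surveys reported
# by Hadisumarno (1964), using the mid-points of the quoted depth ranges
# (and ignoring the deepened tidal channels).

depth_1924 = (0.5 + 0.6) / 2     # metres, mid-point of the 1924 range
depth_1961 = (0.1 + 0.2) / 2     # metres, mid-point of the 1961 range
years = 1961 - 1924

rate = (depth_1924 - depth_1961) / years          # metres of shallowing per year
print(f"Mean shallowing rate: {rate * 100:.1f} cm/yr")

# Naive linear extrapolation of the time needed to fill the remaining water column,
# assuming the 1924-1961 rate continued unchanged after 1961.
print(f"Remaining depth filled after about {depth_1961 / rate:.0f} more years")
```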
FIG. 26 The rapidly silting estuarine embayment of Segara Anakan, shrinking in area as a result of mangrove encroachment, still has a tidal channel, Kali Kembangkuning, linking it to an eastern outlet at Cilacap
West of Segara Anakan the beach ridge plain curves out to the tombolo of Pangandaran, where deposition has tied an island of Miocene limestone (Panenjoan) to the Java mainland (Fig. 27), and continues on to Cijulang, where the hinterland again becomes hilly. Beaches line the shore, and many of the rivers have deflected and sand-barred mouths. At Genteng a beach-ridge plain develops, curving out to a tombolo that attaches a former coralline island, and beach ridges also thread the depositional lowlands around the mouths of the Ciletuh and Cimandiri rivers flowing into Pelabuhanratu Bay. The beach ridges indicate past progradation, but no information is available on historical trends of shoreline change in this region. West of Pelabuhanratu the coast steepens, but is still fringed by surf beaches, some sectors widening into depositional coastal plains with beach and dune ridges and swampy swales, including the isthmus which ties Ujong Kulon as a peninsula culminating in Java Head.
The Indonesian coasts of Kalimantan have received very little attention from geomorphologists, and there is no information on rates of shoreline change in historical times. The western and southern coasts are extensively swampy, with mangroves along the fringes of estuaries, inlets and sheltered embayments. The hilly hinterland approaches the west coast north of Pontianak, where there are broad tidal inlets, and to the south depositional progradation has attached a number of former volcanic islands as headlands. The Pawan and the Kapuas rivers have both brought down sufficient sediment to build substantial deltas (Tjia 1963) but in general the shoreline consists of narrow intermittent sandy beaches backed by swamps, with cuspate salients in the lee of islands such as Gelam, or reefs as at Tanjung Sambar. South of Kendawagan a ridge of Triassic rocks runs out to form the steep-sided Betujurung promontory and the hills on Bawat and Gelam islands.
The south coast is similar, with a number of cuspate and lobate salients, most of which are swampy protrusions rather than deltas. Sand of fluvial origin has drifted along the shoreline east and west from the mouth of the Siamok, to form the straight spit of Tanjung Bandaran to the east, partly enclosing mangrove-fringed Sampit Bay, and the recurved spit of Tanjung Puting to the west. Near Banjarmasin, ridges of Cretaceous and Mio-Pliocene rock run through to form the promontory of Selatan, where the swampy shores give place to the more hilly coastal country of eastern Kalimantan.
The east coast has many inlets and swamp-fringed embayments, the chief contrast being the large Mahakam delta, formed downstream from Samarinda (Fig. 28). Coarse sandy sediment derived mainly from ridges and valleys in the Samarinda area is prominent in the delta, which has numerous distributaries branching among the swampy islands (Magnier et al. 1975, Allen et al. 1976). Other rivers draining to the east coast open into funnel-shaped tidal estuaries, as at Balikpapan and Sangkulirang, and Berau, Kajau, and Sesayap in the north-east (Tjia 1963); as has been noted, tide ranges are higher on the east coast of Kalimantan than on the south and west coasts. At Balikpapan, shoreline erosion has resulted from the quarrying of a fringing coral reef, but the rate and extent of this erosion have not been documented.
FIG. 27 The tombolo at Pangandaran, southern Java, a depositional isthmus attaching Panenjoan, formerly an island, to the mainland
FIG. 28 The large Mahakam delta, built by fluvial and marine deposition on the east coast of Kalimantan
The coasts of Sulawesi have also received little attention from geomorphologists, but it is known that this island has been tectonically active. In contrast with the low-lying swampy shores of Kalimantan there are long sectors of steep coast, often with terraced features indicating tectonic uplift or tilting, especially where coral reefs have been raised to various levels up to 600 metres above present sea level, some of them transversely warped and faulted. Rivers are short and steep, with many waterfalls and incised gorges, and there are minor depositional plains around the river mouths. Fringing and nearshore coral reefs are extensive, and along the shore there are sectors of beach sand, with spits and cuspate forelands, especially in the lee of offshore islands, as at Bentenan. It is likely that progradation is taking place where rivers drain into the heads of inlets and embayments, especially on the east coast, where mangrove fringes are extensive, but no details are available. Volcanic activity has modified coastal features locally, for example on Menado-tua, the active volcano off Menado in the far north of the island, and erosion has been reported at Bahu, but again there are no detailed studies. South and south-east of Sulawesi there are many uplifted reef patches and atolls, as well as islands fringed by raised reef terraces: Binongko, for example, has a stairway of 14 reef terraces, the highest 200 metres above sea level (Kuenen 1933), and Muna is a westward-tilted island with reef terraces up to 445 metres above sea level (Verstappen 1960).
Bali and Nusatenggara
The northwestern coast of Bali consists of Pliocene limestone terrain, the shores having yellow beach sands and some fringing reefs. A lowland behind Gilimanuk becomes a narrowing coastal plain along the northern shore, giving place to a steeper coast on volcanic rocks near Singaraja. Out to the north, the Kangean islands include uplifted reefs and emerged atolls.
In the eastern part of Bali the coast is influenced by the active volcanoes, especially Agung, which generate lava and ash deposits that move downslope and provide a source of sediment that is washed into the sea by rivers, particularly during the wet season (December to April). These sediments are then distributed by wave action to be incorporated in grey beaches. Sanur beach is a mixture of fluvially supplied grey volcanic sand and coralline sand derived from the fringing reef (Tsuchiya 1975, 1978). At Sengkidu the destruction of a fringing reef by collecting and quarrying of coral has led to erosion of the beach to the rear, so that ruins of a temple now stand about 100 metres offshore, indicating that there has been shoreline erosion of at least this amount in the past few decades following the loss of the protective reef (Praseno and Sukarno 1977). Similar erosion is in progress on Kuta and Sanur beaches.
South of Sanur, in the lee of the broad sandy isthmus that links mainland Bali to the Bukit Peninsula (of Miocene limestone) to the south, spits partly enclose a broad tidal embayment with patches of mangrove on extensive mudflats. This peninsula has a cliffed coast with caves and notches, stacks rising from basal rock ledges, and extensive fringing reefs; beaches occupy coves and, south of Benoa, beach deposition has resulted in the attachment of a small island to the coast as a tombolo (Plate 11).
West of the isthmus, ocean waves determine the curvature of beach outlines, and there has been erosion in recent decades on either side of the protruding airport runway at Denpasar. The beach here, in the lee of a fringing coral reef, is of pale coralline sand, backed by low dunes. It gives way northwards to grey sand of volcanic origin, with beaches interrupted by low rocky promontories and shore benches. Longshore drifting to the north-west is indicated by spits that deflect stream mouths in that direction (Plate 12), and as wave energy decreases, in the lee of the Semenanjung promontory of south-eastern Java, the beaches become narrower and gentler in transverse gradient.
At the north-western end of Bali the Gilimanuk spit shows several stages of growth from the coast at Cejik, to the south, interspersed with episodes of truncation. Verstappen (1975b) suggested that growth occurred during phases of dominance of westerly wave action and truncation when south-easterly waves were prevalent, the variation being due to wind regimes associated with long-term migrations of the ITC, but stages in the evolution of this spit have not yet been dated.
Many of the features found on Bali are also found on the similar Lesser Sunda islands to the east, but few details are available. Cliffs of limestone and volcanic rock extend along the southern coasts of Lombok, Sumbawa, and Sumba, but elsewhere the coasts are typically steep rather than cliffed and often have fringing coral reefs. There are many volcanoes, some of them active: Inerie and Iya in southern Flores and Lewotori to the east have all erupted in recent times and deposited lava and ash on the coast, as has Gamkonora on Halmahera. Rivers have only small catchments, and depositional lowlands are confined to sheltered embayments, mainly on the northern shores. Terraces and emerged reefs indicative of uplift and tilting are frequently encountered on these eastern islands (Davis 1928). On Sumbawa uplifted coral reefs are up to 700 metres above sea level, attached to the dissected slopes of old volcanoes, and on Timor reef terraces, much dissected by stream incision, attain 1,200 metres above sea level, the higher ones encircling mountain peaks that were once islands with fringing reefs or almost-atolls with reefs enclosing lagoons that had a central island.
Chappell and Veeh (1978) have examined raised coral terraces on the north coast of Timor and on the south coast of the adjacent volcanic island of Atauro, where they extend more than 600 metres above sea level. Dating by Th-230/U-234 established a sequence of shoreline features and fringing reefs developed during Quaternary oscillations of sea level on steadily rising land margins. On Atauro, where the stratigraphy is very well displayed in gorge sections cut through the terrace stairway, the shoreline of 120,000 years BP is 63 metres above present sea level. Correlation with other such studies, notably in Barbados, New Guinea, and Hawaii, suggests that the world ocean level was then only 5 to 8 metres above the present, which indicates a mean uplift rate of about 0.5 metres per 1,000 years on Atauro. At Manatuto, Baucau, and Lautem in north-east Timor, dating of similar terraces indicates a similar uplift rate, but at Hau, just east of Dili, the shoreline of 120,000 years BP is only 10 metres above sea level, indicating a much slower rate of land uplift, only 2 to 4 centimetres per 1,000 years.
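The uplift rates quoted above follow from simple arithmetic: the present elevation of the dated shoreline, minus the ocean level at the time it formed, divided by the elapsed time. A minimal sketch of that calculation, using only the figures given in the text (the function name is ours):

```python
def uplift_rate_m_per_kyr(elevation_m, palaeo_sea_level_m, age_years):
    """Mean uplift rate in metres per 1,000 years for a dated palaeo-shoreline."""
    return (elevation_m - palaeo_sea_level_m) / (age_years / 1000.0)

# Atauro: the 120,000-year shoreline now lies at 63 m; the ocean then stood 5-8 m above present.
print(uplift_rate_m_per_kyr(63, 8, 120_000))        # ~0.46 m per 1,000 years
print(uplift_rate_m_per_kyr(63, 5, 120_000))        # ~0.48 m per 1,000 years

# Hau, east of Dili: the same-age shoreline is at only 10 m.
print(uplift_rate_m_per_kyr(10, 8, 120_000) * 100)  # ~1.7 cm per 1,000 years
print(uplift_rate_m_per_kyr(10, 5, 120_000) * 100)  # ~4.2 cm per 1,000 years
```

Both results are consistent with the figures quoted above: roughly 0.5 metres per 1,000 years on Atauro and 2 to 4 centimetres per 1,000 years at Hau.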
Another emerged almost-atoll is seen on Roti, southwest of Timor, where the enclosing reefs have been raised up to 200 metres, the highest encircling interior hills of strongly folded sedimentary rocks. Kissar, north-east of Timor, has a stairway of five reef terraces, the highest at 150 metres above sea level. Leti, to the east of Timor, has been uplifted in two stages to form reef terraces 10 metres and 130 metres above sea level, and similar features are seen at 10 to 20 metres and 200 to 240 metres on the nearby island of Moa. Yamdena is an island bordered by high cliffs of coral limestone cut into the outer margins of a reef that has been raised 30 metres out of the sea.
North of the Banda Sea, Seram has a coral reef 100 metres above sea level, and Ambon, which consists of two islands linked by a sandy isthmus, has reefs at heights of up to 530 metres. Gorong, south-east of Seram, is an atoll uplifted in several stages to 300 metres, and now encircled by a lagoon and a modern atoll reef. Obi and Halmahera also have upraised reef terraces up to 300 metres above sea level. In the Aru Islands, Verstappen (1960) described cliffs fronted by shore platforms that had been submerged as the result of tectonic subsidence, but uplifted atolls also occur in this region.
A great deal of research is required to establish the nature of coastal features in eastern Indonesia. Some of the reconnaissance accounts are misleading: cliffs have been taken as evidence of recent uplift, and mangrove-fringed embayments as indications of recent subsidence; and it is possible that too much emphasis has been given to catastrophic events, such as earthquakes, volcanic eruptions, and tsunami, in the interpretation of coastal features.
Tectonic movements have undoubtedly influenced coastal changes in parts of Irian Jaya, both on the steep sectors, mainly in the north, and in the extensive swampy lowlands to the south. Verstappen (1964a) compared a 1903 map of Frederik Hendrik Island (Yos Sudarso), near the mouth of the Digul River on the south coast, with maps based on air photographs taken in 1945, and found evidence of substantial progradation, which he attributed to recent uplift in a zone passing through Cape Valsch (Fig. 29). Frederik Hendrik Island is mainly low-lying, with extensive reed-swamps, and its bordering channels are scoured by strong tidal currents, but the Digul River opens into a broad estuary, and under present conditions it relinquishes most of its sediment load upstream as it traverses extensive swamps and recently subsided areas between Gantentiri and Yondom. In consequence it is not now building a delta into the Arafura Sea.
On the north coast of Irian Jaya the Mamberamo has built a substantial delta, but in recent decades this has shown little growth; indeed, the western shores show creek enlargement and landward migration of mangroves, while the eastern flank is fringed by partly submerged beach ridges with dead trees, all indicative of subsidence (probably due to compaction) and diminished sediment yield from the river. Verstappen (1964a) related this diminished yield to an intercepting zone of tectonic subsidence that runs across the southern part of the delta, marked by a chain of lakes and swamps, including an anomalous mangrove area (Fig. 30). The largest of the lakes, Rombabai Lake, is adjacent to the levees of the Mamberamo, and at one point the subsided levee has been breached during floods and a small marginal delta has grown out into the lake.
FIG. 29 Historical progradation on the island of Yos Sudarso, south-west coast of Irian Jaya. Based on Verstappen (1964a)
FIG. 30 The Mamberamo delta on the north coast of Irian Jaya, showing the transverse zone of subsidence containing Rombabai Lake and freshwater swamps invaded by mangroves from the west (based on Verstappen 1964a)
The islands west of Irian Jaya show evidence of tectonic movements, Waigeo being bordered by notched cliffs of recently uplifted reef limestone, while Kafiau is essentially an upraised almost-atoll with hills of coral limestone ringing an interior upland.
In September 1979 a major earthquake (magnitude 8 on the Richter scale) disturbed the islands of Yapen and Biak, north of Irian Jaya, initiating massive landslips on steep coastal slopes, especially near Ansus on the south coast of Yapen. According to the United States Geological Survey it was the strongest earthquake in Indonesia since the August 1977 tremor on Sumbawa, which had similar effects. Tsunami generated by these and other earthquakes were transmitted through eastern Indonesia, but there have been no detailed studies of their geomorphological and ecological consequences.
This review of Indonesian coastal features has indicated the variety of forms that exist within this archipelago, the best-documented sectors being the north-eastern coast of Sumatra and the north coast of Java, both of which show evidence of substantial changes within historical times. It is hoped that geomorphological studies will soon provide much more information on the other sectors, which at this stage are poorly documented and little understood.
Just over a decade ago an accidentally introduced beetle, the Emerald Ash Borer, arrived in the Eastern United States, leaving residents and natural resource managers in dismay as they watched the insect kill millions of ash trees.
The news of the beetle's arrival in Boulder filled the state's newspapers in September 2013 and left city and state managers with a similar feeling of despair. Since then, Boulder has established a county-wide quarantine on the movement of ash firewood and has surveyed the boundaries of the infested area. The beetle is likely to kill tens of thousands of trees in Colorado and could cost the state millions of dollars in prevention and mitigation.
A study in the Midwest and Eastern United States estimated that the beetle will kill over 17 million ash trees between 2009 and 2019 and projected that eradication costs will exceed $10 billion. Further, the current management options such as insecticide treatment and preemptive tree removal have failed in other states. As Dr. Whitney Crenshaw of Colorado State University points out, "once the beetle is established there is no chance of eradication."
Given this, I propose that we accept that the beetle is here to stay and that we avoid the use of insecticides and preemptive tree removal, which degrade our environment and have negative economic consequences for our state.
Let's be progressive and follow our own path while learning from other states. For starters, we should not apply countless pounds of insecticide to our communities. Insecticides used to mitigate the beetle can impact our groundwater supply, cause unintentional mortality among non-target insects, and, when applied to the soil, can reduce the ability of other nearby vegetation to photosynthesize, including plants that native pollinators depend upon.
Additionally, not only does preemptive tree removal fail to work, it may also disrupt current bird populations. The Front Range is home to many bird species that rely on snags (standing dead trees with hollows) for critical nesting and breeding habitat. Leaving low-risk dead ash trees standing in wooded areas could benefit birds at no cost to us. The increase in beetles could also provide additional food for birds.
Finally, instead of focusing solely on the negative economic impacts of the invasion, let us consider the economic upsides. If a tree dies, it isn't going to "waste." The tree can often be sold to lumber companies to be processed, turned into art or furniture, or used as firewood, all of which provide positive economic benefits. The Colorado Department of Agriculture estimates that only 15 percent of trees in urban forests are ash trees; this is a relatively small number and doesn't affect the state's revenue-generating timber stands. The invasion of the beetle isn't the end of the road for Colorado's forests, and it is important that we stay focused on the "big picture" and avoid attempts at mitigation that are both doomed to fail and could have detrimental effects on the health of our communities.
Brent Pease attends Colorado State University, majoring in Wildlife Biology. He lives in Fort Collins.
Artificial Intelligence (AI) is both the intelligence of machines and the branch of computer science which aims to create it. Skynet is a (fictional) example of AI, while Watson is a real-world example of AI.
Major AI textbooks define artificial intelligence as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of some AI research.
AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, ontology, operations research, economics, control theory, probability, optimization and logic. AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.
Other names for the field have been proposed, such as computational intelligence, synthetic intelligence, intelligent systems, or computational rationality. These alternative names are sometimes used to set oneself apart from the part of AI dealing with symbols (considered outdated by many; see GOFAI), which is often associated with the term “AI” itself.
In artificial intelligence research, GOFAI ("Good Old-Fashioned Artificial Intelligence") is an approach to achieving artificial intelligence. In robotics research, the term is extended to GOFAIR ("Good Old-Fashioned Artificial Intelligence and Robotics"). The approach is based on the assumption that many aspects of intelligence can be achieved by the manipulation of symbols, an assumption defined as the "physical symbol system hypothesis" by Allen Newell and Herbert Simon in the mid-1960s. The term "GOFAI" was coined by John Haugeland in his 1985 book Artificial Intelligence: The Very Idea, which explored the philosophical implications of artificial intelligence research.
GOFAI was the dominant paradigm of AI research from the mid-1950s until the late 1980s. After that time, newer sub-symbolic approaches to AI became popular. Now, both approaches are in common use, often applied to different problems.
Opponents of the symbolic approach include roboticists such as Rodney Brooks, who aims to produce autonomous robots without symbolic representation (or with only minimal representation), and computational intelligence researchers, who apply techniques such as neural networks and optimization to solve problems in machine learning and control engineering.
While Skynet itself is an example of an AI system, its tools (HKs, Terminators, etc.) are also examples. Others include Barbara Chamberlain's ARTIE traffic control system, Zeira Corporation's John Henry (based on Andy Goode's The Turk), and Xander Akagi's Emma program.
I was watching a movie with my children when the subject of tornadoes came up. Answering their questions, I realized I was a bit rusty on my knowledge of everything concerning tornadoes, so we decided to do a little research to help us better prepare in the case of a tornado.
What exactly is a tornado? A tornado is made of wind that rotates rapidly in a vortex, or column. This rotation can uproot trees, sweep up houses, and cause a great deal of destruction. Similar to tornadoes are straight-line winds, which have the fast speeds of tornadoes without the rotation. They can be just as deadly.
Most tornadoes are very short-lived and cause only weak damage with winds under 110 mph. Strong tornadoes last more than 20 minutes with wind speeds up to 200 mph. Only a small number of tornadoes are considered violent tornadoes, lasting up to an hour with wind speeds greater than 300 mph.
What is a Tornado Watch? If a tornado watch is called in your area, it means that conditions may cause tornadoes to form but no tornadoes have been spotted. Keep an eye on the weather and listen for any further reports.
What is a Tornado Warning? A tornado warning means that a tornado has been spotted. You should take cover immediately.
Tornadoes can strike quickly, giving very little warning. Being prepared and going over emergency procedures can help children feel more secure.
Preparing for a Tornado
Make a plan:
- Speak with all family members and plan where you will go in the event of a tornado. You should choose the safest location in your home. This is generally the lower level of your home, away from windows and doors. Choose an interior room or wall and take shelter under heavy furniture when possible. If you are in your vehicle, leave the vehicle and lie flat in a low area such as a ditch. In the event of a tornado, do not open windows or doors, as this can allow debris and wind to sweep into the home.
- Discuss ways that family members can prepare and reach the emergency location quickly.
Put together an emergency kit. This should include:
- Non-perishable food. Pack enough non-perishable food for everyone in your household for three days. Stay away from salty foods and anything which increases thirst. If you pack canned goods, don’t forget to include a can opener.
- Water. Have enough drinking water for everyone in your household for 3 days. You may want to include extra for sanitation. Consider one gallon per person per day as your starting point.
- Light. In the event of a power outage, you may be without light. Pack candles and/or a flashlight or lantern. Remember to include a way to light the candles and extra batteries for the flashlight.
- Weather radio. Pick up an inexpensive battery-operated or hand-crank weather radio to stay updated on emergency weather conditions.
- First aid kit. Have a first aid kit easily accessible in the event that someone is hurt.
- Help signal. Include some way to signal others in the event that your home is destroyed. A signal whistle, horn, or flare gun may help alert others to your location.
- Disposable wipes. Disposable wipes can be used for sanitation if you are confined to your location while waiting for help.
- Duct tape. You never know when you may need duct tape.
- Emergency information. Check out this handy information page that you can print out.
- Pet supplies. If you have pets, pack some non-perishable pet food and possibly an emergency leash. Poo bags may also come in handy.
- Trash bags. These can be used as makeshift toilets.
- Entertainment. Consider packing some books, paper, crayons, or other items to help take children’s minds off of the storm.
- Blankets or pillows to keep warm and sleep.
- Any medications required by family members.
- Extra eye glasses or other health care items.
- Feminine hygiene products.
- Extra clothes
- Fire extinguisher
- Important documents
- Road maps
Practice for a tornado:
Most cities where tornadoes are likely to occur test their sirens monthly. If your children are anything like mine, they will have asked about the sirens. While these tests are to make certain that the sirens are working correctly, this is a fantastic opportunity to practice in case of a real tornado. Explain to your children that the sirens are just being tested, but that you want to practice what to do in case of a real tornado. Go through your tornado drill. If there is ever a time when your family needs to take cover for real, everyone will already know what to do.
Check out Tornadoes AHEAD: Owlie Skywarn’s Weather Book. This is a free printable coloring book with facts and other information regarding tornadoes.
Looking for books to read with your children? Try requesting some of these at your local library:
The Cat in the Hat and various other Seuss characters are travelling in a hot air balloon where they encounter many types of weather.
Make storms a little less scary by reading about what causes them.
DK provides a valuable reference for children with their book on weather. Beautiful color photography combined with clear information helps demystify weather for families.
For the slightly older child, check out DK Eyewitness Books: Weather. This book covers the same topics in more depth.
D. Formation of Galaxies
How galaxies formed after the Big Bang is a question still being studied by astronomers. Astronomers hypothesize that within the first few hundred thousand years after the Big Bang, there were clumps of matter scattered throughout the universe. Some of these clumps were dispersed by their internal motions, while others grew by attracting other nearby matter. These surviving clumps became the beginnings of the galaxies we see today. These first galaxies appeared 12.5 billion years ago.
When a clump becomes massive enough, it starts to collapse under its own gravity. At this point, the clump becomes a protogalaxy. Astronomers hypothesize that protogalaxies consist of both dark matter and normal hydrogen gas. Due to collisions within the gas, the hydrogen loses energy and falls to the central region of the protogalaxy. Because of the collisions of the gas, protogalaxies should emit infrared light. The dark matter remains as a halo surrounding the protogalaxy.
Astronomers think that the difference in appearance between elliptical and spiral galaxies is related to how quickly stars were made. Stars form when gas clouds in the protogalaxy collide. If the stars are formed over a long period of time, while some stars are forming, the remaining gas between the stars continues to collapse. Due to the overall motion of matter in the protogalaxy, this gas settles into a disk. Further variations in the density of the gas result in the establishment of "arms" in the disk. The result is a spiral galaxy. If, on the other hand, stars are made all at once, then the stars remain in the initial spherical distribution that the gas had in the protogalaxy. These form an elliptical galaxy.
Astronomers also think that collisions between galaxies play a role in establishing the different types of galaxies. When two galaxies come close to each other, they may merge, throw out matter and stars from one galaxy, and/or induce new star formation. Astronomers now think that many ellipticals result from the collision of galaxies. We now know that giant ellipticals found in the center of galaxy clusters are due to multiple galaxy collisions.
Recommended Summary Activities: The Universe as Scientists Know It and Seeing as Far as You Can See
Carcinoid tumors are rare malignancies first described over 100 years ago. The term “carcinoid” was replaced with “well-differentiated endocrine neoplasm” by the World Health Organization in 2000, a term more appropriately descriptive of these typically slow-growing neuroendocrine tumors (NETs). The majority of NETs occur sporadically and are nonhereditary, with the ability to secrete various vasoactive substances.1 Despite their sporadic occurrence, carcinoid tumors may be associated with hereditary syndromes such as multiple endocrine neoplasia (MEN); mutation of the MEN1 gene is the most common somatic mutation in sporadic tumors like carcinoids. For example, some of the foregut tumors may show a loss of heterozygosity at 11q13. Poorly differentiated NETs may show loss of heterozygosity for p53 or the adenomatous polyposis coli tumor suppressor gene.2
Carcinoid tumors are rarely found in adults, with an estimated annual incidence of 5 per 100,000, which reflects a significant increase over the past 15 years.3,4 The incidence in children and adolescents is lower, at 2.8 NETs per million under the age of 30 years.5 Despite their low numbers, carcinoid tumors represent the most frequent tumor of the gastrointestinal tract in children6 and the most frequently diagnosed primary pulmonary tumor in children and adolescents.5,7,8 These tumors are often classified based on their embryonic gut origin. The foregut is the precursor site for bronchial and gastric tumors; the midgut is the precursor site for small intestinal and appendiceal tumors; and the hindgut is the precursor site for rectal tumors. Nearly 68% of carcinoid tumors develop in the gastrointestinal tract, with the appendix being the most common location, and another 25% occurring in the bronchopulmonary system.9 Other sites of occurrence include the thymus, gonads, breast, and other areas within the gastrointestinal tract.5,10
There have been several series investigating carcinoid and the other NETs in pediatric patients. Spunt et al11 analyzed pediatric patients at St Jude over a 22-year period and found 8 patients treated there for carcinoid tumors. These lesions were more likely to be found in female patients (75%) and whites (87.5%). The median age of presentation was 12.7 years. In a case series of childhood carcinoid tumors in Brazil over an 11-year period, 9 patients were identified; 66.6% were female, and the mean age was 12.2 years. Locations for the tumors were the appendix (n=8) and bronchus (n=1).12 Similarly, an Austrian review of appendiceal carcinoid tumors in children found 36 patients diagnosed with these tumors over a 30-year period. Again, the tumors were more likely to be found in female patients (69.4%), and the median age at diagnosis was 12.3 years.13 Dall’Igna et al14 reviewed appendiceal carcinoid tumors in Italy over a period of 15 years. Fourteen patients were diagnosed with appendiceal carcinoid tumors during this time. The median age was 13.5 years, and 64.3% were female. Over a period of 50 years, 22 patients under 20 years of age presented with appendiceal carcinoid tumors at MD Anderson. The mean age at presentation was 14.6 years.6 A recent review of SEER data for the period 1975 to 2006 confirms these findings, with 28% of all NETs in the lung, 18% in the breast, and 18% in the appendix. The SEER data confirm the female:male predominance of 2:1.5 Analysis of SEER data demonstrated that breast, ovarian, and cervical NETs account for the increased female incidence.5
Investigating bronchial carcinoid tumors in pediatric patients, Wang et al8 reviewed a 59-year period during which time 17 patients between 10 and 21 years of age were diagnosed with bronchial carcinoid tumors. The average age at diagnosis was 17 years, with a median duration of symptoms of 8.5 months.
Broaddus et al15 reviewed cases of 13 NETs in extra-appendiceal sites during childhood and adolescence: 8 tumors were carcinoid tumors and 5 were classified as neuroendocrine carcinoma. Of the carcinoid tumor patients, 62.5% were female, whereas all of the neuroendocrine carcinoma patients were male. The mean age at presentation was 12.7 years.
Symptoms associated with carcinoid tumors are related to the location, size, and extent of spread of the tumor. In pediatric patients, appendiceal carcinoids are often found incidentally, but occasionally they are associated with lower quadrant abdominal pain and other symptoms of acute appendicitis.13,14,16,17 In a series of 23 patients with appendiceal carcinoid tumors, 18 patients presented with symptoms of an acute abdomen.16 In another series of 36 patients, acute right lower quadrant pain was present in 75%, and chronic right lower quadrant pain was present in the other 25%. Just over one third of the patients had appendicitis at the time of appendectomy.13 Moertel et al17 reviewed 150 patients with appendiceal carcinoid tumors and found that tumors that were smaller than 2.0 cm in greatest dimension were not associated with any metastatic disease. The incidence of metastatic disease increased with an increased size of tumor. Younger age seemed to be associated with larger tumors and greater risk of metastases.
Bronchial carcinoids may present with signs of cough, dyspnea, hemoptysis, or pleuritic pain. In a series of 17 patients by Wang et al8, symptoms included wheezing, hemoptysis, cough, dyspnea, and chest pain (20% to 30% of patients), weight loss of 3 to 14 kg over a period of weeks to months (30% of patients), and a hoarse voice in 1 patient. Nearly 50% of patients presented with pneumonia in which no organism was identified. No patient was completely asymptomatic at the time of diagnosis.
The classic “carcinoid syndrome,” which consists of some combination of wheezing, flushing, diarrhea, hypotension, and/or abdominal pain, is rare in pediatric patients. This is because most young patients with carcinoid tumors do not have metastatic disease to the liver. Symptoms of carcinoid syndrome are related to the variable catecholamine expression of carcinoid tumors. Specific substances that may be secreted alone or in combination include serotonin (5-HT), adrenocorticotropic hormone, substance P, gastrin, catecholamines, and 5-hydroxytryptophan (5-HTP).18 In general, foregut tumors have a low content of serotonin (5-HT) but often secrete its precursor, 5-HTP. They also often secrete histamine and many different polypeptide hormones. Midgut carcinoids typically have high serotonin content and rarely secrete the precursor, 5-HTP. These tumors may also produce adrenocorticotropic hormone and vasoactive substances such as kinins, prostaglandins, substance P, and neurokinin A. Hindgut carcinoids rarely secrete vasoactive substances, serotonin, or 5-HTP, thus making them less likely to be associated with carcinoid syndrome.18–20
One of the more serious complications of carcinoid is carcinoid heart disease.21–23 When carcinoid tumors secrete vasoactive substances, the substances are usually inactivated by the liver. However, when a patient has disease in the liver itself, the vasoactive substances are not inactivated and are allowed to reach the systemic circulation. The vasoactive substances are then able to travel through the circulation to the right heart, where they are associated with fibrous tissue deposition on the endocardial surfaces of the heart. Initial studies investigating carcinoid heart disease reported a rate as high as 70% in patients with carcinoid syndrome, but more recent studies suggest that the current rate of carcinoid heart disease may be lower for reasons that are not quite clear.23 Symptoms of carcinoid heart disease are related to the signs and symptoms of right heart failure. Patients may present with worsening shortness of breath, fatigue, and lower extremity edema. The right heart valves are damaged by the vasoactive substances, resulting in a combination of tricuspid regurgitation and stenosis. The pulmonary valve is less often involved, and left-sided lesions may occur in up to 10% to 15% of cases, thought to be due to a patent foramen ovale, bronchial carcinoid, or high levels of vasoactive substances. The presence of carcinoid heart disease has been shown to shorten survival in those patients with metastatic disease.23
The World Health Organization has provided a recent classification of NETs comprising 5 major categories: well-differentiated endocrine tumors (benign or low-grade malignancy), well-differentiated endocrine carcinomas, poorly differentiated endocrine carcinomas (small cell carcinomas), mixed exocrine and endocrine carcinomas, and tumor-like lesions. This differentiation is based on the tumor’s histology, tumor size, morphology, and the presence or absence of local invasion or metastases.24 When their histologic features are analyzed, carcinoid tumors are frequently separated into “typical” and “atypical” groups. By definition, typical histologic features include neuroendocrine differentiation with a classic architecture of clusters of cells in trabecular, insular, or ribbon-like patterns. When the tumors appear more aggressive or poorly differentiated, with increased mitotic activity and perhaps limited necrosis, they are considered atypical and therefore may be more clinically aggressive. Some tumors exhibit more “aggressive” features, such as invasion into the lymphatic or vascular spaces or into the fat surrounding the primary tumor. The histologic pattern of the tumor should be taken into account when making clinical decisions for patient care.
By staining tumor cells for Ki-67, the clinician can get some objective evidence as to the inherent aggressiveness of the tumor, and all carcinoid tumors should be stained for this marker. Well-differentiated tumors tend to have minimal areas of atypical cytology and <2% Ki-67–positive cells. Alternatively, poorly differentiated carcinoid tumors have more malignant potential with more necrosis and atypia present, with closer to 15% of cells showing Ki-67 positivity. Other tumor biology markers such as CD-44 and nm-23 have also shown some association with more aggressive and more malignant carcinoid tumors.25,26 In general, these well-differentiated NETs have positive immunohistochemical staining for chromogranin A, synaptophysin, and neuron-specific enolase. However, the embryonic origin of the tumor may affect some of its staining features. For example, midgut carcinoids are likely to be argentaffin positive, whereas foregut and hindgut tumors will stain argentaffin negative.27
Certain chromosomal alterations may be clues for the clinician that a more significant endocrine disorder, such as MEN, is present. For example, some of the foregut tumors may show a loss of heterozygosity at 11q13, leading the provider to think that MEN type 1 may be present. Other poorly differentiated tumors may show loss of heterozygosity for p53 or the adenomatous polyposis coli tumor suppressor gene.25
In pediatric and young adult patients, the insidious presentation of carcinoid tumors makes it difficult to get “baseline” levels of typical carcinoid tumor markers to follow long term. However, if carcinoid syndrome or a carcinoid tumor is suspected based on clinical symptoms or imaging studies, it is best to obtain several laboratory studies before proceeding with surgical biopsy or resection. Chromogranin A is thought to be one of the most common and reliable serum tests for carcinoid tumors. It is thought to be elevated in as many as 80% of patients with carcinoid cancer, and often an elevation of chromogranin A may predict a radiologic or clinical relapse of the disease. The chromogranin A elevation is very specific for disease recurrence, but unfortunately has a sensitivity of around 63%, with higher levels found in secreting tumors and in patients with metastatic disease.20 Pancreastatin has also been analyzed as a reliable serum test for carcinoid tumors. It has been shown to be an effective marker in the follow-up of patients who require hepatic artery chemoembolization and for those patients who may have metastatic disease.28–30
The most useful urine test is a 24-hour urine collection for 5-hydroxyindoleacetic acid, which is a metabolite of 5-HT. When performed correctly, elevated levels of 5-hydroxyindoleacetic acid in urine are specific (100%) but not very sensitive (35%) in patients with disease, due to the manner in which levels are influenced by foods such as bananas, avocados, walnuts, pineapples, etc.31 Other groups have advocated looking for other “tumor markers” when evaluating for carcinoid disease, including serotonin, gastrin, neuron-specific enolase, and neurokinin A.20 Some of these laboratory tests are of more benefit than others, depending on the location of the tumor and the patient’s symptoms at presentation. For example, gastrin may only be useful in evaluating masses that may arise from enterochromaffin-like cells, whereas none of the tests may be useful in nonsecreting hindgut tumors.
The types of studies used to detect this type of tumor depend largely on the location of the lesion and the associated symptoms of the patient. Bronchial carcinoid tumors are likely to be discovered on computed tomography (CT) or magnetic resonance imaging (MRI). Rectal or gastric carcinoids are most likely to be visualized directly by endoscopy, unless they are large enough to be detected by more routine imaging. Small intestinal tumors are the most difficult to localize by imaging studies and are usually only found in this manner after they have metastasized to other regions. Fortunately, the majority of pediatric and young adult carcinoid tumors are found incidentally during appendectomy or evaluation for other disease processes. In these cases, once a patient has been diagnosed with a carcinoid tumor, the patient must receive a thorough evaluation, not only of the primary tumor location but also for any potential metastatic disease. Local evaluation usually consists of a CT or MRI of the primary tumor site. Ideally, CT imaging should be done with a multidetector CT because it has high temporal and spatial resolution, high-quality reformatted images, and precise timing of the scan through bolus-tracking capability. For the liver in particular, the best imaging will include dual-phase imaging with both arterial and portal venous phases. Often, metastatic disease to the liver will show early enhancement with washout during the portal venous phase because the tumor is so hypervascular.32 For metastatic disease, the most common and easy test with an excellent positive predictive value is somatostatin receptor scintigraphy (considered the imaging study of choice, especially in those with gastroenteropancreatic NETs).33 111In-DTPA-octreotide combined with CT (single-photon emission computed tomography/CT) can be used to localize disease anywhere in the body. Recent studies have looked at combining a newer somatostatin analog, 68Ga-DOTA-tyrosine3-octreotide (DOTATOC), with positron emission tomography imaging. This form of imaging has shown a sensitivity of 97%, specificity of 92%, and an accuracy of 96% in patients with suspected NETs imaged with this modality. The detection rate with 68Ga-DOTATOC positron emission tomography/CT is higher than with 111In-DTPA-octreotide single-photon emission computed tomography/CT. Unfortunately, the availability of this test is limited.34
The specific treatment regimen used for carcinoid cancer is influenced by the tumor location, but invariably all these tumors need some sort of surgical resection. The patient’s best chance for cure comes from a complete resection of all known disease. Patients who present with larger tumors, more aggressive features on histology, residual disease, or metastatic disease may require closer monitoring and potentially further treatment. One thing that must be kept in mind when performing a surgical procedure on any patient with known carcinoid tumors is that a “carcinoid crisis” may be induced by anesthesia or significant stress to the patient. Having an experienced anesthesiologist who is aware of the potential for a carcinoid crisis is imperative to ensure that the patient is prepared adequately for surgery.
Small appendiceal tumors make up 18% of disease found in pediatric and adolescent patients, and these tumors are typically discovered when the appendix is being removed for some other reason. In cases where the tumor is small (<1.5 cm in size), without atypical or invasive histologic features, and without positive surgical margins, primary appendectomy is a sufficient therapy. However, if the tumor shows more aggressive features (atypical histology, size ≥2 cm) or has positive surgical margins, a second surgery should be performed. In pediatric patients, many surgeons will perform an ileocecal resection, whereas in adult patients most surgeons perform hemicolectomies. In both groups, it is not uncommon to recommend regional lymph node sampling. It is well established that patients with tumors ≥2 cm in size are more likely to have regional or even metastatic spread of disease.
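The criteria described above amount to a simple decision rule on tumor size, margins, and histology. The sketch below is purely illustrative of that logic as stated in the text; it is not clinical guidance, and the function name and the handling of the intermediate size range are ours:

```python
def appendiceal_carcinoid_plan(size_cm, positive_margins, atypical_or_invasive):
    """Illustrative restatement of the surgical criteria described above (not clinical guidance)."""
    if size_cm < 1.5 and not positive_margins and not atypical_or_invasive:
        # Small tumor, clear margins, typical histology: appendectomy alone is considered sufficient.
        return "appendectomy alone"
    if size_cm >= 2.0 or positive_margins or atypical_or_invasive:
        # Larger or more aggressive tumors: a second operation (ileocecal resection or
        # hemicolectomy) with regional lymph node sampling is recommended.
        return "second surgery with regional lymph node sampling"
    # Tumors of 1.5-2.0 cm without other risk features are not addressed explicitly in the text.
    return "individualized decision"

print(appendiceal_carcinoid_plan(1.0, False, False))  # appendectomy alone
print(appendiceal_carcinoid_plan(2.3, False, True))   # second surgery with regional lymph node sampling
```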
The method of resection of bronchial tumors varies based on the tumor location and the surgeon performing the procedure. An earlier review of pediatric bronchial carcinoid tumor patients by Wang et al8 showed that the majority of patients were treated with some variation of a lobectomy (13 of the 17 patients), whereas the others had lesions that were best treated by other means due to their specific anatomic location. Brokx et al35 have recently proposed an initial bronchoscopic treatment for those patients with intraluminal bronchial carcinoids. This approach may be more “tissue sparing” and still leads to a high cure rate from surgery alone. In their treatment of 72 adult patients, 37.5% had a complete response after an initial bronchoscopic treatment strategy, and another 11% had a complete response after a second bronchoscopic treatment. Of these 33 patients, 2 experienced recurrence of their disease, which was then easily treated by an open surgical procedure.
The use of somatostatin analogs has been shown to be of some help in patients with residual disease and significant symptoms of carcinoid syndrome. Treatment regimens vary from subcutaneous injections administered daily to monthly injections of long-acting depot preparations of somatostatin. Interferon-α has also been used in patients with more indolent disease, but the side effects of the therapy are usually poorly tolerated, often leading to a lower quality of life for a patient with a relatively slow-growing tumor.36 Chemotherapy in general is relatively unhelpful for patients with carcinoid tumors. Those who may benefit from therapy include those with a large tumor burden and more aggressive tumors as determined by histology. For example, tumors with higher levels of Ki-67 may respond better to cytotoxic chemotherapy than those with Ki-67 levels <2%. There have been many chemotherapeutic agents tried for treatment in adults; they include 5-fluorouracil, cisplatin, doxorubicin, dacarbazine, and various combinations of these agents. None, however, have shown an extended response to treatment. The median length of response to treatment is typically 3 to 6 months.37
Various medical treatments have been tried in adults with residual or metastatic carcinoid disease. One of the most common sites of metastatic disease, and the greatest source of substance production leading to carcinoid syndrome and future carcinoid heart disease, is the liver. Several methods have been proposed to reduce the bulk of tumor that may be present in the liver, ranging from simple tumor excision and liver transplantation to systemic chemotherapy agents and hepatic embolization. Clinicians may use selective arterial embolization or chemoembolization for patients with significant liver metastases. Chemoembolization has been shown to improve symptoms in more than 50% of patients with relatively mild side effects from the therapy. Difficulty arises when patients have a tumor burden of >60% of liver mass by Doppler ultrasound, because a significant amount of tumor necrosis may lead to a compensatory release of vasoactive substances leading to a carcinoid crisis.38,39 Also, the duration of benefit can be quite short, ranging from 4 to 24 months.40,41 One of the most promising therapies for carcinoid tumor treatment involves radionuclear treatment coupled with somatostatin analogs. Many groups have developed different radioactive analogs, coupling octreotide with 111Indium, Yttrium 90 DOTATOC, and 177Lu-octreotate, to name a few. Each analog has shown some success in patients with difficult-to-treat disease, but none have been actively studied in pediatric patients.42–44 A phase I study in children and young adults who have tumors that are positive for somatostatin receptors by somatostatin receptor scintigraphy is currently underway.
The North American Neuroendocrine Tumor Society recently published guidelines to improve NET disease management. Although these guidelines include specific details for management of well-differentiated, poorly differentiated, and more unusual forms of NETs in adults, they can also serve as a great reference for any provider with an unusual pediatric, adolescent, or young adult case.45
Although carcinoid tumors are rare in pediatric and adolescent patients, they do occur and may be associated with significant morbidity. The majority of these patients will be cured completely by surgical resection. These patients with small, localized disease still deserve adequate follow-up to assess for the possibility of recurrence. Disease presentation is often insidious so that it is difficult to get “baseline” levels of typical carcinoid tumor markers to follow long term. For those with more extensive metastatic disease at presentation, aggressive treatment of their tumors with surgical resection when possible, somatostatin analogs, hepatic chemoembolization, and perhaps novel therapeutic agents such as radioactive analogs may be indicated. It is recommended that all patients with carcinoid tumors have a history and physical examination after surgery, with tumor marker monitoring, and appropriate local area imaging studies (CT, MRI). Some patients may require an Octreoscan at diagnosis as well (nonappendiceal tumors). Patients with ≤2 cm appendiceal tumors generally require no further follow-up. Patients with rectal tumors ≥2 cm will require proctoscopy at regular intervals to be determined by individual patient profile. Proctoscopy is recommended at 6 and 12 months, then as clinically indicated. Most other patients with larger tumors or tumors in other locations will require follow-up with history/physical, tumor markers, and/or imaging studies on a regular interval within the first 3 years of diagnosis and subsequently as clinically determined.
Patients of all ages who have carcinoid tumors, especially in association with carcinoid syndrome, are at risk for carcinoid heart disease and should be monitored appropriately. This is especially so for patients with metastatic disease. Additional imaging recommended for heart disease includes triple-phase CT and MRI. In the absence of any evidence of metastatic disease, observation, screening for tumor markers, and imaging are recommended every 3 to 6 months, or until disease becomes evident. Enrollment in clinical trials is a palliative measure that remains an option for patients with unresectable metastatic disease.
The sporadic occurrence of carcinoid tumors makes primary prevention difficult, with logistic and cost implications associated with any attempts at screening for this rare cancer. Similarly, their occurrence in unusual sites results in missed diagnoses, inadvertent neglect, and therefore delayed intervention.
In pediatric and young adult patients, it is rare for the provider to suspect carcinoid cancer before the tumor has been resected, so that, at best, secondary prevention may be achieved with complete resection of tumors that are localized at diagnosis. Knowledge of the association of carcinoid tumors with MEN1 may aid clinicians in anticipating carcinoid tumors and in instituting early interventions. Genetic counseling is of relevance in this group of patients. In patients with functioning carcinoid tumors with the classic carcinoid syndrome, long-acting release octreotide is recommended for chronic prevention.
With the increased awareness of medical staff and the potential increase in incidence of carcinoid tumors, clinicians need to be knowledgeable about diagnosis, treatment, and management options for these patients, whose tumors can present a management challenge. Clinical trials remain an effective means to improve management of tumors like carcinoids.
1. Hamilton SR, Aaltonen LA, eds. World Health Organization Classification of Tumours. Pathology and Genetics of Tumours of the Digestive System. 2000 Lyon IARC Press:77–82
2. Oberg K. Diagnosis and treatment of carcinoid tumors. Expert Rev Anticancer Ther. 2002;3:863–877
3. Yao JC, Hassan M, Phan A, et al. One hundred years after “carcinoid”: epidemiology of and prognostic factors for neuroendocrine tumors in 35,825 cases in the United States. J Clin Oncol. 2008;26:3063–3072
4. Modlin I, Oberg K, Chung D, et al. Gastroenteropancreatic neuroendocrine tumours. Lancet Oncol. 2008;9:61–72
5. Navalkele P, O’Dorisio M, O’Dorisio TM, et al. Neuroendocrine tumors in children and young adults: incidence, survival, and prevalence in the United States. Pancreas. 2010;29:278
6. Corpron CA, Black CT, Herzog CE, et al. A half century of experience with carcinoid tumors in children. Am J Surg Pathol. 1995;170:606–608
7. Brandt B III, Heintz SE, Rose EF, et al. Bronchial carcinoid tumors. Ann Thorac Surg. 1984;38:63–65
8. Wang LT, Wilkins EW Jr, Bode HH. Bronchial carcinoid tumors in pediatric patients. Chest. 1993;103:1426–1428
9. Modlin IM, Lye KD, Kidd M. A 5-decade analysis of 13,715 carcinoid tumors. Cancer. 2003;97:934–959
10. Modlin IM, Shapiro MD, Kidd M. An analysis of rare carcinoid tumors: clarifying these clinical conundrums. World J Surg. 2005;29:92–101
11. Spunt SL, Pratt CB, Rao BN, et al. Childhood carcinoid tumors: the St Jude Children’s Research Hospital experience. J Pediatr Surg. 2000;35:1282–1286
12. Neves GR, Chapchap P, Sredni ST, et al. Childhood carcinoid tumors: description of a case series in a Brazilian cancer center. Sao Paulo Med J. 2006;124:21–25
13. Prommegger R, Obrist P, Ensinger C, et al. Retrospective evaluation of carcinoid tumors of the appendix in children. World J Surg. 2002;26:1489–1492
14. Dall’Igna P, Ferrari A, Luzzatto C, et al. Carcinoid tumor of the appendix in childhood: the experience of two Italian institutions. J Pediatr Gastroenterol Nutr. 2005;40:216–219
15. Broaddus RR, Herzog CE, Hicks MJ. Neuroendocrine tumors (carcinoid and neuroendocrine carcinoma) presenting at extra-appendiceal sites in childhood and adolescence. Arch Pathol Lab Med. 2003;127:1200–1203
16. Moertel CL, Weiland LH, Telander RL. Carcinoid tumor of the appendix in the first two decades of life. J Pediatr Surg. 1990;25:1073–1075
17. Moertel CG, Weiland LH, Nagorney DM, et al. Carcinoid tumor of the appendix: treatment and prognosis. N Engl J Med. 1987;317:1699–1701
18. Jensen RT, Norton JA. Carcinoid tumors and carcinoid syndrome. In: DeVita VT, Hellman S, Rosenberg SA, eds. Cancer: Principles and Practice of Oncology. Vol 2. 6th ed. 2001 Philadelphia Pa Lippincott Williams & Wilkins:1813–1826
19. Soga J, Yakuwa Y, Osaka M. Carcinoid syndrome: a statistical evaluation of 748 reported cases. J Exp Clin Cancer Res. 1999;18:133–141
20. Vinik AI, Woltering EA, O’Dorisio TM, et al. Neuroendocrine Tumors. A Comprehensive Guide to Diagnosis and Management. 2006 Inglewood Inter Science Institute:11–12
21. Thorson A, Blorck G, Bjorkman G, et al. Malignant carcinoid of the small intestine with metastases to the liver, valvular disease of the right side of the heart (pulmonary stenosis and tricuspid regurgitation without septal defects), peripheral vasomotor symptoms, bronchoconstriction, and an unusual type of cyanosis; a clinical and pathologic syndrome. Am Heart J. 1954;47:795–817
22. Pellikka PA, Tajik AJ, Khandheria BK, et al. Carcinoid heart disease. Clinical and echocardiographic spectrum in 74 patients. Circulation. 1993;87:1188–1196
23. Bhattacharyya S, Davar J, Dreyfus G, et al. Carcinoid heart disease. Circulation. 2007;116:2860–2865
24. Travis WD, Brambilla E, Muller-Hermelink K, et al. Pathology & Genetics: Tumours of the Lung, Pleura, Thymus, and Heart. 2004 Lyon IARC Press:19–20
25. Oberg K. Carcinoid tumors: molecular genetics, tumor biology, and update of diagnosis and treatment. Curr Opin Oncol. 2002;14:38–45
26. Granberg D, Wilander E, Oberg K, et al. Prognostic markers in patients with typical bronchial carcinoid tumors. J Clin Endocrinol Metab. 2000;85:3425–3430
27. Wells CA, Taylor SM, Cuello AC. Argentaffin and argyrophil reactions and serotonin content of endocrine tumours. J Clin Pathol. 1985;38:49–53
28. Desai DC, O’Dorisio TM, Schirmer WJ, et al. Serum pancreastatin levels predict response to hepatic artery chemoembolization and somatostatin analogue therapy in metastatic neuroendocrine tumors. Regul Pept. 2001;96:113–117
29. Calhoun K, Toth-Fejel S, Cheek J, et al. Serum peptide profiles in patients with carcinoid tumors. Am J Surg Pathol. 2003;186:28–31
30. O’Dorisio TM, Krutzik SR, Woltering EA, et al. Development of a highly sensitive and specific carboxy-terminal human pancreastatin assay to monitor neuroendocrine tumor behavior. Pancreas. 2010;39:279
31. Bajetta E, Ferrari L, Martinetti A, et al. Chromogranin A, neuron specific enolase, carcinoembryonic antigen, and hydroxyindole acetic acid evaluation in patients with neuroendocrine tumors. Cancer. 1999;86:858–865
32. Khanna G, O’Dorisio SM, Menda Y, et al. Gastroenteropancreatic neuroendocrine tumors in children and young adults. Pediatr Radiol. 2008;38:251–259
33. Gibril F, Jensen RT. Diagnostic uses of radiolabelled somatostatin receptor analogues in gastroenteropancreatic endocrine tumours. Dig Liver Dis. 2004;36(suppl):S106–S120
34. Gabriel M, Decristoforo C, Kendler D, et al. 68Ga-DOTA-Tyr3-octreotide PET in neuroendocrine tumors: comparison with somatostatin receptor scintigraphy and CT. J Nucl Med. 2007;48:508–518
35. Brokx HA, Risse EK, Paul MA, et al. Initial bronchoscopic treatment for patients with intraluminal bronchial carcinoids. J Thorac Cardiovasc Surg. 2007;133:973–978
36. Oberg K, Eriksson B. The role of interferons in the management of carcinoid tumors. Acta Oncol. 1991;30:519–522
37. Kulke MH. Clinical presentation and management of carcinoid tumors. Hematol Oncol Clin North Am. 2007;21:433–455
38. Rickes S, Ocran KW, Gerstenhauer G, et al. Evaluation of diagnostic criteria for liver metastases of adenocarcinomas and neuroendocrine tumours at conventional ultrasound, unenhanced power Doppler sonography and echo-enhanced ultrasound. Dig Dis. 2004;22:81–86
39. Mörk H, Ignee A, Schuessler G, et al. Analysis of neuroendocrine tumour metastases in the liver using contrast enhanced ultrasonography. Scand J Gastroenterol. 2007;42:652–662
40. Gupta S, Yao JC, Ahrar K, et al. Hepatic artery embolization and chemoembolization for treatment of patients with metastatic carcinoid tumors: the M.D. Anderson experience. Cancer J. 2003;9:261–267
41. Ruszniewski P, Rougier P, Roche A, et al. Hepatic arterial chemoembolization in patients with liver metastases of endocrine tumors. A prospective phase II study in 24 patients. Cancer. 1993;71:2624–2630
42. Waldherr C, Pless M, Maecke HR, et al. Tumor response and clinical benefit in neuroendocrine tumors after 7.4 GBq (90)Y-DOTATOC. J Nucl Med. 2002;43:610–616
43. Kwekkeboom DJ, Mueller-Brand J, Paganelli G, et al. Overview of results of peptide receptor radionuclide therapy with 3 radiolabeled somatostatin analogs. J Nucl Med. 2005;46(suppl):62S–66S
44. van Essen M, Krenning EP, Bakker WH, et al. Peptide receptor radionuclide therapy with 177Lu-octreotate in patients with foregut carcinoid tumours of bronchial, gastric and thymic origin. Eur J Nucl Med Mol Imaging. 2007;34:1219–1227
45. North American Neuroendocrine Tumor Society. Guidelines. Pancreas. 2010;39:705–800
Hydrogen offers a new way to study the Moon
The Moon is a surprisingly strong source of hydrogen atoms. That is the surprise discovery from ESA-ISRO instrument SARA onboard the Indian Chandrayaan-1 lunar orbiter. It gives scientists an interesting new way to study both the Moon and any other airless bodies in the Solar System.
According to conventional wisdom, the lunar surface is a loose collection of irregular dust grains. Any particle that hits it should bounce between the grains and be absorbed. But the new results clearly show that one out of every five protons incoming from the solar wind rebounds from the Moon’s surface. In the process, the proton joins with an electron to become an atom of hydrogen.
“We didn’t expect to see this at all,” says Stas Barabash, Swedish Institute of Space Physics, who is the European Principal Investigator for the SARA (Sub-keV Atom Reflecting Analyzer) instrument, which made the discovery. “It’s an amazing discovery for the planetary scientific community in general and for the lunar science in particular”, says Anil Bhardwaj, who is the Indian Principal Investigator from the Space Physics Laboratory, Vikram Sarabhai Space Centre, Trivandrum. SARA was one of three instruments that ESA contributed to Chandrayaan-1, the lunar orbiter that completed its mission in August 2009; the instrument was built jointly by scientific groups from Sweden, India, Japan, and Switzerland.
Although Barabash and his colleagues do not know what is causing the Moon to act as a hydrogen mirror, the discovery paves the way for a new type of picture to be made of the lunar surface. This is because the hydrogen atoms shoot off with speeds of around 200 km/s and so escape without being deflected by the Moon’s weak gravity. Also, because hydrogen is electrically neutral it is not diverted by the magnetic fields in space. So the atoms fly in straight lines from the surface of the Moon, just like photons of light. In principle, each detection can be traced back to its origin and an image of the surface can be made. The areas that emit most hydrogen will show up the brightest.
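The claim that the atoms fly off in straight lines can be checked against the Moon’s escape velocity: at roughly 200 km/s the hydrogen atoms move almost a hundred times faster than the speed needed to escape lunar gravity. A quick back-of-the-envelope check, using approximate textbook values for the lunar mass and radius (these numbers are ours, not from the article):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.35e22     # lunar mass, kg (approximate)
R_MOON = 1.737e6     # lunar radius, m (approximate)

v_escape = math.sqrt(2 * G * M_MOON / R_MOON)  # escape velocity at the lunar surface
v_atom = 200e3                                 # ~200 km/s, as quoted in the article

print(f"Lunar escape velocity: {v_escape / 1000:.2f} km/s")   # ~2.4 km/s
print(f"Reflected hydrogen atom: {v_atom / 1000:.0f} km/s, "
      f"about {v_atom / v_escape:.0f}x escape velocity")
```

Because the atoms are also electrically neutral, neither gravity nor magnetic fields bend their paths appreciably, which is what makes the imaging technique described below possible.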
Barabash and his team are currently analysing the data to see if they can make such pictures, in order to look for so-called lunar magnetic anomalies. Whilst the Moon does not generate a global magnetic field, some lunar rocks are magnetised. These generate magnetic bubbles that deflect incoming protons away into surrounding regions. In a hydrogen image, the magnetic rocks will therefore appear dark.
The incoming protons are part of the solar wind, a constant stream of particles given off by the Sun. They collide with every celestial object in the Solar System but are usually stopped by the body’s atmosphere. On bodies without such a natural shield, for example asteroids or the planet Mercury, the solar wind reaches the ground. The SARA team expects that these objects too will reflect many of the incoming protons back into space as hydrogen atoms.
This knowledge provides timely advice for the scientists and engineers who are readying ESA’s BepiColombo mission to Mercury. The spacecraft will be carrying two similar instruments to SARA and may find that the innermost planet is reflecting more hydrogen than the Moon because the solar wind is more concentrated closer to the Sun. In the meantime, the SARA team is combing the lunar data for insight, and puzzling over just why the Moon is so good at reflecting hydrogen.
Notes to Editors:
SARA was one of three instruments that ESA contributed to Chandrayaan-1, the lunar orbiter that finished its mission in August 2009. The instrument was built jointly by scientific groups from Sweden, India, Japan, and Switzerland: Swedish Institute of Space Physics, Kiruna, Sweden; Vikram Sarabhai Space Centre, Trivandrum, India; University of Bern, Switzerland; and Institute of Space and Astronautical Science, Sagamihara, Japan. The instrument is led by Principal Investigators Stanislav Barabash, IRF, Sweden, and Anil Bhardwaj, VSSC, India.
This article reflects findings presented in ‘Extremely high reflection of solar wind protons as neutral hydrogen atoms from regolith in space’, by M. Wieser, S. Barabash, Y. Futaana, M. Holmström, A. Bhardwaj, R. Sridharan, M.B. Dhanya, P. Wurz, A. Schaufelberger and K. Asamura, Planetary and Space Science 2009, doi: 10.1016/j.pss.2009.09.012
Figure 'SARA measurements of hydrogen flux on the Moon' reproduced with permission from Elsevier.
For more information:
Detlef Koschny, ESA Chandrayaan-1 Project Scientist
Email: Detlef.Koschny @ esa.int
Erratum to an earlier version of this article:
How the Moon produces its own water still remains a mystery. The SARA discovery is that the Moon reflects a large fraction of the incoming solar wind protons as hydrogen atoms. What happens with the solar wind protons absorbed by the surface is out of the scope of the SARA investigation. |
Learn Place Value with Building Blocks! Use your Duplo blocks together with a dry erase marker to teach place value in a hands-on and effective way! Perfect way to introduce place value to young students!
Older students need to move like this too! Learn how to use interactive number lines in your classroom to teach whole numbers, fractions and decimals. Use this fun and interactive math activity to develop conceptual understanding in all students.
A recently discovered set of original Nikola Tesla drawings reveal a map to multiplication that contains all numbers in a simple to use system. The drawings were discovered at an antique shop in central Phoenix Arizona by local artist, Abe Zucca. |
In 2012, U.S. oil production rose by 790,000 barrels per day, the biggest annual increase since U.S. oil production began in 1859. In 2013, the Energy Information Administration expects production to rise yet again, by 815,000 barrels per day, which would set another record. Domestic natural gas production is also at record levels.
What has allowed such dramatic production increases? Innovation in the drilling sector. The convergence of a myriad of technologies—ranging from better drill bits and seismic data to robotic rigs and high-performance pumps—is allowing the oil and gas sector to produce staggering quantities of energy from locations that were once thought to be inaccessible or bereft of hydrocarbons.
The dominance of oil and gas in our fuel mix will continue. The massive scale of the global drilling sector, combined with its technological prowess, gives us every reason to believe that we will have cheap, abundant, reliable supplies of oil and gas for many years to come.
The key findings of this paper include:
- Between 1949 and 2010, thanks to improved technology, oil and gas drillers reduced the percentage of dry holes drilled from 34 percent to 11 percent.
- Global spending on oil and gas exploration dwarfs what is spent on "clean" energy. In 2012 alone, drilling expenditures were about $1.2 trillion, nearly 4.5 times the amount spent on alternative energy projects.
- Despite more than a century of claims that the world is running out of oil and gas, estimates of available resources continue rising because of innovation. In 2009, the International Energy Agency more than doubled its prior-year estimate of global gas resources, to some 30,000 trillion cubic feet—enough gas to last for nearly three centuries at current rates of consumption.
- In 1980, the world had about 683 billion barrels of proved reserves. Between 1980 and 2011, residents of the planet consumed about 800 billion barrels of oil. Yet in 2011, global proved oil reserves stood at 1.6 trillion barrels, an increase of 130 percent over the level recorded in 1980.
- The dramatic increase in oil and gas resources is the result of a century of improvements to older technologies such as drill rigs and drill bits, along with better seismic tools, advances in materials science, better robots, more capable submarines, and, of course, cheaper computing power. |
The Life and Death of Stars by Kenneth R. Lang
WEB-Rip | WMV @ 1 Mbit/s | 640x360 | WMA Stereo @ 128 Kbit/s 44 KHz | 12 Hours | 9.77 GB
Genre: Astronomy, Astrophysics | Language: English | PDF Included
For thousands of years, stars have been the prime example of something unattainable and unknowable—places so far away that we can learn almost nothing about them. Yet amazingly, astronomers have been able to discover exactly what stars are made of, how they are born, how they shine, how they die, and how they play a surprisingly direct role in our lives. Over the past century, this research has truly touched the stars, uncovering the essential nature of the beautiful panoply of twinkling lights that spans the night sky.
Consider these remarkable discoveries about the stars:
We are stardust: Every atom heavier than hydrogen and a few other light elements was forged at the heart of a star. The oxygen we breathe, the carbon in every cell of our bodies, and practically all other chemical elements are, in fact, stellar ashes.
Light fingerprints: Stars emit light across the entire range of the electromagnetic spectrum. Spectral lines and other features of starlight act like fingerprints to identify what a star is made of, its temperature, motion, and other properties.
Diamonds in the sky: Carbon is the end product of stars that are roughly the size of our sun. When such stars die, they shrink down to an unimaginably dense and inert ball of carbon atoms—a massive diamond in the sky called a white dwarf.
Space weather: Stars produce more than light and heat. Their outermost layer emits a steady stream of charged particles that constitutes a stellar wind. This wind can be strong enough to strip an atmosphere off a nearby planet.
No other large-scale object in the universe is as fundamental as a star. Galaxies are made of stars. Planets, asteroids, and comets are leftover debris from star formation. Nebulae are the remnants of dead stars and the seedbed for a new generation of stars. Even black holes, which are bizarre deformations of spacetime with infinite density, are a product of stars, typically created when a high-mass star ends its life in core collapse and a supernova explosion. And, of course, the sun is a star, without which we couldn’t exist.
Long ago, the magnificence of the star-filled sky and its clock-like motions inspired people to invent myths to explain this impressive feature of nature. Now we understand the stars at a much deeper level, not as legendary figures connected with constellations, but as engines of matter, energy, and the raw material of life itself. And thanks to powerful telescopes, our view of the stars is more stunning than ever.
The Life and Death of Stars introduces you to this spectacular story in 24 beautifully illustrated half-hour lectures that lead you through the essential ideas of astrophysics—the science of stars. Your guide is Professor Keivan G. Stassun of Vanderbilt University, an award-winning teacher and noted astrophysicist. Professor Stassun provides lively, eloquent, and authoritative explanations at a level suitable for science novices as well as for those who already know their way around the starry sky.
Understand Astronomy at a Fundamental Level
Stars are a central topic of astronomy, and because the study of stars encompasses key concepts in nuclear physics, electromagnetism, chemistry, and other disciplines, it is an ideal introduction to how we understand the universe at the smallest and largest scales. Indeed, today’s most important mysteries about the origin and fate of the universe are closely connected to the behavior of stars. For example, the accelerating expansion of the universe due to a mysterious dark energy was discovered thanks to a special type of supernova explosion that serves as an accurate distance marker across the universe. And another enigma, dark matter, may have played a crucial role in the formation of the earliest stars.
Using dazzling images from instruments such as the Hubble Space Telescope, along with informative graphics and computer animations, The Life and Death of Stars takes you to some otherworldly destinations, including these:
Stellar nurseries: Stars form inside vast clouds of interstellar gas and dust, where every phase of stellar growth can often be seen. Take a virtual fly-through of the Orion Nebula, witnessing the dynamism of stellar creation and the immensity of the regions where stars are born.
Planetary nebulae: Mislabeled “planetary” because they were originally thought to involve planets, these slowly expanding shells of glowing gas are the last outbursts of dying stars. They vary widely in shape and color and are among the most beautiful of celestial sights.
Core of the sun: We can’t see into the sun, but sunquakes and other clues reveal the extreme conditions at its center, 400,000 miles below the visible surface. Make an imaginary trip there, viewing the layers that transfer heat from the 15-million-degree Celsius cauldron at the sun’s core.
Protoplanetary systems: Planets form inside disks of gas and dust surrounding young stars. See how newborn planets jockey for position close to their parent stars and how some planets are ejected from the system—a fate that may have befallen planets orbiting our own sun.
Reach for the Stars
Just as fascinating as the places you visit are the observational techniques you learn about. One of Professor Stassun’s research areas is exoplanetary systems—planets orbiting other stars. You investigate the different methods astronomers use to detect inconspicuous, lightless planets lost in the glare of brilliant stars, seen from many light-years away. You also explore the principles of telescopes and light detectors, and you learn about the vast range of the electromagnetic spectrum, the largest part of which is invisible to human eyes—but not to our instruments.
An astronomer’s other tools for understanding stars include the invaluable Hertzsprung-Russell diagram, which tells the complete story of stellar evolution in one information-rich graphic. You compare the sun’s position on this chart with the entire range of other star types that have varying masses, temperatures, and colors.
You also become familiar with the periodic table of elements, discovering how fusion reactions inside stars forge successively heavier atoms, producing some in abundance, temporarily skipping others, and creating everything heavier than iron in the cataclysmic blast of a supernova. Nickel, copper, gold, and scores of other elements important to humans thus owe their existence to the most energetically powerful phenomenon in the cosmos. You see, too, how astronomers use computer models to analyze the rapid sequence of events that leads to a supernova.
“Hitch your wagon to a star,” advised Ralph Waldo Emerson. In other words, reach for the stars! The Life and Death of Stars is your guide to this lofty goal.
01 Why the Stellar Life Cycle Matters
02 The Stars’ Information Messenger
03 Measuring the Stars with Light
04 Stellar Nurseries
05 Gravitational Collapse and Protostars
06 The Dynamics of Star Formation
07 Solar Systems in the Making
08 Telescopes—Our Eyes on the Stars
09 Mass—The DNA of Stars
10 Eclipses of Stars—Truth in the Shadows
11 Stellar Families
12 A Portrait of Our Star, the Sun
13 E = mc2—Energy for a Star’s Life
14 Stars in Middle Age
15 Stellar Death
16 Stellar Corpses—Diamonds in the Sky
17 Dying Breaths—Cepheids and Supernovae
18 Supernova Remnants and Galactic Geysers
19 Stillborn Stars
20 The Dark Mystery of the First Stars
21 Stars as Magnets
22 Solar Storms—The Perils of Life with a Star
23 The Stellar Recipe of Life
24 A Tale of Two Stars
People living with cancer may experience sleeping problems related to the cancer, cancer treatment, emotional factors, or other unrelated medical conditions. Sleeping problems may include hypersomnia, somnolence syndrome, or nightmares. Learn more about insomnia – another sleeping disorder.
Hypersomnia (also called somnolence, excessive daytime sleepiness, or prolonged drowsiness) is a condition in which you may feel very sleepy during the day or want to sleep for longer than normal at night. It can include long periods of sleep (10 or more hours at a time), excessive amounts of deep sleep, and difficulty staying awake during the day. In addition, daytime napping typically does not relieve the excessive sleepiness.
Hypersomnia may interfere with your relationships, prevent you from enjoying activities, and make it difficult to handle daily activities, such as attending doctors' appointments, completing household chores, and managing family or work responsibilities.
Although similar, hypersomnia and fatigue are not the same. Fatigue involves feelings of exhaustion and lack of energy that are not relieved by sleep. However, unlike hypersomnia, fatigue is not associated with excessive daytime sleeping and the inability to stay awake.
The following types of cancer, cancer treatment, and other medical conditions can cause hypersomnia:
- Cancers of the brain and central nervous system (CNS)
- A secondary brain tumor (cancer that has spread to the brain from somewhere else in the body)
- Chemotherapy, such as teniposide (Vumon), pegaspargase (Oncaspar), and thalidomide (Thalomid)
- Other prescription and over-the-counter medications, including some antidepressants, antinausea medications, opioid pain killers (pain medications), sedatives (medications that calm or cause sleep), antihistamines (medications used to treat allergy or cold symptoms), and sleeping pills
- Anemia (low red blood cell count)
- Changes in hormone levels in the body
- Other symptoms of cancer or side effects of cancer treatments, including hypercalcemia (high levels of calcium), hypokalemia (low levels of potassium), hypothyroidism (a condition in which the thyroid gland is underactive and doesn't make enough thyroid hormones), and depression
Relieving side effects (also called symptom management, palliative care, or supportive care) is an important part of cancer care and treatment. Talk with your health care team about any symptoms you experience, including any new symptoms or a change in symptoms.
If possible, hypersomnia is first treated by diagnosing and treating the underlying cause. Often, hypersomnia related to chemotherapy improves after treatment ends. If other medications are causing it, your doctor may be able to substitute a different medication or adjust the dosage. Your doctor may also prescribe stimulant medications to help you stay awake during the day.
The following behavioral strategies may help you manage hypersomnia:
- Sleep a few hours longer at night to avoid excessive sleepiness during the day.
- Exercise daily in the morning or early afternoon, if possible.
- Engage in enjoyable activities that require your full attention, such as spending time with friends, writing letters, or playing with a pet.
- Try to go to sleep and wake up at the same time every day.
- Get out of bed and stay out of bed until bedtime.
- Avoid foods that make you sleepy and heavy meals during the day.
- Avoid alcohol and caffeine.
Somnolence syndrome is a type of hypersomnia associated with cranial radiation (radiation therapy to the head) in children. Symptoms of somnolence syndrome include excessive drowsiness, prolonged periods of sleep (up to 20 hours a day), headaches, low-grade fever, loss of appetite, nausea, vomiting, and irritability. Symptoms usually occur three to 12 weeks after the end of radiation treatment and can last a few days or several weeks.
Nightmares are vivid, frightening dreams that usually cause the person to wake up able to remember part or most of the dream. Most people have nightmares from time to time, but the frequency or vividness of nightmares can increase after a cancer diagnosis and during cancer treatment. Frequent nightmares can lead to a fear of going to sleep, restless sleep, and daytime sleepiness.
Nightmares are often associated with an increase in emotional stress, and they are thought to be a way in which the mind works through unresolved feelings and fears. Sometimes, nightmares are caused by certain medications (such as some antibiotics, iron supplements, opioid pain medications, and heart medications), withdrawal from CNS depressants (substances that can slow brain function, such as alcohol, opioids, and some anti-anxiety medications), and uncontrolled pain.
Because having cancer is frightening and stressful, it is normal to experience some nightmares during treatment and recovery. The following tips may help you cope with nightmares:
- Be honest about your fears and feelings, discussing them with a family member or friend early in the day, rather than at night.
- Talk about the nightmares with a family member or friend.
- Find creative ways to express the content or themes of the nightmares, such as writing in a journal or drawing a picture.
- Make up alternative endings or storylines to the nightmares, and visualize them.
Remember that nightmares are not real, and they do not predict the future or cause bad things to happen. If the nightmares become frequent or continue for a prolonged time, cause excessive anxiety, or prevent you from sleeping well, talk with your doctor or seek help from a counselor. |
A type of invisibility cloak has been developed that could dramatically reduce the size of processing chips in future computers.
Photonics is the future of computing. Yet to be perfected, this form of physics is destined to play a central role in tomorrow’s data centres, mobile devices and pretty much everything that will connect the world.
The reason it will dominate is that despite not yet being an ideal way of transferring memory or energy in a manner that suits our current technologies, physicists know they’re on to something.
If only they could find a way to harness their current findings into anything tangible, manageable and ultimately, useable.
The main problem is at a small scale. Photonic chips are the holy grail, replacing today’s silicon-based variants with much faster options that consume less power and, thus, heat up less.
The photonics contained in these future chips could make up billions of devices, each with their own role, much like modern transistors. However unlike transistors, photonic devices don’t work well when they’re bundled beside each other.
They will not work because the light leakage between them will cause “crosstalk”, similar to radio interference. If they are spaced far apart to solve this problem, you end up with a chip that is much too large.
But University of Utah electrical and computer engineering associate professor, Rajesh Menon, and his team have developed a cloaking device to solve that problem.
“The principle we are using is similar to that of the Harry Potter invisibility cloak,” Menon said in a paper published in Nature Communications.
“Any light that comes to one device is redirected back, as if to mimic the situation of not having a neighbouring device.
“It’s like a barrier – it pushes the light back into the original device. It is being fooled into thinking there is nothing on the other side.”
Menon believes the most immediate application for this technology, and for photonic chips in general, will be for data centres similar to the ones used by services like Google and Facebook.
“By going from electronics to photonics, we can make computers much more efficient and ultimately, make a big impact on carbon emissions and energy usage for all kinds of things,” Menon said. “It’s a big impact and a lot of people are trying to solve it.”
A few months ago, a team of researchers took the first step towards a quantum internet, after they successfully teleported a particle of light 6km away over a straight line distance.
Having been teleported via a fibre optic cable across the city of Calgary, this teleportation has set a new record for transferring a quantum state by teleportation.
Soon after, researchers at Pennsylvania State University developed a very sophisticated, high-speed beam-scanning technique that could take printing speeds into overdrive.
By using a space charge-controlled KTN beam deflector – a kind of crystal made of potassium tantalate and potassium niobate – with a large electro-optic effect, the team has been able to increase the speed of 2D and 3D printing by up to 1,000 times. |
Flooding in the Fraser Valley
All over the world, climate change is causing water levels to rise. This increases the risk of flooding, in Surrey and across the South Fraser region. Sudden and heavy rains can also cause flooding.
On a farm, flooding can cause:
- Lost crops.
- Loss of oxygen and nutrients in soil.
- Gravel left behind on land.
The Fraser River and its water levels are the region’s biggest concerns when it comes to flooding. This is because so much agricultural land lines the riverbanks. Local governments are working together on a regional flood strategy, and improving the river’s system of dykes. For example, the City of Chilliwack has spent more than $9 million on dyke upgrades in the last 10 years, covering nearly 20 km, or half of the dyking system.
How to Prepare for Flooding
Before you lease land, look into its flood history. Pay a visit to city hall, and find out whether the property is on a floodplain – and what measures your municipality has in place for flood protection and preparation. Learn about Surrey’s floodplain areas.
To protect your South Fraser farmland from flooding:
- Watch the weather closely, and keep up with flood warnings in Surrey, Chilliwack, Langley, and Abbotsford, and across BC.
- Build flood-preventing infrastructure, such as flood walls and wells, and reduce tilling.
- Plant cover crops. These are crops you plant in addition to your main crop, to protect against erosion and flooding. Examples include red clover, rye, and oats. In some cases, cover crops also improve soil health, prevent pests and diseases, and increase yields. The U.S. organization Sustainable Agriculture Research and Education has more information on the benefits of cover crops.
- Review the Fraser Basin Council literature on Flood Management.
Recovering from a Farm Flood
A flood can be devastating to a small farm. If you think you can recover some costs, consider replanting. Just be sure the decision is economic, and not emotional. Calculate all replanting costs, including clearing any debris, seeding, planting, pesticides, tilling, and harvesting, and subtract them from the potential earnings of a delayed harvest. Also be certain the soil will dry out enough in time to support growth (soggy soils may interfere with root development). |
Introduction To Civil War Infantry Organization
The smallest fighting unit for the infantry during the Civil War was the company. Companies generally consisted of 100 men on paper but were seldom up to strength due to casualties and illnesses. The staff of a company consisted of a Captain, who commanded, a 1st Lieutenant, a 2nd Lieutenant, two Sergeants, and several Corporals. While the company was the smallest unit, it would at times be split up into platoons, sections, and squads, but not for extended periods of time, and these rarely, if ever, acted as independent commands.
Infantry companies were banded together with other companies to form battalions or regiments. Generally, there were eight companies to a battalion and ten companies to a regiment (the Union sometimes used twelve); companies were designated with letters from the alphabet such as "A", "B", "C", "D", etc. (The letter "J" was not used because it looked too much like the letter "I".) Companies often carried the name of the individual or individuals who organized the company, or the name of the place from which they came. For example, Company "G" of the 38th North Carolina Infantry Regiment was also known as the "Rocky Face Rangers". The staff of a regiment included a Colonel, who commanded, a Lieutenant Colonel, a Major, a 1st Lieutenant (who acted as an Adjutant), a Surgeon, an Assistant Surgeon, a Quartermaster, a Commissary Officer, and a Sergeant Major. The regiment was the primary fighting force for both the Union and the Confederacy.
Regiments were usually grouped together with other regiments to form a brigade. Brigades were commanded by a Brigadier General, and usually, but not always, regiments from the same state were brigaded together. Confederate Brigades were generally known by the name of the Brigadier General who commanded it, such as Scales's Brigade. Scales's Brigade was commanded by Alfred Scales and was comprised of the 13th, 16th, 22nd, 34th, and 38th North Carolina Infantry Regiments. Union Brigades were usually numbered.
When several brigades were grouped together, they formed a division. Major Generals led divisions with Confederate divisions being named for the general who commanded it, such as Wilcox's Division in the Army of Northern Virginia. Union divisions were numbered with Roman numerals.
When several divisions were organized together, they formed a corps. A corps was commanded by a Lieutenant General and could operate independently or as part of the larger army, which was its usual role. Like other large Confederate units, Confederate corps were named for their commander, such as Hill's Corps. Confederate corps in the Army of Northern Virginia also had numbers; Hill's Corps was also known as the Third Corps.
Union Corps were numbered as were the rest of military organizations, except for Armies.
Armies were the largest of all the fighting units during the Civil War and were composed of corps, divisions, brigades, and regiments and included artillery, cavalry, signal corps, and various other units. A Lieutenant General or a General generally led armies. |
In this section we provide basic details of a number of functions that arise in several separate topics in statistics. These include Bessel functions, the Exponential integral function, the Gamma and Beta functions, the Gompertz curve, Stirling's approximation for n! when n is large, and the Logistic function.
Bessel functions occur as the solution to specific differential equations. They are described with reference to a parameter known as the order, n, shown as a subscript. Bessel functions are widely used in engineering applications, but do arise in statistical analysis, particularly in the context of problems involving directional data in two or three dimensions (Bessel functions of the first kind). Bessel functions of the second kind do also arise in statistical analysis, but only rarely.
Mathematical software packages such as MATLAB and Mathematica provide support for a full range of Bessel functions, as do "R" and, perhaps surprisingly, Excel (usage requires the Analysis ToolPak add-in). In all cases the functions are of the form besselT(x,n) where T is the type of Bessel function, typically I or J, x is the point at which the function is evaluated, and n is the order parameter, as described in the section below.
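The same functions can also be evaluated in Python if the SciPy library is available (an additional option, not one of the packages mentioned above):

from scipy.special import jv, iv   # Bessel J_n(x) and modified Bessel I_n(x)

x = 2.5
for n in range(3):                 # orders 0, 1 and 2
    print(n, jv(n, x), iv(n, x))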
For integer orders Bessel functions can be represented as an infinite series. Order 0 and Order 1 expansions for standard Bessel functions of the first kind are shown below, together with the general expression in terms of the Gamma function. Graphs of Bessel functions of this type are similar to a damped sine wave, as shown in the diagram below.
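J0(x) = 1 − x^2/2^2 + x^4/(2^2·4^2) − x^6/(2^2·4^2·6^2) + ...

J1(x) = x/2 − x^3/(2^2·4) + x^5/(2^2·4^2·6) − ...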
and more generally, for all real n≥0 (not necessarily integer):
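Jn(x) = Σ_{k=0}^∞ [(−1)^k / (k! Γ(n+k+1))] (x/2)^(n+2k)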
Bessel function of the first kind, Jn(x). Graph of integer parameter values
The modified Bessel function of the first kind has a very similar expansion for real values of n, and is given by the general expression:
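In(x) = Σ_{k=0}^∞ [1 / (k! Γ(n+k+1))] (x/2)^(n+2k)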
Usage in statistical analysis arises in connection with the von Mises distribution, which is used in directional statistics, and Bessel functions are also used in connection with some forms of spline curve fitting. The graph of this modified form of the function does not oscillate, as the term involving (-1) in the previous expansion is omitted. Graphs of the function for the same set of parameters as above are provided below:
Modified Bessel function of the first kind, In(x). Graph of integer parameter values
The exponential integral function is one of a family of such functions, related to the Incomplete Gamma function, and is used in association with spline curve fitting. See the Mathworld website entry for more details. The integral for the case n=1 is defined as:
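E1(x) = ∫_1^∞ (e^(−xt)/t) dt,  x > 0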
and more generally as
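En(x) = ∫_1^∞ (e^(−xt)/t^n) dt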
The Gamma function, Γ, is a widely used definite integral function and the generalization of factorials to non-integer cases. The standard form of the integral, for real-valued x, is:
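Γ(x) = ∫_0^∞ t^(x−1) e^(−t) dt,  x > 0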
For integer values of x: Γ(x)=(x‑1)! and more generally Γ(x+1)=xΓ(x). From these results we have, for example, Γ(3/2)=Γ(1/2)/2=(√π)/2.
A graph of the Gamma function for a range of real x-values is shown below. The Gamma function, as opposed to the Gamma distribution, is not generally provided in integrated statistical packages, but this varies (it is available in SPSS, for example). For mathematical suites, like MATLAB and Mathematica, it is a standard function, but in all cases it is recommended that the natural logarithm of the function be evaluated for larger values of x, as overflow is a common problem. See the Mathworld website entry for more details.
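As a small illustration of the overflow point, a Python sketch using only the standard library's math module shows that the log form remains manageable where direct evaluation fails:

import math

x = 200.5
try:
    print(math.gamma(x))     # overflows: Gamma(200.5) exceeds the double-precision range
except OverflowError:
    print("overflow")
print(math.lgamma(x))        # ln Gamma(200.5), a modest value of around 860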
Gamma function, real values
It can be expressed as an integral over the interval [0,1] by the formula:
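Γ(x) = ∫_0^1 [ln(1/t)]^(x−1) dt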
The Gompertz curve is very similar to the Logistic curve in form (see below), in that it is a constrained S-shaped growth function, used in a number of growth models and time series applications. The function is defined as:
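y(x) = a·exp(b·e^(cx)), with b < 0 and c < 0 (one common parameterization)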
Taking natural logs, this may be written as:
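ln(y) = ln(a) + b·e^(cx)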
The parameter a is the upper limit or asymptote of the curve - in the chart shown below we have set a=1; parameters b and c control the form of the curve. The parameter b dictates where (between 0 and a) the curve crosses through 0 on the x-axis/time axis. With smaller values it crosses closer to a; the parameter c defines the shape of the curve - in the examples below, the red curve is for c=-1, and the cyan curve (almost a straight line) is for c=-0.1.
An alternative form of the Gompertz function, with essentially the same form, is:
The logistic function is a very simple, S-shaped curve, used originally to describe the growth of population over time under resource constraints. The function is widely used in statistics as the basis for the logit transform and in connection with logistic regression and the logistic probability density functions.
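In its standard form the function is: f(x) = 1/(1 + e^(−x))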
A graph of this function (the standard form) is shown below. A version of the logistic that includes shape parameters, similar to those of the Gompertz function, is:
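f(x) = a/(1 + b·e^(−cx)) (one common three-parameter form, with a the upper asymptote)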
Logistic curve, standard form
Stirling's formula provides a useful approximation for n! when n is large. The usual (most accurate) formulation for n>0 is:
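n! ≈ √(2πn)·(n/e)^n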
This formula is accurate to within 1% for n>7, with rapid convergence for larger n. Note that if natural logarithms are taken of both sides this formula reduces to:
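ln(n!) ≈ n·ln(n) − n + (1/2)·ln(2πn)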
hence a rough approximation to n! is:
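ln(n!) ≈ n·ln(n) − n, i.e. n! ≈ (n/e)^n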
This approximation was derived by Stirling following the work of De Moivre on the Normal approximation to the Binomial distribution, described elsewhere in this Handbook. De Moivre produced the formula:
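n! ≈ c·n^(n+1/2)·e^(−n), where c is a constant that Stirling subsequently showed to be √(2π)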
[ABR1] Abramowitz M, Stegun I A, eds.(1972) Handbook of Mathematical Functions With Formulas, Graphs, and Mathematical Tables. 10th printing, US National Bureau of Standards, Applied Mathematics Series - 55
Mathworld: Bessel and Modified Bessel function of the first kind: http://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html/ and
http://mathworld.wolfram.com/ModifiedBesselFunctionoftheFirstKind.html ; Exponential Integral: http://mathworld.wolfram.com/ExponentialIntegral.html ; Gamma function: |
(Biopsy-Lung, Closed Lung Biopsy, Transthoracic Needle Lung Biopsy, Percutaneous Needle Lung Biopsy, Transbronchial Lung Biopsy, Pulmonary Biopsy, Video-Assisted Thoracic Surgery, VATS)
What is a lung biopsy?
A biopsy is a procedure performed to remove tissue or cells from the body for examination under a microscope. A lung biopsy is a procedure in which samples of lung tissue are removed (with a special biopsy needle or during surgery) to determine if lung disease or cancer is present.
A lung biopsy may be performed using either a closed or an open method. Closed methods are performed through the skin or through the trachea (windpipe). An open biopsy is performed in the operating room under general anesthesia.
The various biopsy procedures include:
Needle biopsy - After a local anesthetic is given, the physician uses a needle that is guided through the chest wall into a suspicious area with computed tomography (CT or CAT scan) or fluoroscopy (a type of X-ray “movie”) to obtain a tissue sample. This type of biopsy may also be referred to as a “closed,” “transthoracic,” or “percutaneous” (through the skin) biopsy.
Transbronchial biopsy - This type of biopsy is performed through a fiberoptic bronchoscope (a long, thin tube that has a close-focusing telescope on the end for viewing) through the main airways of the lungs (bronchoscopy).
Thoracoscopic biopsy - After a general anesthetic is given, an endoscope is inserted through the chest wall into the chest cavity. Various types of biopsy tools can be inserted through the endoscope to obtain lung tissue for examination. This procedure may be referred to as video-assisted thoracic surgery (VATS) biopsy. In addition to obtaining tissue for biopsy, therapeutic procedures such as the removal of a nodule or other tissue lesion may be performed.
Open biopsy - After a general anesthetic is given, the physician makes an incision in the skin on the chest and surgically removes a piece of lung tissue. Depending on the results of the biopsy, more extensive surgery, such as the removal of a lung lobe may be performed during the procedure. An open biopsy is a surgical procedure and requires a hospital stay.
Other related procedures that may be used to help diagnose problems of the lungs and respiratory tract include chest X-ray, CT scan of the chest, magnetic resonance imaging (MRI), bronchoscopy, bronchography, chest fluoroscopy, chest ultrasound, lung scan, oximetry, mediastinoscopy, peak flow measurement, positron emission tomography (PET) scan, pulmonary function tests, pleural biopsy, pulmonary angiogram, sinus X-ray, and thoracentesis. Please see these procedures for additional information.
Anatomy of the respiratory system
The respiratory system is made up of the organs involved in the interchanges of gases, and consists of the:
The upper respiratory tract includes the:
Ethmoidal air cells
The lower respiratory tract includes the lungs, bronchi, and alveoli.
What are the functions of the lungs?
The lungs take in oxygen, which cells need to live and carry out their normal functions. The lungs also get rid of carbon dioxide, a waste product of the body's cells.
The lungs are a pair of cone-shaped organs made up of spongy, pinkish-gray tissue. They take up most of the space in the chest, or the thorax (the part of the body between the base of the neck and diaphragm).
The lungs are enveloped in a membrane called the pleura.
The lungs are separated from each other by the mediastinum, an area that contains the following:
The heart and its large vessels
The right lung has three sections, called lobes. The left lung has two lobes. When you breathe, the air enters the body through the nose or the mouth. It then travels down the throat through the larynx (voice box) and trachea (windpipe) and goes into the lungs through tubes called main-stem bronchi.
One main-stem bronchus leads to the right lung and one to the left lung. In the lungs, the main-stem bronchi divide into smaller bronchi and then into even smaller tubes called bronchioles. Bronchioles end in tiny air sacs called alveoli. |
A long-term, large-scale study by Ecosystems Center scientists of salt marsh landscapes in an undeveloped coastline section of the Plum Island Estuary in Massachusetts has shown that nutrients such as nitrogen and phosphorus can cause salt-marsh loss.
Center scientists Linda Deegan, David Johnson and Bruce Peterson and four other scientists are authors of an article that appeared in the journal Nature on October 18, showing results of their nine-year research study. Septic and sewer systems and lawn fertilizers are often the sources of the nutrients that are causing the disintegration.
“Salt marshes are a critical interface between the land and sea,” Deegan says. “They provide habitat for fish, birds, and shellfish; protect coastal cities from storms; and they take nutrients out of the water coming from upland areas, which protects coastal bays from over-pollution.” Losses of healthy salt marsh have accelerated in recent decades, with some losses caused by sea-level rise and development.
“This is the first study to show that nutrient enrichment can be a driver of salt-marsh loss, as well,” says Johnson, a member of the team since the project began in 2003.
This conclusion surprised the scientists, who added nitrogen and phosphorus to the tidal water flushing through the marsh’s creeks at levels typical of nutrient enrichment in densely developed areas, such as Cape Cod and Long Island.
A few years after the experiment began, wide cracks began forming in the grassy banks of the tidal creeks, which eventually slumped down and collapsed into the muddy creek. “The long-term effect is conversion of a vegetated marsh into a mudflat, which is a much less productive ecosystem and does not provide the same benefits to humans or habitat for fish and wildlife,” Deegan says.
Until this study, it seemed that salt marshes had unlimited capacity for nutrient removal, with no harmful effects on the marshes themselves. “Now we really understand that there are limits to what salt marshes can do,” Deegan says. “And in many places along the Eastern seaboard—such as Jamaica Bay in New York, where marshes have been falling apart for years—we have exceeded those limits.”
The disintegration of the nutrient-enriched marsh in this study happened in several stages, the scientists report. In the first few years, the nutrients caused the marsh grass (primarily cordgrass Spartina spp.) along the creek edges to get greener and grow taller, “just like when you add fertilizer to your garden,” Deegan says. This taller grass also, however, produced fewer roots and rhizomes, which normally help stabilize the edge of the marsh creek. The added nutrients also boosted microbial decomposition of leaves, stems, and other biomass in the marsh peat, which further destabilized the creek banks. Eventually, the poorly rooted grass grew too tall and fell over, where the twice-daily tides tugged and pulled it. The weakened creek bank then cracked and fell into the creek.
By year six of the experiment, the scientists started seeing impacts at higher marsh elevations, above the lower creek banks. Three times more cracks, and bigger cracks, emerged at the top of the banks parallel to the creeks, than in a control marsh where no nutrients were added. Eventually, parts of the higher marsh also broke off and slid down toward the creek (which the scientists call the ‘toupee effect,’ because it leaves behind patches of bare, unvegetated mud). All told, at least 2.5 times more chunks of marsh fell into the creeks in the nutrient-enriched marsh than in the control system.
“We honestly did not anticipate the changes we measured,” says Deegan. “Based on prior small-scale experiments, we predicted nutrient enrichment would cause the marsh grass to grow better and remain stable. But when we allowed different parts of the ecosystem to interact with the nitrogen enrichment over time, the small process changes we saw in the first few years resulted in the creek banks later falling apart. This could not have been extrapolated from the smaller-scale, shorter term studies.”
Nutrient enrichment of coastal areas is known to cause harmful algae blooms, which create low-oxygen conditions that kill off marine life. “Now we understand that nutrient enrichment also causes a very important loss of salt marsh habitat for fish and shellfish,” Deegan says. “This is one more reason why we need better treatment of household waste in our towns and cities.” Individuals can help by not using fertilizers on their lawns and gardens. “If you have a green lawn because you are fertilizing it, you are contributing to loss of salt marshes and ultimately of fish,” Deegan says.
This study could not have been accomplished without the cooperation and fore-sightedness of officials from the towns of Ipswich, Mass., and Rowley, Mass., and the Essex County Green Belt Association, the scientists say.
“They recognized the importance of the work,” Johnson says. “They understood that our work would not affect the much larger Plum Island Estuary, since the area manipulated was small relative to the large area of the sound and the marsh is able to process a lot of the nutrients before they get anywhere near the sound. They realized that whatever we discovered would help their towns, and society in general, make better decisions about treating the excessive nutrient enrichment of our coast.”
This study is part of the Plum Island Ecosystem Long-Term Ecological Research (PIE-LTER) program, supported by the National Science Foundation (NSF). The PIE-LTER conducts basic science and provides information to coastal managers to help them make more informed decisions.
"This is a landmark study addressing the drivers of change in productive salt marsh ecosystems, and a stellar example of the value of supporting LTER sites," says David Garrison, program director in NSF's Division of Ocean Sciences, which supports the LTER program along with NSF's Division of Environmental Biology.
In the next phase of research, the scientists will study the recovery of the nutrient-enriched marsh. “After we stop adding the nitrogen, how long does it take the system to rebound to its natural state?” Deegan asks. This information will be important in reclaiming the health of salt marshes that are currently suffering from nutrient enrichment.
In addition to Deegan, Johnson, and Bruce J. Peterson of the MBL, co-authors of this study in Nature include: R. Scott Warren of Connecticut College; John W. Fleeger of Louisiana State University; Sergio Fagherazzi of Boston University; and Wilfred M. Wollheim of the University of New Hampshire.
Deegan LA, Johnson DS, Warren RS, Peterson BJ, Fleeger JW, Fagherazzi S, and Wollheim WM (18 Oct 2012) “Coastal Eutrophication as a Driver of Salt Marsh Loss” Nature. |
At this point you should know how to do computer controlled measurements. However, the computer you use is probably connected to a network, and that connection allows for some interesting possibilities. In particular, you can take measurements and do control remotely. However, there are a few topics you should be conversant with before you try that.
Why learn about Basic Network Concepts?
Using computer measurement and control across a network allows for possibilities of operation in remote or otherwise inaccessible locations, and it allows for measurement and control of multiple locations from a single location. To take advantage of those possibilities, you need to have a basic familiarity with networked computers. You need to learn about basic concepts of network addressing and how to determine addresses. When you are finished with this unit you should be able to use a program (LabVIEW) to perform measurements and control across a network and you will learn about URLs, and IP addresses, and how to determine them. In addition, you will learn about some basic network concepts (servers, etc.)
Some Basic Network Concepts
Let’s start with what happens when you “go to” www.SomeCompany.com to get information about their products. Actually, you don’t go anywhere, but you do send some information across the network. The information you send does the following.
First, the URL (www.SomeCompany.com) gets sent over the network to a special computer – a name server – that translates this URL into an address of the form www.xxx.yyy.zzz, where each group is a number. This all-numeric form is the numeric IP address.
Next, computers on the network (routers, etc.) try to send the message along so that it will get to the right computer – i.e. the one that has the IP address you are sending to. (Routers route messages along the network; that’s why they are called routers.) When the message gets to the correct computer – the server, wherever in the world it might be – the server sends the file – often written in HTML – back to your computer – the client.
This is an over-simplified picture of what goes on, but it contains all the basic ideas about what happens. There are several points that you should note in this sequence of events.
- The message you – the client – send to the server has to have the server’s address; otherwise the message will not get to the correct server.
- The message you send to the server must also contain the address of your computer; otherwise the information that the server sends out on the network will not make it back to you.
- The message you send to the server will also include a command. The command to get an HTML file is GET. When you are in a browser and you send a request for a file, you send a GET command along with the name of the file you want to GET.
- The message you send to the server may not go out as a single message. It may be broken into packets, and each packet needs to contain enough information that the complete request can be reassembled by the server.
- The information sent back by the server may not arrive as a single message. It will probably be broken into packets, and each packet needs to contain enough information to permit your computer to reassemble the complete file/set of information sent by the server.
- When packets are sent over the network, there are no guarantees that they will arrive in the correct order, and computers on either end – both the client and the server – have to have the capability of reassembling all of the information. In the case of the client, you will often want that information displayed as a web page.
That is a short summary of what takes place in a typical client-server situation.
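As a concrete illustration, here is a minimal Python sketch of the client side of that exchange, using the placeholder host name www.SomeCompany.com from above (the file name /index.html is assumed):

import http.client

# Open a connection to the server and issue the GET command for a file
conn = http.client.HTTPConnection("www.SomeCompany.com", 80)
conn.request("GET", "/index.html")       # the GET command plus the name of the file we want
response = conn.getresponse()
print(response.status, response.reason)  # e.g. 200 OK if the server found the file
html = response.read()                   # the complete (reassembled) HTML file
conn.close()

Behind the scenes the library adds your computer's address, and the operating system breaks the request and the reply into packets and reassembles them, exactly as described above.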
Now, you can examine a simulator that shows how a web page is loaded. Click here to get to the simulator.
Domain Name Servers: When you send a request for a web page to a URL (www.SomeCompany.com, for example), that information gets translated into an IP address (www.xxx.yyy.zzz) by a Domain Name Server (DNS). The DNS system has a vast database that contains all of the URL-IP pairs. It changes constantly, and it is probably the most highly accessed database on the planet. |
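The name-server lookup itself can be tried directly; a short Python sketch, again using the placeholder host name:

import socket

# Ask the DNS system for the numeric IP address behind a host name
ip_address = socket.gethostbyname("www.SomeCompany.com")
print(ip_address)                        # a numeric address of the form www.xxx.yyy.zzz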
A fisher, sometimes referred to as a fisher cat, is a large, carnivorous mammal found in North America. The mostly arboreal animal is characterized by a long body, dark fur and a long, bushy tail. Fishers are slender and agile and are often found in coniferous or mixed forests.
Fishers are related to minks, ferrets, wolverines and badgers, to name a few. Found across the northern hemisphere, they usually weigh between 4 and 15 pounds during adulthood and grow between 29 and 47 inches in length, including the tail. Their dark fur does not change color with the seasons, although some fishers have a patch of cream-colored fur on their chests. With five toes on each foot, each with retractable claws, fishers are adept at grasping limbs and climbing trees.
Known to avoid open spaces, the fisher is also a solitary hunter. Its prey consists mainly of hares, rabbits, squirrels and mice, although they are known to sometimes go after domesticated animals. Over the last 200 years, fisher populations have often declined, mostly because of trapping and loss of habitat. However, recently they have been reintroduced into a number of states, including Pennsylvania and West Virginia, where they also help with controlling the porcupine population. |
Secondary teachers and their students will further explore permafrost, climate change and scientific processes used by scientists in six online interactive lessons. Lessons simultaneously deliver new scientific content and act as tutorials to train teachers to use free classroom applications, Google Earth, ImageJ, and NASA’s innovative GIOVANNI, to graphically visualize climate data and measure the effects of climate change.
Lessons include videos, photographs, diagrams, technology tutorials and clear step-by-step instructions. Each lesson provides a context for why studying each topic is important. Lesson resources include National and Alaska State Standards, additional information, links and lesson references. Lessons were pilot and field tested by teachers and reviewed by permafrost researchers.
Lessons allow learners to mimic how scientists conduct climate research.
Lesson 1 - Permafrost in the Arctic
In this lesson observe the location of permafrost.
Lesson 2 - Temperature Models and Ice Cellars
In this lesson use a NASA Goddard Earth Sciences and Information Data Center tool to examine thawing ice cellars by modeling soil temperature.
Lesson 3 - Graphing Long Term Soil Temperature Change
In this lesson graph long-term temperature changes in village communities in Alaska.
Lesson 4 - Graphing Albedo and Temperature Data
In this lesson create graphs depicting temperature change due to changing albedo.
Lesson 5 - Measuring Changing Lakes
In this lesson measure lake extent using time-series aerial photography.
Lesson 6 - Is Alaska’s Coast Disappearing?
In this lesson study changes in landscape by comparing a time-series of NASA satellite images. |
College Algebra & Trigonometry
posted by Lira.
A weight is attached to an elastic spring that is suspended from a ceiling. If the weight is pulled 1 inch below its rest position and released, its displacement in inches after t seconds is given by x(t)=2cos(5πt+π/3). Find the first two times for which the displacement is 1.5 inches.
what's the problem? You have the formula:
x(t) = 2cos(5pi t + pi/3)
2cos(5pi t + pi/3) = 3/2
cos(5pi t + pi/3) = 3/4
cos .722 = .75
5pi t + pi/3 = .722 or 2pi-.722=5.560
t = (.722-pi/3)/5pi = -.021 (negative, so not yet a valid time)
t = (5.560-pi/3)/5pi = .287
You need to find the next occurrence after 0.287 by noting that the period is 0.4 |
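A quick numerical check of the two times (a Python sketch, not part of the original working):

import numpy as np

a = np.arccos(0.75)                        # ≈ 0.7227 radians
t1 = (2*np.pi - a - np.pi/3) / (5*np.pi)   # ≈ 0.287 s
t2 = (a - np.pi/3) / (5*np.pi) + 0.4       # ≈ 0.379 s (the negative root shifted by one period)
print(round(t1, 3), round(t2, 3))
print(2*np.cos(5*np.pi*t1 + np.pi/3))      # ≈ 1.5
print(2*np.cos(5*np.pi*t2 + np.pi/3))      # ≈ 1.5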
Orange-fleshed sweet potatoes may be one of nature's unsurpassed sources of beta-carotene. Several recent studies have shown the superior ability of sweet potatoes to raise our blood levels of vitamin A. This benefit may be particularly true for children. In several studies from Africa, sweet potatoes were found to contain between 100 and 1,600 micrograms (RAE) of vitamin A in every 3.5 ounces - enough, on average, to meet 35% of all vitamin A needs, and in many cases enough to meet over 90% of vitamin A needs (from this single food alone).
Sweet potatoes contain high levels of antioxidant nutrients, anti-inflammatory nutrients, and blood sugar-regulating nutrients. They are packed with vitamins A (in the form of beta-carotene), B6 (pyridoxine), and C. They have plenty of manganese, copper, potassium, iron and dietary fiber, together with complex carbohydrates. But sweet potatoes are low in calories and fat-free.
Despite their name, sweet potatoes help to stabilize blood sugar levels and to lower insulin resistance. Diabetics should eat more sweet potatoes.
Sweet potatoes are not always orange-fleshed on the inside but can also be a spectacular purple color. Sometimes it's impossible to tell from the skin of a sweet potato just how rich in purple tones its inside will be. That's because scientists have now identified the exact genes in sweet potatoes (IbMYB1 and IbMYB2) that get activated to produce the purple anthocyanin pigments responsible for the rich purple tones of the flesh. The purple-fleshed sweet potato anthocyanins (primarily peonidins and cyanidins) have important antioxidant properties and anti-inflammatory properties. Particularly when passing through our digestive tract, they may be able to lower the potential health risk posed by heavy metals and oxygen radicals.
Yet beta-carotene only begins to tell the story of sweet potato antioxidants. Particularly in purple-fleshed sweet potato, antioxidant anthocyanin pigments are abundant. Cyanidins and peonidins are concentrated in the starchy core portion of purple-fleshed sweet potatoes, and these antioxidant nutrients may be even more concentrated in the flesh than in the skin. That's because sweet potatoes have genes (IbMYB1 and IbMYB2) that are specialized for the production of anthocyanin pigments in the fleshy part of the tuber. Ordinarily, we have to rely on the skins of foods for this same level of anthocyanin antioxidants. But not in the case of sweet potatoes! Extracts from the highly pigmented and colorful purple-fleshed and purple-skinned sweet potatoes have been shown in research studies to increase the activity of two key antioxidant enzymes: copper/zinc superoxide dismutase (Cu/Zn-SOD) and catalase (CAT).
Recent research has shown that particularly when passing through our digestive tract, sweet potato cyanidins and peonidins and other color-related phytonutrients may be able to lower the potential health risk posed by heavy metals and oxygen radicals. That risk reduction is important not only for individuals at risk of digestive tract problems like irritable bowel syndrome or ulcerative colitis but for all persons wanting to reduce the potential risk posed by heavy metal residues (like mercury or cadmium or arsenic) in their diet.
Storage proteins in sweet potato also have important antioxidant properties. These storage proteins, called sporamins, get produced by sweet potato plants whenever the plants are subjected to physical damage. Their ability to help the plants heal from this damage is significantly related to their role as antioxidants. Especially when sweet potato is being digested inside of our gastrointestinal tract, we may get some of these same antioxidant benefits.
Anti-Inflammatory Nutrients in Sweet Potato
Anthocyanin and other color-related pigments in sweet potato are equally valuable for their anti-inflammatory health benefits. In the case of inflammation, scientists understand even more about the amazing properties of this tuber. In animal studies, activation of nuclear factor-kappa B (NF-κB); activation of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2); and formation of malondialdehyde (MDA) have all been shown to be reduced following consumption of either sweet potato or its color-containing extracts. Since each of these events can play a key role in the development of unwanted inflammation, their reduction by sweet potato phytonutrients marks a clear role for this food in inflammation-related health problems. In animal studies, reduced inflammation following sweet potato consumption has been shown in brain tissue and nerve tissue throughout the body.
What's equally fascinating about color-related sweet potato phytonutrients is their impact on fibrinogen. Fibrinogen is one of the key glycoproteins in the body that is required for successful blood clotting. With the help of a coagulation factor called thrombin, fibrinogen gets converted into fibrin during the blood clotting process. Balanced amounts of fibrinogen, thrombin and fibrin are a key part of the body's health and its ability to close off wounds and stop loss of blood. However, excess amounts of these clotting-related molecules may sometimes pose a health risk. For example, excess presence of fibrinogen and fibrin can trigger unwanted secretion of pro-inflammatory molecules (including cytokines and chemokines). In animal studies, too much fibrin in the central nervous system has been associated with breakdown of the myelin sheath that surrounds the nerves and allows them to conduct electrical signals properly. If fibrin excess can trigger unwanted inflammation in nerve tissue and increase breakdown of the myelin wrapping the nerve cells (a process that is usually referred to as demyelination), health problems like multiple sclerosis (in which there is breakdown of the myelin nerve sheath) may be lessened through reduction of excess fibrinogen and/or fibrin. In preliminary animal studies, intake of sweet potato color extracts has been shown to accomplish exactly those results: reduction of inflammation, and simultaneous reduction of fibrinogen levels. We look forward to exciting new research in this area of sweet potato's anti-inflammatory benefits.
Recent research has shown that extracts from sweet potatoes can significantly increase blood levels of adiponectin in persons with type 2 diabetes. Adiponectin is a protein hormone produced by our fat cells, and it serves as an important modifier of insulin metabolism. Persons with poorly regulated insulin metabolism and insulin insensitivity tend to have lower levels of adiponectin, and persons with healthier insulin metabolism tend to have higher levels. While more research on much larger groups of individuals is needed to further evaluate and confirm these blood sugar regulating benefits, this area of health research is an especially exciting one for anyone who loves sweet potatoes.
Take note of the purple-fleshed sweet potatoes. These purple sweet potatoes are purple in color due to the presence of a powerful antioxidant called anthocyanin.
Antioxidants are present in fruits and vegetables, and they help prevent diseases relating to cardiovascular problems and cancer. They also strengthen the immune system, are anti-inflammatory, and keep bones and skin healthy. The most powerful antioxidants are called phytochemicals, and two of the most potent of these chemical compounds are beta-carotene and anthocyanin. Anthocyanins are flavonoid compounds which produce the purplish pigmentation in purple sweet potatoes.
Two types of anthocyanin, called cyanidin and peonidin, are powerful antioxidants which slow down the growth of cancerous cells and are used to treat colon cancer. Research has shown that cyanidins and peonidins, when passing through the digestive tract, may be able to reduce damage caused by heavy metals and oxygen radicals.
Sweet potatoes have storage proteins called sporamins which help the potatoes heal their damaged parts. These are also antioxidants which are beneficial to our gastrointestinal tract. Another lesser-known nutrient group in sweet potatoes is the resin glycosides, which have antibacterial and antifungal properties.
How to eat sweet potatoes?
The healthiest way to eat sweet potatoes is to steam them whole with the skin intact. They should be ready for consumption within 7 minutes of steaming. As the skin also contains rich nutrients, you can eat it along with the flesh. If you don't want to eat the skin, it can be easily peeled off after the sweet potatoes are cooked.
By the way, the leaves of the sweet potatoes are also edible. They are nutritious and delicious. We usually stir fry them with dried prawns and chillies. |
Chronic renal failure (CRF), now called chronic kidney disease (CKD), is the gradual loss of the kidneys' ability to filter waste and fluids from the blood. Chronic kidney disease can range from mild dysfunction to severe kidney failure (Figure 01). The kidneys serve as the body's natural filtration system, removing waste products and fluids from the bloodstream and excreting them in the urine. The kidneys also maintain the body's salt and water balance, which is important for regulating blood pressure. When the kidneys are damaged by disease or inherited disorders, they no longer function properly and lose their ability to remove fluids and waste from the bloodstream. Fluid and waste products building up in the body can cause many complications. Most systems in the body, including the respiratory, circulatory, and digestive systems, are adversely affected by chronic kidney disease.
Figure 01. Anatomy of the renal system
Kidney disease can exist without symptoms for many years. Renal failure progresses so gradually that CKD may not be detected until the kidneys are functioning at less than 25% of their normal capacity.
CKD occurs in 1 of every 5,000 people. Chronic kidney disease usually occurs in middle-aged and older people, although children and pregnant women are also susceptible. Chronic kidney disease can lead to total kidney failure, also known as end-stage renal disease (ESRD). People with ESRD require either dialysis or a kidney transplant. If not properly managed, ESRD is fatal.
Underlying disease is usually responsible for CKD (Table 01). Diseases leading to kidney damage may be confined to the kidney, as in kidney infections, or may affect multiple organs, as in hypertension or diabetes. Approximately 40% of CKD patients have the disease as a result of diabetes, 30% have it as a result of hypertension, and 10% have it as a result of a disease called glomerulonephritis. Glomerulonephritis is a kidney disease that causes decreased output of urine, the spilling of blood and protein into the urine, and body swelling.
Diabetes mellitus is the most common cause of CKD. Diabetes, a disease that disrupts the way the body uses blood sugar (glucose), can lead to kidney damage and CKD. The high levels of sugar damage the kidneys over several years and result in a reduced ability to filter blood and excrete waste products in the urine.
High blood pressure that is ignored or inadequately treated for many years can lead to CKD (Figure 02). Hypertension, or high blood pressure, is a disorder that leads to damage of small blood vessels. When small blood vessels in the kidneys that filter the blood are damaged, kidney failure results. For this reason, it is important to keep blood pressure under control with medications, if necessary.
Figure 02. Blood pressure categories
CKD can result from a chronic kidney disease called glomerulonephritis, or from kidney infections. Glomerulonephritis may cause a small output of urine, the spilling of blood and protein into the urine, and body swelling. Glomerulonephritis may have no symptoms for many years, but may eventually cause enough damage to the kidneys to lead to CKD. Long-term or repeated kidney infections can also damage the structure of the kidneys, reducing the kidney's capacity to filter blood.
Kidney stones and other blockages can lead to CKD. Any obstruction in the natural flow of urine causes a back-flow of pressure in the kidney, which can damage the kidney's functional units, the nephrons. Nephrons are tiny tubular structures in the kidney that filter the blood. Each kidney has millions of nephrons. This damage can occur slowly over several years, and can ultimately lead to CRF.
Over-the-counter and prescription medications can contribute to CKD. Several drugs cause damage to the kidneys, including over-the-counter pain medications and certain very powerful antibiotics. If taken regularly over long periods, these medications act like poisons to the kidneys. People with even mild kidney disease must be very careful about the prescription drugs and non-prescription drugs they use. If you have known kidney disease, you should discuss all medication usage with your doctor.
Other diseases and conditions may lead to CKD as part of their natural progression. These include Alport syndrome, which is a rare kidney disease that causes kidney failure and hearing loss; lupus erythematosus; connective tissue diseases; kidney cancer; liver disease (cirrhosis); polycystic kidney disease; and abnormalities present at or before birth (congenital abnormalities).
Table 1. Causes of Chronic Kidney Disease
Hypertension
Diabetes
Glomerulonephritis
Chronic kidney infections
Obstruction of the urinary path (kidney stones)
Medications
Inherited kidney diseases
Other medical conditions (lupus, cirrhosis)
Drought conditions that have gripped many parts of the country could increase the potential for rising nitrate levels in forages.
That means producers need to take extra care to test corn they feed to their livestock to ensure that nitrate levels aren't high enough to sicken or kill the animals, said Bruce Clevenger, an Ohio State University Extension educator.
Drought stress increases nitrate in forages because plants are unable to go through normal photosynthesis, Clevenger said. Under normal growing conditions, nitrate is quickly converted to nitrite, then to ammonia, and finally into plant proteins and other compounds. But when plant growth is slowed or stopped, nitrate can accumulate in the plant. Samples testing less than 0.44 percent nitrate on a dry basis are considered safe to feed.
Producers who find elevated nitrate levels in their fields may still be able to use the corn for feed, Clevenger said. Hay, straw, corn silage with lower nitrate levels, and byproducts can be used to dilute the feed so nitrate levels are below the toxic level in the livestock feed ration.
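As a rough sketch of the arithmetic behind such dilution (all values below are hypothetical examples, not feed recommendations; actual rations should be based on laboratory tests and a nutritionist's advice), the nitrate level of a blended ration is simply the weight-averaged nitrate of its ingredients:

# Hypothetical blending example: values are illustrative only.
corn_lbs, corn_nitrate = 40, 0.80   # lbs of dry matter, % nitrate (dry basis)
hay_lbs, hay_nitrate = 60, 0.10     # lbs of dry matter, % nitrate (dry basis)

total_lbs = corn_lbs + hay_lbs
ration_nitrate = (corn_lbs * corn_nitrate + hay_lbs * hay_nitrate) / total_lbs

print(f"Blended ration: {ration_nitrate:.2f}% nitrate (dry basis)")
print("Below 0.44% guideline" if ration_nitrate < 0.44 else "Above 0.44% guideline")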
Reprinted in part from Farm and Dairy |
A bird's droppings reflect its state of health. Therefore, it is a good idea to pay close attention to them. A bird's digestive, urinary and reproductive tracts empty into a common receptacle called the cloaca, and the products from them are expelled through the vent, which is the opening at the bird's "south end". A normal dropping may contain excretory products from the intestinal tract, urinary tract or both. The fecal (stool) portion of the dropping should be green or brown. The color is influenced by the bird's diet. Normal droppings are formed into a coil, reflecting the size and diameter of the intestine. Along with the fecal portion is a variable amount of uric acid or urate ("whitewash") and urine ("water"). The urates are usually in a blob mixed in with the feces and should be white or beige.
The urine portion soaks the papers on the cage bottom for a variable distance beyond the perimeter of the dropping. It is important to regularly observe the amount of urine being excreted in the droppings. For this reason, such material as crushed corn cobs or almond shells should not be used on the cage bottom. It is impossible to evaluate each dropping when these materials cover the cage bottom. They also tend to promote rapid growth of disease-causing fungi on the cage bottom, especially when wet with urine or water. Newspaper or paper towels are preferable. Smaller caged birds tend to have an individual blob of fecal material with an accompanying amount of urate. The amount of urine excreted is usually quite small.
Bird Diarrhea: A bird has diarrhea when the fecal portion of the dropping lacks form ("pea soup"). Diarrhea is not very common in birds. A dropping with a normal fecal portion but a large amount of urine around it represents a watery dropping, not diarrhea! All diarrheic droppings appear loose, but not all loose or watery droppings constitute diarrhea. This is a very important distinction. Polyuric droppings may indicate disease (diabetes or kidney disease), but more often they result from increased water consumption or consumption of large amounts of fleshy fruits and vegetables. The color, consistency and amount of each component of the droppings of normal caged birds frequently change, depending on the type of food consumed, amount of water consumed, amount of stress experienced, mood changes and other factors. Abnormal droppings typically remain abnormal in appearance during the entire course of a bird's illness. |
Meningitis is an inflammation of the membranes (meninges) surrounding your brain and spinal cord. There are various causes of meningitis, most commonly bacterial and viral infections and, rarely, fungal infection. Bacteria that enter the bloodstream and travel to the brain and spinal cord cause acute bacterial meningitis. But it can also occur when bacteria directly invade the meninges. This may be caused by an ear or sinus infection, a skull fracture or, rarely, some surgeries.
Viral meningitis is usually mild and often clears on its own. Most cases are caused by enteroviruses, though others such as HIV, herpes simplex and mumps may be implicated. Fungal meningitis is relatively uncommon and causes chronic meningitis. Cryptococcal meningitis is the most common type of fungal meningitis, affecting immunocompromised individuals, especially people with HIV/AIDS. It is life-threatening if not treated with antifungals.
Risk factors for meningitis include a compromised immune system, as in AIDS, diabetes or alcoholism. Having your spleen removed also increases your risk of getting meningitis. Extremes of age play a role in your risk of developing meningitis as well: the very young and the elderly are at higher risk. Living in dormitories, on military bases or in boarding school facilities is associated with an increased risk of meningococcal meningitis. Signs and symptoms of meningitis vary between the paediatric and adult populations. Adults often present with sudden fever, stiff neck and severe headache; nausea and vomiting may occur as well, along with confusion, difficulty concentrating and sensitivity to light (photophobia). Newborns and infants present with high fever, constant crying, excessive sleepiness or irritability, inactivity or sluggishness, poor feeding, a bulge in the soft spot of the baby's head (fontanel), and stiffness in the baby's body and neck.
Meningitis complications can be severe. The longer you or your child has the disease without treatment, the greater the risk of seizures and permanent neurological damage, including hearing loss, memory difficulty, learning disabilities, brain damage and eventually death.
These steps can help prevent meningitis. Careful hand-washing helps prevent the spread of the germs. Maintain your immune system by getting enough rest, exercising regularly and eating a healthy diet with plenty of fresh fruits, vegetables and whole grains. Some forms of bacterial meningitis are preventable with vaccinations. Your family doctor or paediatrician can diagnose meningitis based on medical history, a physical exam and certain diagnostic tests. The following tests are useful in diagnosing meningitis: lumbar puncture, in which a spinal tap is done to collect cerebrospinal fluid (CSF) for further analysis; blood culture, which is often done to grow microorganisms, particularly bacteria; and computerized tomography (CT) or magnetic resonance imaging (MRI) scans of the head, which may show swelling or inflammation.
Seek immediate medical care if you or someone in your family has symptoms such as fever, severe unrelenting headache, confusion, vomiting or a stiff neck. The treatment depends on the type of meningitis. Acute bacterial meningitis must be treated immediately with intravenous antibiotics and sometimes corticosteroids. This helps to ensure recovery and reduce the risk of brain swelling and seizures. Viral meningitis often improves on its own in several weeks; bed rest, plenty of fluids and pain medication are often sufficient in mild cases. Antifungal medication treats fungal meningitis.
Dr. Makemba Shayela Nelson – MBChB – University of Kwazulu-Natal, Durban, South Africa. Nesha Medical Practice. |
Graphics are mostly created on the 2D plane, but in some cases, we need 3D graphs. In this article, we will look at how to create 3D graphs with Python matplotlib.
Those who are already familiar with data visualization will easily understand the structure and logic of 3D graphs, but if you don’t have a background, read this article.
Introduction to 3D Graphs
3D graphics exist to fill in where 2D graphics are not enough, and all kinds of graphs available in matplotlib also have 3D versions.
3D graphics may seem more complex at first than 2D graphics, but once you learn the logic, the two types work in exactly the same way; the only difference is the extra height dimension (the z-axis).
Let's examine the graph above. The variables in the graph are time period, average temperature, and season. We cannot use a 2D graph here; if we did, one variable would be left idle.
In cases where we have three such variables and want to visualize one of them as height, 3D graphics are used. Now let's get to practice.
Creating 3D Figure
To visualize the data, we need to create a 3D container. We follow the same steps as for creating a 2D figure; the only difference is specifying "3d" for the projection parameter.
import matplotlib.pyplot as plt

ax = plt.axes(projection="3d")
Perspective can be a problem for beginners. To prevent this, let's examine how a single piece of data sits on the graph before moving on to larger data.
fig = plt.figure(figsize=(4, 4))
ax = plt.axes(projection="3d")

# Plot the single point (1, 3, 2) on the figure
ax.scatter(1, 3, 2)
A 2D graph has only x- and y-axes, so we have horizontal and vertical positions; here we also have a z-axis, so we can assign a value to it separately and place the point accordingly.
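If the default viewing angle makes it hard to judge where the point sits, matplotlib also lets you rotate the camera; the elevation and azimuth values below are just illustrative choices, not required settings.

# Tilt the camera 30 degrees above the xy-plane and rotate it 45 degrees around the z-axis
ax.view_init(elev=30, azim=45)
plt.show()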
3D Line Graphs
We have learned the basics of 3-dimensional graphics; now we'll look at the graph types in 3D. We will examine line, bar, and finally scatter charts.
# Required library
import numpy as np

# Resizing the figure
fig = plt.figure(figsize=(8, 6))

# Creating the data
x = np.linspace(0, 20, 100)
y = np.sin(x)
z = np.cos(x)

# 3D axes
ax = plt.axes(projection="3d")

# 3D line plot
ax.plot3D(x, y, z)
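As a small optional follow-up (the label text here is arbitrary), you can name the axes and display the line plot in the usual way:

ax.set_xlabel("x")
ax.set_ylabel("sin(x)")
ax.set_zlabel("cos(x)")
plt.show()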
3D Bar Graphs
3D graphs are often created as bar graphs; data with three variables are frequently well visualized with bar graphs.
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = plt.axes(projection="3d")

x = range(10)
y = [5, 4, 1, 2, 6, 8, 7, 10, 9, 4]
z = np.zeros(10)

dx = np.ones(10)
dy = np.ones(10)
dz = range(10)

ax.bar3d(x, y, z, dx, dy, dz)

ax.set_xlabel("x axis")
ax.set_ylabel("y axis")
ax.set_zlabel("z axis")
We'll spend some time on this subject because learning concepts such as delta x, delta y, and delta z is the real trick here. Let's start by examining the graph from different perspectives and experimenting with the code.
Fully Understanding 3D Bar Graphs
In this sub-title, I will try to explain exactly how 3D bar graphs work. After this chapter, I hope you will fully understand 3D bar graphs.
Delta x, delta y, and delta z are the amounts each bar extends along the corresponding axis. We said that the bars should not grow along the x and y axes, but we wanted them to increase one by one along the z-axis; if we rotate the perspective around the x and y directions, we can see the increase along the z-axis.
While no increase is observed along the x and y axes, the bars increase one by one along the z-axis. Now let's explain the x, y, and z values.
The x and y values already vary on the graph, as can be seen easily in the first plot. But what happens when you change the z values? Let's see with the graph below.
# The graph uses exactly the same code as above; only the z-axis data has changed,
# with the last value increased by 2.
z = [1, 1, 1, 1, 1, 1, 1, 1, 1, 3]
As you can see, the last bar now starts from 3 instead of 0. As you change the data on the z-axis, each bar's starting height is affected accordingly.
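To pull the pieces together, here is a minimal annotated sketch of how the six bar3d arguments relate; the data values are arbitrary examples rather than anything taken from the graphs above.

import matplotlib.pyplot as plt

ax = plt.axes(projection="3d")

# (x, y, z) is the corner where each bar starts;
# (dx, dy, dz) is how far each bar extends along each axis.
x = [0, 1, 2]         # bar positions along x
y = [0, 0, 0]         # bar positions along y
z = [0, 0, 2]         # base heights: the third bar starts at z = 2
dx = [0.5, 0.5, 0.5]  # bar widths (x direction)
dy = [0.5, 0.5, 0.5]  # bar depths (y direction)
dz = [3, 5, 4]        # bar heights (z direction)

ax.bar3d(x, y, z, dx, dy, dz)
plt.show()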
I hope I have explained the 3D bar graph logic well; now let's move on to 3D scatter graphs.
3D Scatter Graphs
3D scatter plots work exactly like 2D ones; they don't require any different parameters. Just specify the x, y, and z values.
# For generating random numbers
import random as rm
import matplotlib.pyplot as plt

ax = plt.axes(projection="3d")

# Draw 99 random numbers between 0 and 99 for each axis
x = rm.sample(range(0, 100), 99)
y = rm.sample(range(0, 100), 99)
z = rm.sample(range(0, 100), 99)

ax.scatter(x, y, z)
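If you want an extra visual cue on top of position, scatter also accepts color arguments; the sketch below (the colormap name is an arbitrary choice) colors each point by its z value.

ax.scatter(x, y, z, c=z, cmap="viridis")
plt.show()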
As you can see, compared with a 2D scatter plot the data are simply given a height as well; the structure is otherwise the same. Thanks a lot for reading! |
There are a number of perennial problems in the study of emotions - causing recurrent discussion, divergent theories, and stimulating considerable research. In many respects, these problems define the psychology of emotion. Today I will discuss these problems as follows:
- How we define the task of the psychology of emotions;
- How we define an emotion;
- How we distinguish different emotions and the elicitors of emotion;
- Defining the boundaries of emotion;
- The relationship between emotion and motivation;
- The nature-nurture debate and emotion;
- The relation between emotion and reason; and
- The functions of emotion.
- Defining the Field of Emotion Study:
What is an emotion? Ultimately, certain phenomena have imposed themselves, requiring a designation and an explanation in this respect. These include feelings, shifts in the control of behavior and thought, involuntary and impulsive behaviors, the emergence and tenacity of beliefs, changes in the individual's relationship with the environment, and physiological changes not caused by physical conditions.
Feeling is a striking phenomenon, a different type of experience from others. Yet, this was not the only thing motivating early discussion of the concept of emotion, perhaps not even the most prominent. For example, there was the fact that salient events intrude upon - and interrupt - goal directed behavior and thought. They may also elicit unplanned behavior and thought, "affecting" the person. Terms like pathema (Greek), affectus (Latin), and passion (French and English) all indicate some sort of passivity and control of behavior that is contrasted with action. Such intrusions often were extended to desires, thoughts, plans, and behaviors that persist over time, and may lead to performing actions regardless of costs, obstacles or moral objections.
Yet another phenomenon that comes up is that the individual's relationship with the environment and other people often changes - leading the person to draw back, turn away or approach with eagerness. In many cases such changes appear due to the meaning of some aspect of the environment rather than to its physical characteristics.
A third phenomenon consists of recurrent patterns of behavior (e.g. smiling, laughter, weeping or violent outbursts) which frequently accompany changes in relationship, and appear predictive of future behavior (e.g. smiling = friendly conduct; angry outbursts not).
Finally, there is the phenomenon of bodily upset, along with disorganized behavior and thought. These are what led Descartes to the term "emotion" itself (derived from a French word meaning "riot" or "unruliness").
All four of these phenomena require explanations from "within" the person. For psychologists, they demand hypotheses about possible causal factors. Whether taken as feeling or just as some inner state or process, emotion fulfills the function of rendering the phenomena intelligible and their consequences more predictable.
The notion of emotion thus fulfils the role of explaining discrepancies of various sorts. For example, different people react differently in the same situation, and the same person may react differently to similar situations on different occasions. People hold tenuous beliefs in the face of contrary evidence, and sometimes act differently than they say they will. Emotions allow us to hypothesize reasons for such behaviors.
Ultimately, then, the sources for the concept of emotion include a variety of phenomena:
- shifts in the control of behavior and thought
- involuntary and impulsive behaviors (including expression)
- the emergence or tenacity of beliefs
- changes in the relationship with the environment
- physiological changes not caused by physical conditions
All of these usually occur in response to external events, the person's actions or thoughts. They usually have appreciable consequences for the person's goals or conduct. They also tend to occur in conjunction with each other, leading to the assumption of emotions by the individual. These are what the psychology of emotion tries to deal with.
- The Task of the Psychology of Emotion:
The task of psychology is to analyze these states and to explain them at the level of the individual. Psychology seeks explanations of emotion in terms of cognitive, motor, and other processes that are attributes of individuals, together with their capacities for goal setting and planning, their attentional and energy resources, and the like. These include the various kinds of information that such processes have to work with and that are stored within the individual (e.g. innate sensitivities, stored facts, cognitive schemas, habits, etc.)
The psychology of emotions also considers the individual's dynamic interactions with the environment. These bring in sensory stimuli and how they are taken, the effects of the environment on how well these are perceived, effects of the individual's actions on the environment and their feedback, changes over time in both the environment and the individual, and the individual's anticipations concerning all of these.
Psychological explanations are thus composed of three terms: (1) the structure of the individual; (2) stored information; and (3) dynamic interaction with the environment. How emotional phenomena emerge from what corresponds to these three terms raises several problems. For example, there are many ways that one might emphasize one or another, and the fact that each plays a role gives little guidance as to their explanatory weighting. On the one hand, assumptions of basic emotions or innate, prepared stimulus sensitivities weigh heavily in favor of (1) and (2) as opposed to (3). Conversely, the hypothesis that all emotions are variants of very general affect and arousal mechanisms, or result from a general sensitivity to goal interruption, emphasizes (3) over (1) and (2). In one type of explanation the emphasis is on complex structure; in the other, a complex environment.
Theorizing generally tries to find an optimal balance between structure and adaptation to information. Without unnecessarily complicating things, what is considered an optimal balance depends on the empirical data, as well as on the investigator's overall perspectives and taste. Within those aims, there are still important differences in the kinds of explanations being sought. One can seek explanations in terms of intentional aims, subject-object relations, and the meanings of events. One can also look for explanations at the psychological or functional level (e.g. mechanisms/consequences). And one can look for them at the structural level (e.g. neurophysiological and biochemical processes). These various modes of explanation coexist in psychology and, in principle, are mutually compatible.
These different modes also leave room for quite different explanatory approaches. One may seek regularities or laws dealing with general relationships between variables (e.g. anger results from frustration). Alternatively, one may simply seek explanatory rules with a more limited scope subject to unspecified restrictions (e.g. these may be read into contexts).
- What is "an Emotion"?
Whatever type of explanation is chosen, it is not fully obvious what the phenomena to be explained are. Observable phenomena can be described and analyzed in very different ways and at very different levels. Thus, the recurrent discussion and divergence of theories. Efforts to describe an emotion in the sense of a type (e.g. joy or anger) illustrate very clearly this problem of choosing one's level of description. Some theoretical approaches may focus on one component (e.g. feeling or physiological arousal); others describe emotions as sets of components with a deterministic or probabilistic structure. Some may view emotions as states, others as processes ranging from appraisal to behavioral response. Jealousy, for instance, can be understood to refer to a particular feeling, or to the process that runs from the appraisal of a particular three-person constellation as a threat, to feelings of anger or distress, and to the desire to do something about the threat.
An important difference concerns the level of conceptualization of emotions that is considered optimal. Emotions can be viewed primarily as intrapersonal states (e.g. feelings, states of arousal, or activation of motor patterns). They may also be viewed as interactive states involving the subject, an object, and their relationship. The former approach abandons or minimizes the intentional nature of emotional experience; the latter does the same for physiological factors.
Psychological attempts to define "an emotion" in particular instances run into similar problems of definition and analysis. It matters, for example, how long one thinks emotions last, and whether they are seen as fast emergency provisions or as something more enduring. Similarly, it matters whether one sees emotions as transactions dealing with a particular issue, or something defined by a particular core relational theme or overall appraisal (e.g. loss or threat). Still other possibilities include describing them at the level of the prevailing mode of action readiness, or at the level of elementary emotional phenomena such as facial expressions or physical arousal.
When subjects themselves are asked to recall an emotional instance, they usually report an episode at the transactional level - ranging from 5 seconds to several days. During this episode, appraisals may change and different emotions co-occur or succeed each other. On the other hand, if "an emotion" is defined by the occurrence of a particular facial expression, then emotions last for 5 seconds at most.
Which level one selects as representing an emotion is largely arbitrary and should be no topic for disagreement. Emotion units as defined at higher levels are usually complexes made up of these more basic processes. As such, they form the building blocks or ingredients for any theory of emotions. Analyses at different levels are thus not necessarily incompatible with each other. However, care must be taken to make sure that the assumptions underlying analyses at different levels leave room for each other.
- What are Emotions?
The question remains whether the phenomena for which the word "emotion" is being used include a class of events with sufficient specificity and functional unity to justify a single concept. Moreover, how are these distinct from cognition or conation? Specificity and unity of "emotion" are commonly assumed, but this is not necessarily the case. With regard to the former, James assumed that emotional experience is no different from any other behavior called forth by key stimuli and emerging from the cerebral cortex. Landis and Hunt argued that there is nothing specific about emotional experience, and that it partakes of the nature of a judgment. Duffy simply subsumed emotion under an organism's level of activation.
Similarly, one may deny the unity assumption. How do reactions involving goal directed action (e.g. anger) have anything in common with mere reactive excitement? Here it is argued that the various emotions may not derive from shared mechanisms.
Ultimately, little agreement exists among psychologists about the features of emotions that might characterize unity and specificity. Indeed, while several rather specific features have been posited, these define overlapping but non-identical sets of phenomena (e.g. feelings of pleasure and pain cannot be readily reduced to bodily sensations or cognitive judgments). Yet, affect is often evaluative; indeed it introduces value to the world of fact. To explain the arousal of affect, then, one has to assume some process that turns a simple event into an evaluated event (e.g. appraisal). This may be automatic or involve active cognitive assessment of a stimulus. One influential view (Lazarus, 1991) has it that emotions are the results of appraising events as promoting or obstructing one's well-being, concerns, motives, or current goals.
Other authors have given the central place to desire, or the impulse to act, implying assumptions of forms of action instigation and action control that are neither automatic, habitual nor planned. These ideas are among the main reasons to consider emotions as "affecting" the individual. Impulsive action instigation, in turn, requires assumptions about the psychological apparatus that are unnecessary in the explanation above. It sets emotion apart from cognition and conation. Emotions here are viewed as processes involving involuntary, non-habitual action control or "action readiness." Impulsive action instigation is conspicuous in certain reactions, such as desire, surprise or amazement, that a definition of emotions in terms of affect leaves out. These ideas thus delimit overlapping but non-identical domains.
Another matter to consider is the supposed involuntary nature of emotions: feelings traditionally are not seen as something produced by the individual, but reactions to selected stimuli. Current psychology is critical of this, and posits some form of agency in response. However, like "will," such concepts are not easy to fit into the cognitive science perspective. All the same, assigning, accepting or carrying responsibility does not just seem to be arbitrary, and this produces emotional and ethical implications.
What specifies and unifies emotional phenomena may not be one or the other of the various components, but a process that connects them. For example, one may reserve the word "emotion" for states of synchronization of the various components, or to occurrences of affect that produce a change in action readiness (all hunger is unpleasant, but would be considered an emotion only when it leads to restlessness and an urge to find food). Similarly, emotions can be restricted to the various response components or their patterns when elicited by cognitive appraisal. Such redefinitions meaningfully focus on those constellations of factors that involve some impact on the individual's life or behavior - restricting the domain but making it more coherent.
Yet, just as there are arguments to restrict the domain of emotion, others urge psychologists to enlarge it. Some, for example, distinguish emotions, emotional attitudes and sentiments (being scared by a dog and being afraid of dogs). Emotions have a limited duration, but sentiments may persist over a lifetime. Nevertheless, both may have a similar structure (focused on an object, its appraisal, accompanied by a propensity to act). Both may affect one's behavior (e.g. avoiding places where dogs are likely), and attitudes may turn into emotional incidents at the slightest provocation. Thus, given these similarities, the argument is that they are but variants of the same thing, and may be placed in a single category.
All of this discussion involves the definition of emotion, its difficulties, the debates and divergences in emotion theory surrounding this issue. Yet, these are not merely unprofitable matters of academic taste, because whether a person has or does not have an emotion is a meaningful issue that is hard to avoid (e.g. Can they be faked? Are people responsible or not?). It is better to replace the question of whether or not a given state is an emotion by the more analytic question of which of the various components (appraisal, action readiness, control precedence) are - or are not - involved.
- How are we to Distinguish Different Emotions?
What makes one emotion different from another has been a prominent research question, and has led to a search for information that might account for this. Such sources can be found in any of the components or in their combinations. In the past, attention was focused on supposedly irreducible qualities (e.g. patterns of physiological autonomic response; feeling states as defined by affect and state of activation). Work over the last several decades has emphasized other possibilities, such as states of action readiness and their awareness, overt or covert motor behavior, and felt patterns of appraisal. Distinctions also come from the type of eliciting event or core relational theme.
Which of the components should be preferred in making distinctions between emotions? The answer depends on the assumptions one makes about the relationship between the components. Three kinds stand out: (1) those that assert one component has causal priority over the others (e.g. physiology); (2) a view holding that there exist hypothetical dispositions that underlie all components together (e.g. basic emotions/ functional systems); (3) emotions as more or less unordered collections of components, activated in different combinations by different eliciting events and given various ecological, cultural or linguistic labels.
Several investigators have taken this third option, which seems better able to deal with cultural differences in emotion categories, as well as with differences in the precise semantic content of similar categories in different languages. It also deals with appreciable differences in the structure of given emotions that appear to exist within a culture. On the other hand, a basic emotions view is better at understanding uniformities of emotion across cultures, and may provide for differences by noting that the precise antecedents may vary (e.g. each component may have its own facilitating conditions in addition to being called up by a central emotion process).
Yet, all of the above assumes that these labels reflect structure among the phenomena in question. A different approach is possible: emotion labels may reflect prototypes or scripts of cultural origin that to some extent prescribe the phenomena. This social constructionist view argues that one behaves as the emotional script for a given circumstance demands. The strong form of this view is implausible given evidence suggesting a biological basis for emotions. However, it does point to one of the forces that might shape the patterns of phenomena, and the potentially formative role of emotion labels. Labels may not only reflect, they may signify the significance attached to them in the first place. This may signal a major entry point for processes of emotional regulation.
This multi-component nature of emotional phenomena reflects a looseness in structure that fits viewing emotion categories as fictions. The same can be said for distinctions between different categories of affective phenomena such as emotions, feelings, moods and sentiments. These reflect a deeper and more general issue: using substance concepts rather than function concepts to understand emotion: the former are static, reflecting states of things; the latter allow change, reflecting processes. In much work, emotions are treated as nouns, states or things - reflecting our language - but for psychological analysis at a functional level, it may be better to treat emotions as the varying phenomenal results of processes, reflecting verbs (e.g. "one is joying"). From this perspective, the very notions of emotion and of the different emotions may be abandoned. One can describe the various phenomena directly in terms of the processes and avoid needless discussions about categorical boundaries (e.g. mood vs. emotion) as processes are graded in strength, and making cuts at certain levels of strength is arbitrary. Replacing categories by processes may be extended to emotions themselves and even to their components (e.g. assemblies of separate facial expression components). These components can be defined functionally in terms of types of actions (e.g. attention) and linked to appraisal component processes.
Employing the process level rather than the category level turns the relationships between components into a subject for unprejudiced empirical research on a number of questions: which processes are linked with others? Which linkages are due to joint response to the same stimuli, and which others are associated? Similarly, this approach is relevant to the issue of which phenomena belong to emotion itself, and which are its antecedents or consequences? (e.g. is expression a consequence of emotion, or part of it?) Indeed, this latter issue seems to lose much of its sense when "emotion" is considered a collection of processes instead of a single, integrated entity. It adds further questions, such as how stimuli or thoughts determine particular processes.
An additional question that emerges out of thus reframing the issue is to what extent processes that logically follow the component processes act back upon them. Emotion processes are probably not linearly organized, and a nonlinear dynamic model may be more adequate (e.g. facial expressions may influence others' responses, which, in turn, may affect the subject's original facial expression). Indeed, this may also account for internal feedback from the subject's actual or anticipated response.
Finally, there is the issue of how solidly given emotional sub-processes follow each other. This really asks how strongly secondary conditions such as personality, mood, the state of the organism and coincidences in the physical and social situation determine the appearance of a particular response.
- What are the Relations between Emotion and Motivation?
The relations between emotion and motivation constitute another perennial problem for psychology. This is hardly surprising, as the term motivation has been as problematic as the term emotion. One can view motivation as a cause of emotion, as one of its major aspects, and as one of its consequences. Some have argued for an abandonment of the emotion-motivation distinction, but both notions can be kept apart by the distinction between dispositions and occurrent motivational states. In the former there is a tendency to readiness; in the latter there is action readiness that arouses behavior and drives it forth when an event is urgent or promises satisfaction (e.g. an upsurge of lust). Some have termed dispositional readiness instincts; others speak of emotions as the readouts of motivation; still others speak in cognitive terms of "goals," with emotions the responses to their achievement or frustration.
Some have argued that emotions such as fear and lust, and motivations such as the desire to escape or possess, are related as causes and consequences, yet this separation has appeared artificial to others. Again, the problem largely disappears when one conceives both domains in process terms. It changes into the question of under which conditions a change in action readiness does or does not depend upon prior appraisal or feeling (e.g. is perhaps triggered directly by stimulus perception).
Finally, there is the question of whether every emotion involves some motivational change (e.g. joy and sadness do not necessarily have a motivational goal). Wider conceptions may seem needed to bring these into a common perspective with fear, anger, etc.
- What Elicits Emotions?
Emotions are generally regarded as being caused by external events or by thoughts, apart from physiological causes such as biochemical changes and neural discharges. Defined as responses to events, the question arises as to the nature of those events that are the antecedents to emotion. Can they be reduced to simple causal principles?
There have been several approaches to this question. One proposes that emotions are responses to certain unconditioned stimuli, while others may be evoked by conditioning. This classical behaviorist proposal only appeared to account for a fraction of what actually elicits emotions.
A second approach came from later behaviorism, considering emotions to be aroused not by particular stimuli, but by contingencies consisting of the actual or signaled arrival or termination of pleasant or unpleasant events. This has been augmented by a consideration of the subject's coping resources in the face of such contingencies.
A third approach gives the subject-event interaction a still stronger role in three ways: (1) promotion or obstruction of the subject's concerns; (2) how these may differ from one subject to another; and (3) a focus on how the subject has appraised the relevance of events to these concerns. Ultimately, in this approach, emotion arousal is viewed as depending on the individual's cognitive or associated appraisal processes.
This focus on emotion arousal being determined by the meaning of events for the individual's concerns has a long and distinguished intellectual history. Nevertheless, it has always encountered problems. The evidence for concerns often emerges only after the occurrence of emotions. Another problem is that people's actions are often motivated by the goal of achieving pleasure and avoiding pain. Finally, the structure of people's concerns is largely unclear here. Clarification is thus needed.
In addition, the various approaches noted here fail to account for the cognitive emotions of surprise and boredom. Perhaps it would be better to modify the concern-satisfaction view by arguing that many emotions result from meeting or thwarting expectancies.
The various approaches thus elaborated may not be mutually exclusive alternatives. Emotions may spring from many sources.
- Nature or Nurture?
Here we come to the question of how much of emotion can be seen to be the result of innate mechanisms and biological processes, and how much is the result of individual learning in the social environment.
That emotions have a biological basis is something that probably nobody contests. The evidence for neurological and neurochemical mechanisms is fairly compelling, but their precise nature remains unclear (e.g. do the limbic mechanisms control motivational states, impulses and action readiness or do they control integration of behavioral patterns/ affective sensitivity to particular stimuli?) In any case, the capacity for affect is rooted in the human constitution, since emotion cannot be functionally nor phenomenologically reduced to cognitions and judgments. The processes of appraisal themselves rest upon innate capabilities. Moreover, there are strong indications that there exist innate dispositions related to specific emotions, or at least to forms of action readiness such as satisfaction seeking, hostility and self-protection (e.g. neuropsychological findings, action patterns, facial expressions). Also, one can make a strong case for the universality or near universality of the contingencies that typically elicit those emotions, and the near universal lexical terms in different languages.
By itself, however, universality does not prove biological origin. Major emotions may correspond to universal contingencies or core themes such as threat, loss or success, but these may alternatively be seen as universal occasions for learning, contexts for universally similar problem solving, or dynamic compilations of action patterns (e.g. revenge may not be innate, but a response to the fact that harm is universally painful and people are - or may become - aware of common things that may modify the behavior of attackers, such as kicking, shouting, throwing things). There is thus more than one way to explain instances of universality.
Biological dispositions and cultural determinants are neither incompatible nor mutually exclusive. It may only be useful to stress that the role of cultural differences in emotional phenomena depends to an important degree on one's level of analysis (e.g. shame differs in Western and Arabic societies, but both may represent the same sensitivity to social acceptance and the same motivation to correct/prevent deviations from norms). Universality may lurk behind cultural specificity without detracting from the specific meanings of each cultural form. Conversely, culture determines not only specifics, but also universals (e.g. sensitivity to social acceptance is also a cultural value). Symbolic capacities and social interactions penetrate every phenomenon.
Still, it is usually not very clear how biological dispositions and cultural determinants interact. It is also unclear how emotions that have an important cognitive component (e.g. regret) relate to biological mechanisms and basic emotion disposition. These have to be further worked out.
- Emotion and Reason:
The traditional contrast of emotion and reason is still very much with us. Reason was often associated with logic and rational solutions; emotion with confusion, being led astray, and behavior that one would later regret.
Such contrasts have been mitigated in modern theory. For example, the renewed emphasis on the role of cognition in emotions, the recognition of the "rationality" of emotions (as aids to rational behavior), and the functional nature of emotional reactions themselves. Indeed, emotional behavior is often considered appropriate to the eliciting event as appraised by the person. Yet, contrasts between emotion and rationality remain. For example, affect can be aroused without a cognitive antecedent, to "prepared" stimuli, and to the conditioned stimuli in traumatic conditioning. Emotion, it is argued, does not always need inferences; nor do all cognitions that are relevant for well-being actually elicit or modify emotions (e.g. showing a person with arachnophobia that spiders are harmless rarely helps).
The irrationality of emotions is still there as well. That irrationality lurks in every emotion is suggested by the almost ubiquitous presence of emotion regulation and self-control. In such cases, rationality has an ally, built into the very emotion mechanisms, that serves self-interest at many levels (i.e. is not there merely to satisfy social conventions).
Emotions can also be irrational, in the sense of producing suboptimal results. They may be harmful in the short or long run (e.g. people in panic get crushed in the rush; stage fright spoils performance; rage may lead to childish behavior and upset relationships).
It is true that one can always think of some function for any behavior (e.g. stage fright = a show of helplessness that invites the audience's indulgence). Many emotions seem irrational only when the individual's appraisals are neglected (even though these at times may themselves be irrational). Explaining irrationality in this fashion suggests an irrational conclusion. Instead we are stuck with the conclusion that, whatever their possible functions, the disturbance of optimal functioning by emotions is dysfunctional and irrational. These issues may be out of fashion, but must ultimately be dealt with.
- The Functions of Emotions:
The negative view of emotions dominated earlier theorizing in psychology, but nowadays emotions are being viewed as adaptively useful. Hence, the functional perspective now dominates. This is plausible because of biological data and evolutionary explanations. It is also so because the range of possible functions appears wider than only dealing with opportunities and threats that the individual faces (e.g. joy may serve readiness for new exploits, assist in recovery from previous stress, and invite others to participate; shame and guilt are powerful regulators of social interaction).
One has to be careful with functional interpretations because they exist in two varieties that are not always kept distinct. There are evolutionary and proximal functions. Emotions may have been functional in dealing with the contingencies that made them come into existence in evolution (e.g. sex serves the survival of the species). However, emotions may also be functional for what they accomplish once they are there (e.g. sex for pleasure and intimacy). Many emotions are functional in the latter sense, as contributing to social bonding between oneself and others and sources of human interest (e.g. guilt and grief).
The evolutionary perspective almost obliges one to see emotions as functional provisions, and such hypotheses seem to come very easily these days (e.g. anger is innate as it protects one's territory and offspring; apathy in grief saves energy). Yet, nobody was around during evolution to gauge these benefits against the corresponding costs of alternatives. Evolutionary hypotheses often resemble lazy thinking, failure to examine implications, or failure to consider alternative possibilities. Such possibilities include dynamic explanations where emotions develop on the spot as a result of their immediate material and social effects, and the notion that certain emotional phenomena may be chance offshoots of something quite different.
One may nevertheless grant that, overall, emotions are functional for adaptation. How can this be reconciled with their instances of irrationality and disturbance of optimal functioning? Some have tried to distinguish different types of emotion. Other explanations focus on limited resources for emotion regulation, exhaustion, or the fact that certain emotional predicaments are simply inescapable.
Many irrational or dysfunctional instances of emotion are due to a common feature of functioning: reactions that are in principle functional are being applied far beyond the contexts in which they are of use. Grief may be functional when it prompts, say, a child's mother to return to the room, but may serve no purpose in bereavement (at least according to Frijda).
A further angle is that human intellectual and cultural development have outrun evolution. Emotions may have been adaptive for coping with the risks and opportunities of the savannah, and with the use of fists and stone tools. They may not be adaptive any more for dealing with present day interactions in our sophisticated, technological society. Present day anger and greed have become perversions because the emotion systems did not develop along with these cultural conditions. The psychology of emotions needs to examine this as well.
Will these 10 perennial problems in the psychology of emotion remain with us forever? Such problems are often not solved because they reflect particular world-views or limits in capacities for conceptualization. Perhaps the scope of these problems may be narrowed by achieving more insight into how their proposed solutions are related to each other.
Psychological explanations of emotional phenomena are sought at different levels. Answers to some questions may initially appear incompatible when in fact they are answers to different questions at different levels of the phenomena. They may actually complement each other.
Frijda argues, as well, that the study of emotion will be advanced when the processual model receives more attention. Only the first efforts are being made to construct models of the processes of appraisal and the inner structure of goals, for example, and intentional phenomena should be clarified in terms of functionally defined processes. This will facilitate jumps between levels such as intention and neurophysiological processes.
All of this is important for advances in emotion research. There is no guarantee that categories of analysis at one level will project onto coherent categories at another level. Still, the relationships between explanations at different levels depend on each other, and it would be profitable if researchers in different areas and on different levels talked more to each other. For example, experimental investigators of emotions often know little about the social and cultural psychology of emotions and vice versa. This restricts the range of emotion elicitors considered. Similarly, students of the neuropsychology of emotion often know little about the contemporary psychology of emotion - frequently writing as if what causes emotion is an electric shock, and as if the paradigm of motivation is "survival." To most psychological researchers, the limbic area is merely somewhere in the brain and the amygdala is an amorphous blob of tissue. There is no real reason why all this should remain this way. |
Helping Your Children Develop Memory Skills
By Andrew Loh
Teaching your children how to enhance memory skills is an art.
Different children have different memory skills and abilities to
remember things. A number of children will have very low memory
powers and such children need extra help from both teachers and
parents. However, making your children remember things out of sheer
memory power could be very exciting, challenging and thrilling. Here
are some well-known approaches that can help to improve your
children's memory skills:
Make it Simple and Easy for your Children to Memorize
It is impossible for your children to hold too many things at a
time. Hence, whatever you teach must be short and simple with no
complexities. Children find it very difficult to memorize very long
sentences.
Example: A sentence that is difficult to understand and remember is "If
you complete reading those two pages and get ready for tomorrow's
exam, I will allow you to watch your favorite TV cartoon show."
A very simple and direct instruction could be "When you complete
reading those two pages, you can watch your cartoon."
Portion your Children's Work
Never ever give your children lengthy portions to read. In other
words, when you give a lengthy work, like solving 50 math questions
at a time, your children will find it very difficult to memorize all
of them at a time. Instead, segregate the work into small portions
and ask your children to work on them in individual chunks. Let your
children provide you the feedback for each of those sections before
you give the next one.
The best way to teach your children how to improve their memory skills is to hook different ideas to one important concept and then teach by hooking and relating.

Example: Let us say you are teaching a lesson related to history. It is easier for your children to remember the whole lesson when you create a series of related events connected to the main theme. Big ideas can be very meaningful for your children. It is possible to streamline your children's memory skills by making them learn and understand the main theme of the story.
Adapt Lessons and Instructions
All children have their own methods of learning and remembering. You may wish to learn and understand your children's learning style before teaching memory-improving skills. Some children can pick up instructions when their teachers read them out aloud, while others need to look at written sheets to comprehend the meaning. Some students need very slowly paced instruction, while others can understand it even at a fast pace.
Use the Basic Techniques of Mnemonics
Tools that help your children improve memory skills can assist them in enhancing their power of remembering. It is possible to enhance working memory skills and remember specific tasks by using these tools.

Mnemonics are hidden clues of any type that help your children remember lessons and instructions. They work when your children associate the information they want to remember with a meaningful visual image or a simple sentence.

Some of the most common types of these devices are:

Visuals: Here, you represent and connect images and visuals with words and expressions. Use only positive images while highlighting things; unpleasant and negative images could be counterproductive. Let the image be colorful, vibrant and interesting to your children. Example: Use the image of a microphone to remember a name like Mike.
Acronyms: The initials of sentences and words can help your children remember various things and objects.
Memory Enhancement Food
It is also possible to enhance memory skills by providing good nutrition to your children. A diet based on fresh fruits, vegetables, meat and whole grains can help develop children's brains. Vitamins and minerals also help develop the nervous system: B-group vitamins assist the proper development of neurons and gray matter in the brain, while omega-3 fatty acids are wonderful natural substances that act as effective antioxidants, preventing damage to the body. Antioxidants can improve blood flow and bring in extra oxygen. Examples of good food items are:
Spinach and other dark leafy greens,
blueberries and other berries,
nuts and seeds,
fish such as salmon, herring, tuna, halibut, and mackerel
walnuts and walnut oil
flaxseed and flaxseed oil |
The clutch is the component that joins the engine to the gearbox. Its central purpose is to let you control when the engine's power is sent to the wheels. The clutch is a complicated moving mechanism. It usually consists of a flywheel and a clutch disc, which together keep the crankshaft rotating steadily. They work with each other to send engine power to the gearbox, and they allow the drive to be interrupted while a gear is selected to move off from a standstill, or when gears are changed while the car is running.
Various Parts of Clutch
The modern clutch system normally has four main components: the cover plate (which includes a diaphragm spring), the pressure plate, the driven plate, and the release bearing. The cover plate is bolted to the flywheel, and the pressure plate presses on the driven plate through the diaphragm spring (or coil springs on earlier cars). The driven plate runs on a splined shaft between the pressure plate and the flywheel. It is faced on each side with a friction material which grips the pressure plate and flywheel when fully engaged, and which can slip by a measured amount when the clutch pedal is partly depressed, allowing the drive to be taken up smoothly.
How does it work?
The clutch pedal operates the clutch, a device which allows us to connect or disconnect the engine from the drive wheels, either totally or partially, and which is used for moving off, changing gear, and stopping. Clutch control combines the skills of moving off and stopping.
The clutch in its simplest form consists of two circular plates. One of these plates is attached to the engine and the other to the gearbox. The clutch works as a friction coupling: the plates are pressed together and transmit the drive through friction.
In order to guarantee continuous motion, a flywheel with a large moment of inertia is added to keep the crankshaft rotating steadily. A large moment of inertia means a large angular momentum, and the system becomes resistant to sudden changes in the rotation of the crankshaft. This is important when you change gear.
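As a rough, simplified illustration of why the flywheel's inertia matters (standard rotational mechanics, ignoring friction and load, rather than anything specific to this article): angular momentum is $L = I\omega$ and torque obeys $\tau = I\,d\omega/dt$, so

$$\frac{d\omega}{dt} = \frac{\tau}{I},$$

meaning that for the same torque fluctuation $\tau$ from the engine's power strokes, a larger moment of inertia $I$ produces a smaller change in the rotational speed $\omega$ of the crankshaft.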
When the clutch pedal is pushed, the flywheel and clutch disc become disengaged and no power is transferred to the wheels. Both carry friction facings on the surfaces that face each other. When you release the clutch pedal, the friction faces of the flywheel and the clutch disc come into contact, both rotate at the same speed and, as a result, power is sent to the wheels.
In a standard gearbox, when the clutch is engaged, it carries power from the engine to the transmission, which drives the vehicle. When you press the clutch pedal, the clutch disengages, breaking the connection between the engine and the transmission. This lets the driver shift gears. While the clutch is disengaged, the vehicle will not move forward even if the accelerator pedal is pressed, because the engine is now disconnected from the transmission.
So, this is how the clutch works in modern cars. Do also check out how a car engine works. |
In countries around the world, natural disasters have been much in the news. If you had a hunch such calamities were increasing, you’re right. In 2017, hurricanes, earthquakes, and wildfires cost $306 billion worldwide, nearly double 2016’s losses of $188 billion.
Natural disasters, driven by climate change, extreme weather, and aging or poorly designed infrastructure, among other factors, represent a significant risk to human life and communities. Globally, $94 trillion in new investment is needed to keep pace with population growth, with a large portion of that going toward repair of the built environment. These projects have long cycles due to government authorization processes, huge financial investments, and multi-year building efforts. We need to think creatively about how to accelerate these processes now.
National, state, and local governments and organizations are also grappling with how to update disaster management practices to keep up. The Internet of Things (IoT), artificial intelligence (AI), and machine learning can help. These technologies can improve readiness and lessen the human and infrastructure costs of major events when they do occur. Disaster modeling is an important start and can help shape comprehensive programs to reduce disasters and respond to them effectively.
Anticipating disasters with better data
Fortunately, the science of predicting what’s coming keeps getting better, enabling federal agencies—FEMA, NASA, NOAA—and municipalities to prepare. Organizations already use sensor data, LoRa devices, wireless radio frequency technology, and satellite imagery to predict the impact of disasters. For example, disaster management teams could monitor IoT networks of weather base stations in the Caribbean as an early warning system for hurricanes and tropical storms and sensors on trees for drought conditions that increase the risk of forest fires. SkyAlert, an early warning system in Mexico, uses a mobile app, standalone devices, and an IoT solution that runs on Microsoft Azure to provide alerts to millions of residents up to two minutes before a quake hits, enabling them to move swiftly to safety.
Preparedness at a granular level
IoT sensors and devices that are embedded in infrastructure assets make it possible for public safety officials and development planners to monitor data on roads, bridges, buildings, energy grids, and public transportation—in real time. They can prioritize preventive maintenance and repairs; evaluate whether structures can withstand a coming weather event while continuing normal operations; and close unsafe assets. IoT could make sudden failures like bridge collapses—and the corresponding loss of life and mobility chokepoints—a thing of the past.
Government agencies and localities can also apply AI and machine learning to IoT data sets to predict disaster impacts, so they can identify staging areas, evacuation routes, and flood areas. Such information helps organizations marshal response efforts, as Duke Energy did when it staged 20,000 professionals across the Carolinas to respond to Hurricane Florence.
During a crisis, IoT technology can help by continually updating which evacuation routes are no longer available and what transit options are up and running, for safer, faster mass people movement. Say there’s a fire in a building or a stadium: IoT-powered systems can help direct individuals to all approved exits, while providing updates on which to avoid.
Responding more efficiently
The first 72 hours of a disaster’s aftermath are crucial. Emergency management teams must coordinate, set up operations, search for survivors, and take steps to minimize environmental crises such as chemical contamination. AI and IoT technologies underpin much of this initial response, aggregating and analyzing data such as:
- Drone and satellite imagery
- IoT infrastructure data
- AI-powered chatbots
- 911 and reverse 911 systems
- Social media data, such as pleas for help or Facebook’s “mark yourself safe” feature
- Online heat maps
All this information can help teams identify urgent needs, prioritize responses, and avoid wasted effort, but only if it’s decipherable. AI quickly makes sense of the vast torrents of data created during crises and can also predict future developments, such as the potential aftershocks of an earthquake or additional flooding.
Azure Digital Twins technology is a new example of where public safety and emergency response are headed. Digital Twins provides a virtual representation of physical spaces that models relationships among people, places, and devices.
Take the example of a hospital that’s isolated by flooding. A team could model the building layout, identify the location of the highest-needs patients, assess the extent of infrastructure damage, stage vehicles in available parking, and plan evacuation all from a secure location. First responders would know when entrances and exits are blocked, find other ways in, move rapidly through spaces, and deliver triage faster. Combining IoT sensor data, social media messages, mobile communications from first responders, and robot exploration of degraded spaces enables this technology to save more lives faster.
Azure Maps is another example of how public safety and emergency response are evolving. Its traffic services provide real-time traffic flow and incident data, which can power digital signs diverting the public away from an incident, for example. As a result, emergency responders can more easily find the fastest path to help those in need.
Improved planning and relief efforts through analytics
AI and machine learning can help public safety officials refine strategies over time, getting smarter about planning and response. AI can be used to analyze event data for patterns, identify current at-risk areas and populations, and model future needs, based on population growth, development, and climate change, among other variables. Government leaders can use these insights to craft policies that reduce the impact of disasters on communities, like planning new buildings in less vulnerable areas.
Not every crisis is avoidable, but we now have the technology to predict and prevent catastrophes such as oil spills or building collapses. When unpredictable natural disasters do strike, responders can gain access to real-time data that directs aid to where it needs to be, faster, reducing additional loss of life.
Following a crisis, hindsight is 20/20. But AI and machine learning are making foresight a lot easier when it comes to disaster management. |
The wave of border changes that swept over Europe after the end of the First World War also affected the very north of Germany, where the collapsed empire bordered on neutral Denmark. As in many other regions with mixed populations, plebiscites were held in border Schleswig, on the initiative of the Entente, on February 10 and March 14, 1920. Schleswig, which had been taken from Denmark at the end of the war of 1864, voted predictably: the German-Danish border moved to the south. The north of Schleswig chose to return to Denmark, while the southern districts chose to stay with Germany. This border has remained unshaken to this day, although both sides had convenient moments to change it: in 1940, when all of Denmark was captured by the Wehrmacht in a matter of hours, and in 1945, when Germany again lost a world war.
The plebiscites of 1920 were preceded by an agitation campaign. Studying the Danish and German posters, we can conclude that the degree of mutual hatred between Germans and Danes did not reach the level observed between Germans and Poles in Upper Silesia a year later, in March 1921. |
This lesson plan can be used as a companion to Module 7 of Money and Youth – Are You an Entrepreneur?
Relevant Subjects and Topics:
Entrepreneurship, Business Studies, Careers, Family Studies
As adolescents prepare to enter adulthood, they are faced with many decisions that will affect the remainder of their lives. One obvious issue is the daunting decision of career choice. With the rapidity of change facing them, they may well experience a number of different job situations over the course of their working lives, but initially they have choices to make. In order to make wise decisions they should “get to know themselves” by understanding such things as their desired lifestyle and their degree of tolerance for change, independence and risk, to name only a few. This lesson will direct their attention to entrepreneurship in order to have them reflect upon this option and see if it is suitable for them. In this way, the students will come to “know themselves a little better” and will be better able to understand whether this type of career is for them.
At the end of this lesson, students will be able to:
- Explain what is involved in being an entrepreneur
- Indicate whether or not this career choice is of interest to them
- Better identify their personal career choice preferences
Time for Implementation:
Two class periods of approximately 60 minutes
Teaching and Learning Strategies:
Period One: 60 minutes
- Begin this lesson by asking the students to define the term “entrepreneur.”
- Once they have provided their definition, conduct a teacher-led quiz using the quiz on page 81 of Money and Youth.
- Take up the answers to the quiz found on page 82.
- Divide the class into four groups and either project the image of the skeleton found under “Handouts/Resources” below or provide each group with a copy of the slide.
- Ask them to complete the slide in their groups and then hold a plenary session during which they can give their ideas.
- Introduce an activity called “Heads Together” during which they will be in competition with the other groups and have to address a series of questions.
- Explain how the game will work:
- A question will be asked and then a call of “Heads together” will be made.
- The group will literally move closely together, putting their heads in so they can discuss their answer quietly.
- The call “Heads apart” will be made, the students will return to normal posture and then one group will be asked to provide an answer.
- If another group can improve upon the answer with one additional piece of relevant information then they will take the lead in being awarded the points for that question.
- If another group can provide additional information then they will take the lead.
- Once no additional information can be added the group will be awarded the points.
- This process will be repeated for each question and the winning group will be the one with the most points.
- Begin the activity by asking the groups to address the following question:
- What things should an entrepreneur do to search for opportunities?
- Allow a predetermined amount of time for the groups to address the question and then call “Heads Apart” and begin the game.
Period 2: 60 minutes
- Review the previous period’s work and ask the following question and begin the game again:
- How does an entrepreneur assess the opportunities she or he has discovered?
- Once this question has been answered, continue the game by asking:
- Once an entrepreneur has decided on an opportunity, what should be done to generate good ideas about that opportunity?
- What ways would an entrepreneur pursue to ensure that the ideas generated are good ideas?
- Having completed the questions and identified a winning group, and, as a concluding activity, assign one of the questions used in Heads Together to each group.
- Ask them to check the appropriate pages of Module 7 in Money and Youth and compare their answers to the material in the book and report any additional information to the class.
- The group results could be recorded.
Modifications or Suggestions for Different Learners:
- Throughout the group activities there are opportunities for different skills to be utilized - such as recording, reporting and presenting.
Additional Related Links:
Additional Possible Activities:
- The students could write a short piece indicating whether or not they thought they were an entrepreneur giving reasons for their response.
- The students could research who in Canada was considered to be a top-notch entrepreneur and why. |
Although Java is object-oriented to a great extent, it isn't a pure object-oriented language. One of the reasons Java isn't purely object-oriented is that not everything in it is an object. For example, Java allows you to declare variables of primitive types (boolean, etc.) that aren't objects. And Java has static fields and methods, which are independent and separate from objects. This article will advise you on how to use static fields and methods in a Java program while maintaining an object-oriented focus in your designs.
The lifetime of a class in a Java virtual machine (JVM) has many similarities to the lifetime of an object. Just as an object can have state, represented by the values of its instance variables, a class can have state, represented by the values of its class variables. Just as the JVM sets instance variables to default initial values before executing initialization code, the JVM sets class variables to default initial values before executing initialization code. And like objects, classes can be garbage collected if they are no longer referenced by the running application.
Nevertheless, significant differences exist between classes and objects. Perhaps the most important difference is the way in which instance and class methods are invoked: instance methods are (for the most part) dynamically bound, but class methods are statically bound. (In three special cases, instance methods are not dynamically bound: invocation of private instance methods, invocation of init methods (constructors), and invocations with the super keyword. See Resources for more on this.)
Another difference between classes and objects is the degree of data hiding granted by the private access level. If an instance variable is declared private, only instance methods can access it. This enables you to ensure the integrity of the instance data and make objects thread safe. The rest of the program cannot access those instance variables directly, but must go through the instance methods to manipulate them. In an effort to make a class behave like a well-designed object, you can make class variables private and define class methods that manipulate them. Nevertheless, you don't get as good a guarantee of thread safety or even data integrity this way, because a certain kind of code has a special privilege that gives it direct access to private class variables: instance methods, and even initializers of instance variables, can access those private class variables directly.
So the static fields and methods of classes, although similar in many ways to the instance fields and methods of objects, have significant differences that should affect the way you use them in designs.
Treating classes as objects
As you design Java programs, you will likely encounter many situations in which you feel the need for an object that acts in some ways like a class. You may, for example, want an object whose lifetime matches that of a class. Or you may want an object that, like a class, restricts itself to a single instance in a given name space.
In design situations such as these, it can be tempting to create a class and use it like an object in order to define class variables, make them private, and define some public class methods that manipulate the class variables. Like an object, such a class has state. Like a well-designed object, the variables that define the state are private, and the outside world can only affect this state by invoking the class methods.
Unfortunately, some problems exist with this "class-as-object" approach. Because class methods are statically bound, your class-as-object won't enjoy the flexibility benefits of polymorphism and upcasting. (For definitions of polymorphism and dynamic binding, see the Design Techniques article, "Composition versus Inheritance.") Polymorphism is made possible, and upcasting useful, by dynamic binding, but class methods aren't dynamically bound. If someone subclasses your class-as-object, they won't be able to override your class methods by declaring class methods of the same name; they'll only be able to hide them. When one of these redefined class methods is invoked, the JVM will select the method implementation to execute not by the class of an object at runtime, but by the type of a variable at compile time.
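To make the difference concrete, here is a minimal sketch (the class names are invented for illustration and are not from the article) showing that a redefined class method is selected by the compile-time type of the variable, while a redefined instance method is selected by the runtime class of the object:

```java
// Hypothetical example: class (static) methods are hidden, not overridden.
class Account {
    static String describe() { return "generic account"; }  // class method
    String label()           { return "generic account"; }  // instance method
}

class SavingsAccount extends Account {
    static String describe() { return "savings account"; }  // hides Account.describe()
    @Override
    String label()           { return "savings account"; }  // overrides Account.label()
}

public class BindingDemo {
    public static void main(String[] args) {
        Account a = new SavingsAccount();
        System.out.println(a.describe()); // prints "generic account": resolved from the declared type
        System.out.println(a.label());    // prints "savings account": resolved from the actual object
    }
}
```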
In addition, the thread safety and data integrity achieved by your meticulous implementation of the class methods in your class-as-object is like a house built of straw. Your thread safety and data integrity will be guaranteed so long as everyone uses the class methods to manipulate the state stored in the class variables. But a careless or clueless programmer could, with the addition of one instance method that accesses your private class variables directly, inadvertently huff and puff and blow your thread safety and data integrity away.
For this reason, my main guideline concerning class variables and class methods is:
If you want some state and behavior whose lifetime matches that of a class, avoid using class variables and class methods to simulate an object. Instead, create an actual object and use a class variable to hold a reference to it and class methods to provide access to the object reference. If you want to ensure that only one instance of some state and behavior exists in a single name space, don't try to design a class that simulates an object. Instead, create a singleton -- an object guaranteed to have only one instance per name space.
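A minimal sketch of that guideline might look like the following; the class name and fields are hypothetical, and a production version would also need to consider lazy initialization and serialization:

```java
// Hypothetical singleton: a real object held in a private class variable and
// reached through a class method, instead of a "class-as-object" built from statics.
public class AppConfig {
    // The single instance lives in a private, final class variable.
    private static final AppConfig INSTANCE = new AppConfig();

    // Ordinary, well-encapsulated instance state.
    private String greeting = "hello";

    private AppConfig() { }                  // private constructor: no other instances

    public static AppConfig getInstance() {  // class method hands out the object reference
        return INSTANCE;
    }

    public String getGreeting()        { return greeting; }
    public void setGreeting(String g)  { greeting = g; }
}
```

Callers would then write AppConfig.getInstance().getGreeting(); the interesting state and behavior live in an ordinary object rather than in class members.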
So what are class members good for?
In my opinion, the best mindset to cultivate when designing Java programs is to think objects, objects, objects. Focus on designing great objects, and think of classes primarily as blueprints for objects -- the structure in which you define the instance variables and instance methods that make up your well-designed objects. Besides that, you can think of classes as providing a few special services that objects can't provide, or can't provide as elegantly. Think of classes as:
Methods that don't manipulate or use the state of an object or class I call utility methods. Utility methods merely return some value (or values) calculated solely from data passed to the method as parameters. You should make such methods static and place them in the class most closely related to the service the method provides.
An example of a utility method is the copyValueOf(char[] data) method of class String. This method produces its output, a return value of type String, solely from its input parameter, an array of chars. Because copyValueOf() neither uses nor affects the state of any object or class, it is a utility method. And, like all utility methods should be, copyValueOf() is a class method.
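Applying the same idea to your own code, a sketch of a user-defined utility method (the class and method below are invented for illustration, not part of any library) might look like this:

```java
// Hypothetical utility class: a stateless helper declared static and placed in
// the class most closely related to the service it provides.
public final class TemperatureUtils {

    private TemperatureUtils() { }  // no instances needed; there is no object state

    // The result depends only on the parameter; no object or class state is read or written.
    public static double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```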
So one of the main ways to use class methods is as utility methods -- methods that return output calculated solely from input parameters. Other uses of class methods involve class variables.
Class variables for data hiding
One of the fundamental precepts in object-oriented programming is data hiding -- restricting access to data to minimize the dependencies between the parts of a program. If a particular piece of data has limited accessibility, that data can change without breaking those portions of the program that can't access the data.
If, for example, an object is needed only by instances of a particular class, a reference to it can be stored in a private class variable. This gives all instances of this class handy access to that object -- the instances just use it directly -- but no other code anywhere else in the program can get at it. In a similar fashion, you can use package access and protected class variables to reduce the visibility of objects that need to be shared by all members of a package and subclasses.
Public class variables are a different story. If a public class variable isn't final, it is a global variable: that nasty construct that is the antithesis of data hiding. There is never any excuse for a public class variable, unless it is final.
Final public class variables, whether of primitive type or object reference, serve a useful purpose. Variables of primitive types or of type String are simply constants, which in general help to make programs more flexible (easier to change). Code that uses constants is easier to change because you can change the constant value in one place. Public final class variables of reference types allow you to give global access to objects that are needed globally. For example, System.in, System.out, and System.err are public final class variables that give global access to the standard input, output, and error streams.
Thus the main way to view class variables is as a mechanism to limit the accessibility of (meaning to hide) variables or objects. When you combine class methods with class variables, you can implement even more complicated access policies.
Using class methods with class variables
Aside from acting as utility methods, class methods can be used to control access to objects stored in class variables -- in particular, to control how the objects are created or managed. Two examples of this kind of class method are the setSecurityManager() and getSecurityManager() methods of class System. The security manager for an application is an object that, like the standard input, output, and error streams, is needed in many different places. Unlike the standard I/O stream objects, however, a reference to the security manager is not stored in a public final class variable. The security manager object is stored in a private class variable, and the set and get methods implement a special access policy for the object.
Java's security model places a special restriction on the security manager. Prior to Java 2 (aka JDK 1.2), an application began its life with no security manager (null). The first call to setSecurityManager() established the security manager, which thereafter was not allowed to change. Any subsequent calls to setSecurityManager() would yield a security exception. In Java 2, the application always starts out with a security manager, but similar to the previous versions, the setSecurityManager() method will allow you to change the security manager one time, at the most.
The security manager provides a good example of how class methods can be used in conjunction with private class variables to implement a special access policy for objects referenced by the class variables. Aside from utility methods, think of class methods as the means to establish special access policies for object references and data stored in class variables.
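A rough sketch of such a set-once access policy, loosely modeled on the security-manager behavior described above (the class name, the use of java.util.logging.Logger, and the choice of exception are assumptions for illustration, not the JDK's actual implementation), might look like this:

```java
import java.util.logging.Logger;

// Hypothetical "set once" access policy: a private class variable holds the shared
// object, and class methods enforce how it may be installed and retrieved.
public final class GlobalLogger {

    private static Logger logger;  // private class variable holding the object reference

    private GlobalLogger() { }

    // The first call installs the logger; any later call is rejected.
    public static synchronized void setLogger(Logger l) {
        if (logger != null) {
            throw new IllegalStateException("logger may only be set once");
        }
        logger = l;
    }

    public static synchronized Logger getLogger() {
        return logger;             // may be null until setLogger() has been called
    }
}
```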
The main point of advice given in this article is:
If you need an object, make an object. Restrict your use of class variables and methods to defining utility methods and implementing special kinds of access policies for objects and primitive types stored in class variables. Although not a pure object-oriented language, Java is nevertheless object-oriented to a great extent, and your designs should reflect that. Think objects.
Next month's Design Techniques will be the last. I'll soon begin writing a book based on the Design Techniques material, Flexible Java, and will place that material on my Web site as I go. So please follow that project along and send me feedback. After a break of a month or two, I'll be back at JavaWorld with a new column focused on Jini.
A request for reader participation
I encourage your comments, criticisms, suggestions, flames -- all kinds of feedback -- about the material presented in this column. If you disagree with something, or have something to add, please let me know.
This article was first published under the name Design with Static Members in JavaWorld, a division of Web Publishing, Inc., February 1999.
Bill Venners has been writing software professionally for 12 years. Based in Silicon Valley, he provides software consulting and training services under the name Artima Software Company. Over the years he has developed software for the consumer electronics, education, semiconductor, and life insurance industries. He has programmed in many languages on many platforms: assembly language on various microprocessors, C on Unix, C++ on Windows, Java on the Web. He is author of the book: Inside the Java Virtual Machine, published by McGraw-Hill. |
In 1964, Benjamin Bloom expressed his belief that there are two domains of learning: the cognitive and the affective. Unfortunately, affective variables have not been thoroughly studied in the research and papers dealing with the process of second language acquisition and learning. Empathy, aggression, anxiety, imitation, inhibition, introversion and extroversion are affective variables that have only been slightly tackled by specialists in the domain of SLA (Brown, 2006).
“Becoming bilingual is a way of life. Your whole person is affected as you struggle to reach beyond the confines of your first language and into a new language, a new culture, a new way of thinking, feeling and acting” (Brown, 1994). According to Douglas Brown, among others, learning a foreign language is a process that requires a lot of energy and dedication, and one that involves a series of psychological, intellectual and emotional variables.
According to Xu and Huang (2010), the emotions that affect language acquisition can be classified into personality factors and factors related to the relationship between the teacher and the students. The personality factors include motivation, self-esteem, anxiety and inhibition. The other category includes empathy, classroom transactions and cross-cultural processes.
Inhibition in pronunciation
Inhibition is considered a personality factor closely related to self-esteem. Inhibition in pronunciation may manifest itself in perfectionist students, who have more difficulty learning a second language than those who are more open and have a higher tolerance of ambiguity. Some students are so embarrassed and self-conscious that they do their best to avoid the classroom conversation activities which are essential for the development of language skills.
The second language is not a comfortable language, not even for those who have lived in the culture of the respective language and have attended schools where teaching was done in the second language. Adults have many inhibitions and attitudes about speaking a foreign language and are therefore less likely to attempt meaningful learning.
Cause of inhibition in pronunciation
Brown (1994) pointed out that many students are inhibited during foreign language classes because learning a language involves a certain amount of self-exposure and making mistakes. Students may consider these mistakes as a threat to their self-esteem.
Speaking activities can fail due to students’ inhibition. These activities imply exposure in front of the whole class, and this can give shier students stage fright. Students might be inhibited by the fear of being criticized or laughed at. SLA students often do not have confidence in their speaking abilities and believe that they do not have enough language skills to express what they want to say.
The teacher’s attitude towards mistakes is of utmost importance. If the teacher has a negative attitude, then he or she can create learning blocks which hinder the acquisition of language. However, nowadays more and more foreign language teachers are focusing their attention on students’ abilities rather than on their weak points. In doing so, they help their students overcome their inhibition in pronunciation (Brown, 2000).
Dominant students are also a source of language inhibition. There are always such students in almost every SLA class, and they prevent the shier students from expressing their opinions and knowledge. Dominant students interrupt and constantly seek the teacher’s attention, creating a class environment in which timid students happily stand aside.
Good pronunciation is also hindered by the constant use of the mother tongue during SLA classes. Students who insist on using their first language are afraid of being criticized, and therefore the teacher has to encourage them to speak in the target language. The learners must understand that using their mother tongue slows down their oral progress.
Brown (2000) considers that inhibition in pronunciation is also due to learners’ inability to speak with a proper accent. Many people consider a native-like accent an extremely important proof of successful acquisition. Many adults who have acquired a second language can master grammar and communicative functions, yet they still speak with a foreign accent. This does not mean that their acquisition of the second language was not successful.
Teaching strategies for overcoming inhibition in pronunciation
There are many teaching strategies that can be used to help students overcome their inhibition in pronunciation. Certain strategies may be more effective with certain learners than with others. Some students may benefit a lot from the regular application of the same strategies, while others may need a wider range of strategies.
The teacher plays a very important role in helping students overcome their language inhibitions. He or she must be very alive, dramatic and enthusiastic. The more uninhibited the teacher, the more likely the students are to lose their inhibitions (Harmer, 2006).
As Harmer (2006) points out, the teacher has to be active; he or she must raise the arms, click the fingers and point to the student in order to get him or her to respond. Students must not be asked to “Repeat after me”; instead, they have to be trained to answer in quick-fire fashion to the teacher’s movements. Students should be encouraged to use dramatic and exaggerated gestures when role-playing or when reading dialogues.
The teacher has to be unpredictable. For example, when students are repeating sentences and then using substitutions in the sentences, the teacher should spin around, as he points to the student expected to answer. This has to be done in a random way so the students are kept alert, not knowing who is going to be asked next (Harmer, 2006).
The position of the students’ desks is also important for overcoming inhibition. The best solution is to put the desks in a circle so as to create open space and room for movement (Harmer, 2006).
When students use words and gestures in ways they have never done before, their self-confidence increases since they know that during the class, they can become a different person. This helps students a lot in overcoming their language inhibition thus leading to a successful acquisition of the second language. |
A group of scientists at the University of Bristol has announced that they have successfully encapsulated radioactive material inside diamonds. This feat would convert radioactive radiation into sustainable electricity via diamond batteries.
The result? Zero CO2 emissions and clean energy for several thousand years, according to the Korii website, which reports the information.
Everyone knows that the management of waste from the nuclear sector – underground, in power plants or in warehouses – is a real concern.
A DIAMOND THAT GENERATES ELECTRICITY
By closely studying the famous carbon-14, the team found that its radioactivity was located mainly on its outer surface and that, after heating, it turned into gas and evaporated. The researchers were then able to capture this gas and solidify it into diamond, another form of carbon. In doing so, they found that the diamond thus created generated electricity.
The scientists then encapsulated it in a larger, non-radioactive diamond. To their satisfaction, this diamond battery was able to generate current over an extremely long period of time without emitting any pollution. A real revolution.
Korii specifies that a battery of this type would allow a smartphone, an electric car, or any electronic device to function for its entire lifetime without ever having to be recharged.
It could also equip pacemakers so that the battery is not changed during an operation, as well as space exploration modules in the context of very long missions.
The University of Bristol, keen to collect as many ideas as possible, has launched a social media campaign using the hashtag #diamondbatteries. |
Apartheid was a system of racial segregation enforced through legislation in South Africa by the National Party government from 1948 to 1994. The word “apartheid” is an Afrikaans word meaning “the state of being apart”. The government classified the population into four main racial groups, which would be the premise of the majority of the laws and policies to come for South Africa. The racial classifications that the government defined were White, B...
... middle of paper ...
... discrimination and segregation since Apartheid ended; however, there is much progress that still needs to be made. With the non-whites who are in power now lacking education, many aspects of the government have suffered. Nevertheless, the current changes that have been put into action regarding education have given the non-whites who are young now the ability to learn the essential things needed for their future and the country’s future, and have opened a wider scope of possible jobs. With all the citizens of South Africa having the same mandatory level of education, the academic gap between whites and non-whites will begin to diminish. If progress continues the way it has since the end of Apartheid, racial discrimination and segregation will decrease, and the non-whites will stop being seen as inferior and will instead be seen as equals.
Step back in time 290 million years to when bizarre-looking creatures dominated life on land and sea, and dinosaurs had not yet evolved. Find out about the most devastating mass extinction the world has ever seen.
Permian Monsters: Life Before the Dinosaurs, opening Friday, Jan. 8, blends vivid artwork, amazing fossils and full-size scientifically accurate models of moving beasts to recreate this relatively unknown period that ended with the most devastating extinction of life.
Visitors will explore odd-looking sharks, strange reptilelike precursors of mammals, a vicious giant saber-toothed gorgonopsid, and other extinct creatures that ruled the world millions of years before the dinosaurs.
The Permian period lasted from 299 to 251 million years ago and produced the first large plant-eating and meat-eating animals. The period ended with the extinction of some 90% of all life. What caused this mass extinction had baffled scientists for the last 20 years, but a recent discovery shed new light on the cause: global warming.
“The Permian is a really interesting time in the history of life and the history of the earth,” said Ted Daeschler, PhD, a vertebrate paleontologist at the Academy and professor at Drexel University. “This is a great opportunity for kids and adults to explore an unfamiliar part of the fossil record.”
Exhibits Senior Director Jennifer Sontchi and Vertebrate Zoology Associate Curator Ted Daeschler explain the creatures of the Permian. Credit: MLTV-Main Line Network
Visitors will learn how this now familiar phenomenon — the long term warming of the planet — was triggered millions of years ago in another geological period by a huge volcanic eruption that set off a chain of events that led to the vast extinction of plants and animals.
Today we are experiencing another mass extinction of species, but this time there is a different trigger to global warming than in the Permian period. In the current geological period, called Anthropocene, human activity has been the dominant influence on climate and the environment.
“There is much to be learned by looking at our past to understand our future,” said Academy President and CEO Scott Cooper. “Permian Monsters: Life Before the Dinosaurs is an excellent place for both adults and children to start to understand how life looked millions of years ago and what that means for us today.
“These beasts are truly fascinating to see, and some people may think they resemble the dinosaurs,” Cooper said. “It’s hard to imagine them walking and swimming through our world today, yet life as we know it today traces its origins to some of these creatures. We hope everyone who visits Permian Monsters will be inspired to take care of our precious world.”
The Art and Science of Permian Monsters
Permian Monsters blends art and science with a collection of new vivid artwork created through the vision of award-winning paleo-artist Julius Csotonyi. Visitors will see casts of fossilized skeletons, scientifically accurate 3D sculptures, and full-size beasts including seven that move with animatronics.
View giant insects, bizarre-looking sharks, long extinct sea creatures and strange herbivorous and carnivorous reptilelike animals that predated mammals. Meet the top predator of the time and find out what nearly killed them all to make way for Earth’s next rulers: the dinosaurs!
Permian Monsters: Life Before the Dinosaurs will be on view through Jan. 17, 2022. Permian Monsters was developed by Gondwana Studios, Tasmania, Australia.
Admission to Permian Monsters is by timed ticket; to purchase tickets, visit ansp.org.
The Permian Explained
What is the Permian?
The Permian is a geological record that began nearly 300 million years ago, almost 50 million years before the Age of the Dinosaurs. During the Permian the first large herbivores and carnivores became widespread on land. The Permian ended with the largest mass extinction in the history of the Earth.
What was the Earth like?
During the Permian, the Earth’s land masses were joined in one supercontinent known as Pangea.
The Permian began toward the end of an ice age; therefore, the Earth was cooler than it is today. By the end of the early Permian, the icecaps had melted and the Earth slowly warmed up, becoming a lush green planet where both animal and plant life thrived.
What life forms existed during the Permian?
Plant life consisted mostly of ferns, conifers and small shrubs.
Animals included sharks, bony fish, arthropods, amphibians, reptiles and synapsids. The first true mammals would not appear until the next geological period, the Triassic.
How long did it last?
The Permian Period lasted nearly 47 million years. It ended 252 million years ago with the start of the Triassic Period.
How did it end?
The Permian ended with the largest mass extinction in the history of Earth: some 90% of all plant and animal life was wiped out. By the end of the Permian, the Earth had become a “biological desert.”
By Carolyn Belardo. Photos by Bruce Tepper.
Please consider a donation to support the Academy’s efforts to ensure a healthy, sustainable and equitable planet. |
The Planck constant, or Planck's constant, denoted $h$, is a physical constant that is the quantum of electromagnetic action, which relates the energy carried by a photon to its frequency. A photon's energy is equal to its frequency multiplied by the Planck constant. The Planck constant is of fundamental importance in quantum mechanics, and in metrology it is the basis for the definition of the kilogram.
| Constant | Value | Units |
|---|---|---|
| $h$ | 6.626 070 15 × 10⁻³⁴ | J⋅s |
| $\hbar = h/2\pi$ | 1.054 571 817 × 10⁻³⁴ | J⋅s |
| $hc$ | 1.986 445 86 × 10⁻²⁵ | J⋅m |
| $\hbar c$ | 197.326 980 | MeV⋅fm |
At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which is still considered to have been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, Max Planck empirically derived a formula for the observed spectrum. He assumed that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, $E$, that was proportional to the frequency of its associated electromagnetic wave. He was able to calculate the proportionality constant, $h$, from the experimental measurements, and that constant is named in his honor. In 1905, the value $E$ was associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic wave. It was eventually called a photon. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".
Since energy and mass are equivalent, the Planck constant also relates mass to frequency.
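As a brief illustration (a standard identity rather than anything specific to this article), equating the rest energy of a mass $m$ with the energy of a quantum of frequency $\nu$ gives the frequency associated with that mass:

$$E = mc^2 = h\nu \quad\Rightarrow\quad \nu = \frac{mc^2}{h}.$$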
Origin of the constant
Every physical body spontaneously and continuously emits electromagnetic radiation. At low frequencies, Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies (i.e. small wavelengths) it tends to the Wien approximation, but there was no overall expression or explanation for the shape of the observed emission spectrum.
Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum. To create Planck's law, which correctly predicts blackbody emissions by fitting the observed curves, he multiplied the classical expression by a factor that involves a constant, $h$, in both the numerator and the denominator, which subsequently became known as the Planck constant.
The spectral radiance of a body, $B_\nu$, describes the amount of energy it emits at different radiation frequencies. It is the power emitted per unit area of the body, per unit solid angle of emission, per unit frequency. Planck's law gives it as

$$B_\nu(\nu, T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_\mathrm{B}T} - 1},$$

where $k_\mathrm{B}$ is the Boltzmann constant and $c$ is the speed of light.
The spectral radiance can also be expressed per unit wavelength instead of per unit frequency. In this case, it is given by

$$B_\lambda(\lambda, T) = \frac{2hc^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda k_\mathrm{B}T} - 1}.$$
The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. The SI units of $B_\nu$ are W·sr⁻¹·m⁻²·Hz⁻¹, while those of $B_\lambda$ are W·sr⁻¹·m⁻³.
Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of despair … I was ready to sacrifice any of my previous convictions about physics." One of his new boundary conditions was
to interpret UN [the vibrational energy of N oscillators] not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts. Let us call each such part the energy element ε;
With this new condition, Planck had imposed the quantization of the energy of the oscillators, "a purely formal assumption … actually I did not think much about it…" in his own words, but one which would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation":

$$E = h\nu.$$
Planck was able to calculate the value of $h$ from experimental data on black-body radiation: his result, 6.55 × 10⁻³⁴ J⋅s, is within 1.2% of the currently accepted value. He also made the first determination of the Boltzmann constant from the same data and theory.
Development and application
The black-body problem was revisited in 1905, when Rayleigh and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The very first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta".
The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually given to Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, after his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photoelectric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and because of dissent among its members as to whether relativity had actually been proven.
Before Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterize different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit area (and hence consumes more electricity) than the ordinary bulb, even though the color of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have an intensity. However, the energy account of the photoelectric effect did not seem to agree with the wave description of light.
The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.
Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation: E = hf.
Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light and the kinetic energy of photoelectrons was shown to be equal to the Planck constant h.
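A small numerical sketch of this energy balance is given below: the photon energy is hf, and the maximum kinetic energy of a photoelectron is hf minus the work function of the surface. The frequency and the 2.3 eV work function used here are assumed, illustrative values rather than measured data.

```java
public class PhotoelectricSketch {
    static final double H = 6.62607015e-34;   // Planck constant, J*s
    static final double EV = 1.602176634e-19; // one electronvolt in joules

    public static void main(String[] args) {
        double frequency = 1.0e15;     // incident light frequency, Hz (assumed)
        double workFunctionEv = 2.3;   // assumed work function of the surface, eV

        double photonEnergyEv = H * frequency / EV;
        double kineticEnergyEv = photonEnergyEv - workFunctionEv;

        System.out.printf("Photon energy:       %.2f eV%n", photonEnergyEv);
        System.out.printf("Max. kinetic energy: %.2f eV%n", kineticEnergyEv);
        // Raising the intensity (more photons per second) changes how many
        // photoelectrons are emitted, not this per-electron kinetic energy.
    }
}
```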
Niels Bohr introduced the first quantized model of the atom in 1913, in an attempt to overcome a major shortcoming of Rutherford's classical model. In classical electrodynamics, a charge moving in a circle should radiate electromagnetic radiation. If that charge were to be an electron orbiting a nucleus, the radiation would cause it to lose energy and spiral down into the nucleus. Bohr solved this paradox with explicit reference to Planck's work: an electron in a Bohr atom could only have certain defined energies En = −hcR∞/n²,
where c is the speed of light in vacuum, R∞ is an experimentally determined constant (the Rydberg constant) and n is a positive integer (n = 1, 2, 3, …). Once the electron reached the lowest energy level (n = 1), it could not get any closer to the nucleus (lower energy). This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant in terms of other fundamental constants.
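For context, the empirical Rydberg formula that Bohr's model reproduces, and the standard expression the model yields for the Rydberg constant in terms of other fundamental constants, are

$$\frac{1}{\lambda} = R_\infty\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right), \qquad R_\infty = \frac{m_\mathrm{e} e^4}{8\varepsilon_0^2 h^3 c},$$

where m_e is the electron mass, e the elementary charge and ε₀ the vacuum permittivity.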
Bohr also introduced the quantity ħ = h/2π, now known as the reduced Planck constant, as the quantum of angular momentum. At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and, despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond the Bohr model. The correct quantization rules for electrons – in which the energy reduces to the Bohr model equation in the case of the hydrogen atom – were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. In modern terms, if J is the total angular momentum of a system with rotational invariance, and Jz the angular momentum measured along any given direction, these quantities can only take on the values J = √(j(j+1)) ħ and Jz = mħ, where j is a non-negative integer or half-integer and m takes the values −j, −j+1, …, j.
The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given numerous particles prepared in the same state, the uncertainty in their position, Δx, and the uncertainty in their momentum, Δp, obey Δx Δp ≥ ħ/2,
where the uncertainty is given as the standard deviation of the measured value from its expected value. There are several other such pairs of physically measurable conjugate variables which obey a similar rule. One example is time vs. energy. The inverse relationship between the uncertainty of the two conjugate variables forces a tradeoff in quantum experiments, as measuring one quantity more precisely results in the other quantity becoming imprecise.
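As a rough illustration of this tradeoff, the sketch below computes the minimum momentum uncertainty for an electron whose position is known to within about an atomic diameter; the 0.1 nm confinement length is an assumed, illustrative value.

```java
public class UncertaintySketch {
    public static void main(String[] args) {
        double hbar = 1.054571817e-34;   // reduced Planck constant, J*s
        double deltaX = 1.0e-10;         // assumed position uncertainty, m (~ atomic size)

        // Heisenberg uncertainty principle: deltaX * deltaP >= hbar / 2
        double minDeltaP = hbar / (2.0 * deltaX);
        System.out.printf("Minimum momentum uncertainty: %.2e kg*m/s%n", minDeltaP);

        // Corresponding velocity spread for an electron
        double electronMass = 9.1093837015e-31; // kg
        System.out.printf("Velocity spread: about %.0f km/s%n", minDeltaP / electronMass / 1000.0);
    }
}
```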
In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones of the entire theory lies in the commutator relationship between the position operator x̂ and the momentum operator p̂: [x̂i, p̂j] = iħδij,
where δij is the Kronecker delta.
This energy is extremely small in terms of ordinarily perceived everyday objects.
The de Broglie wavelength λ of the particle is given by λ = h/p, where p is the linear momentum of the particle.
In applications where it is natural to use the angular frequency (i.e. where the frequency is expressed in terms of radians per second instead of cycles per second or hertz) it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant. It is equal to the Planck constant divided by 2π, and is denoted ħ (pronounced "h-bar"): ħ = h/2π.
The energy of a photon with angular frequency ω = 2πf is given by E = ħω,
while its linear momentum relates to the wavenumber through p = ħk,
where k is an angular wavenumber. In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but the quantum wavelength of any particle. This was confirmed by experiments soon afterward. This holds throughout the quantum theory, including electrodynamics.
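A short numerical sketch of the de Broglie relation: for an electron moving at an assumed, illustrative speed of 1×10⁶ m/s (slow enough that the non-relativistic momentum p = mv is adequate), the matter wavelength comes out close to atomic dimensions.

```java
public class DeBroglieSketch {
    public static void main(String[] args) {
        double h = 6.62607015e-34;               // Planck constant, J*s
        double electronMass = 9.1093837015e-31;  // kg
        double speed = 1.0e6;                    // assumed electron speed, m/s (non-relativistic)

        double momentum = electronMass * speed;  // p = m v
        double wavelength = h / momentum;        // de Broglie relation: lambda = h / p
        System.out.printf("de Broglie wavelength: %.2e m%n", wavelength); // ~7.3e-10 m
    }
}
```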
Problems can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI. In the language of quantity calculus, the expression for the "value" of the Planck constant, or of a frequency, is the product of a "numerical value" and a "unit of measurement". When we use the symbol f (or ν) for the value of a frequency it implies the units cycles per second or hertz, but when we use the symbol ω for its value it implies the units radians per second; the numerical values of these two ways of expressing the value of a frequency have a ratio of 2π, but their values are equal. Omitting the units of angular measure "cycle" and "radian" can lead to an error of 2π. A similar state of affairs occurs for the Planck constant: the symbol h is used when its value is expressed in J⋅s/cycle, and the symbol ħ when its value is expressed in J⋅s/rad. Since both represent the value of the Planck constant, but in different units, their "values" are equal while their "numerical values" have a ratio of 2π.
These two relations are the temporal and spatial parts of the special relativistic expression using 4-vectors.
Classical statistical mechanics requires the existence of h (but does not define its value). Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value. Instead, it must be some integer multiple of a very small quantity, the "quantum of action", now called the reduced Planck constant or the natural unit of action. This is the so-called "old quantum theory" developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden, but quantum laws constrain them based on their action. This view has been largely replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist, rather, the particle is represented by a wavefunction spread out in space and in time. Thus there is no value of the action as classically defined. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain either quantization of energy or the lack of classical particle motion.
The Planck constant has dimensions of physical action; i.e., energy multiplied by time, or momentum multiplied by distance, or angular momentum. In SI units, the Planck constant is expressed in joule-seconds (J⋅s or N⋅m⋅s or kg⋅m2⋅s−1). Implicit in the dimensions of the Planck constant is the fact that the SI unit of frequency, the hertz, represents one complete cycle, 360 degrees or 2π radians, per second. An angular frequency in radians per second is often more natural in mathematics and physics, and many formulas use the reduced Planck constant ħ = h/2π (pronounced "h-bar").
In atomic units, the reduced Planck constant serves as the unit of action, so ħ = 1 (equivalently, h = 2π).
Understanding the 'fixing' of the value of h
Since 2019, the numerical value of the Planck constant has been fixed, with a finite number of significant figures. The present definition of the kilogram states that "The kilogram [...] is defined by taking the fixed numerical value of h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of the speed of light c and the duration of the hyperfine transition of the ground state of an unperturbed caesium-133 atom, ΔνCs." This implies that mass metrology is now aimed at determining the mass of one kilogram, and it is thus the kilogram which is being refined. Every experiment aiming to measure the mass of the kilogram (such as the Kibble balance and the X-ray crystal density method) will essentially refine the value of the kilogram.
As an illustration of this, suppose the decision to make h exact had been taken in 2010, when its measured value was 6.62606957×10−34 J⋅s, and that the present definition of the kilogram had been enforced from then on. The mass of one kilogram would subsequently have been refined to 6.62607015/6.62606957 ≈ 1.0000001 times the mass of the International Prototype of the Kilogram (IPK), neglecting the shares of the metre and second units, for the sake of simplicity.
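The refinement factor in this thought experiment is just the ratio of the two h values; a minimal check of the arithmetic:

```java
public class KilogramRefinement {
    public static void main(String[] args) {
        double hFixed2019 = 6.62607015e-34;    // exact value fixed in the 2019 SI, J*s
        double hMeasured2010 = 6.62606957e-34; // value hypothetically fixed in 2010, J*s

        // Ratio by which the kilogram would have been refined in this scenario
        double ratio = hFixed2019 / hMeasured2010;
        System.out.printf("Refinement factor: %.9f%n", ratio); // about 1.000000088
    }
}
```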
Significance of the value
The Planck constant is related to the quantization of light and matter. It can be seen as a subatomic-scale constant. In a unit system adapted to subatomic scales, the electronvolt is the appropriate unit of energy and the petahertz the appropriate unit of frequency. Atomic unit systems are based (in part) on the Planck constant. The physical meaning of the Planck constant suggests some basic features of our physical world.
The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant (the quantum of action) is very small. One can therefore regard the Planck constant as relevant only to the microscopic scale, rather than to the macroscopic scale of our everyday experience.
Equivalently, the order of the Planck constant reflects the fact that everyday objects and systems are made of a large number of microscopic particles. For example, green light with a wavelength of 555 nanometres (a wavelength that can be perceived by the human eye to be green) has a frequency of 540 THz (540×1012 Hz). Each photon has an energy E = hf = 3.58×10−19 J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, NA = 6.02214076×1023 mol−1, with the result of 216 kJ/mol, about the food energy in three apples.
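The two figures quoted in this example (the single-photon energy and the molar energy) follow directly from E = hf and the Avogadro constant; a minimal sketch:

```java
public class PhotonEnergySketch {
    public static void main(String[] args) {
        double h = 6.62607015e-34;        // Planck constant, J*s
        double avogadro = 6.02214076e23;  // Avogadro constant, 1/mol
        double frequency = 540e12;        // green light at ~555 nm, Hz

        double photonEnergy = h * frequency;           // energy of a single photon, J
        double molarEnergy = photonEnergy * avogadro;  // energy of one mole of photons, J/mol

        System.out.printf("Single photon: %.2e J%n", photonEnergy);            // ~3.58e-19 J
        System.out.printf("One mole of photons: %.1f kJ/mol%n", molarEnergy / 1000.0); // ~215-216 kJ/mol
    }
}
```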
In principle, the Planck constant can be determined by examining the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first calculated in the early twentieth century. In practice, these are no longer the most accurate methods.
Since the value of the Planck constant is now fixed, it is no longer determined or calculated in laboratories. Some of the methods given below, formerly used to determine the Planck constant, are now used to determine the mass of the kilogram. All of the methods below, except for the X-ray crystal density method, rely on the theoretical basis of the Josephson effect and the quantum Hall effect.
The Josephson constant KJ relates the potential difference U generated by the Josephson effect at a "Josephson junction" with the frequency ν of the microwave radiation. The theoretical treatment of Josephson effect suggests very strongly that KJ = 2e/h.
The Josephson constant may be measured by comparing the potential difference generated by an array of Josephson junctions with a potential difference which is known in SI volts. The measurement of the potential difference in SI units is done by allowing an electrostatic force to cancel out a measurable gravitational force, in a Kibble balance. Assuming the validity of the theoretical treatment of the Josephson effect, KJ is related to the Planck constant by KJ = 2e/h.
A Kibble balance (formerly known as a watt balance) is an instrument for comparing two powers, one of which is measured in SI watts and the other of which is measured in conventional electrical units. From the definition of the conventional watt W90, this gives a measure of the product KJ2RK in SI units, where RK is the von Klitzing constant which appears in the quantum Hall effect. If the theoretical treatments of the Josephson effect and the quantum Hall effect are valid, and in particular assuming that RK = h/e2, the measurement of KJ2RK is a direct determination of the Planck constant.
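The algebra behind the last sentence is simple: with KJ = 2e/h and RK = h/e², the product KJ²RK equals 4/h, so measuring it determines h. The sketch below only verifies that identity numerically, deriving KJ and RK from assumed values of e and h rather than from real balance data:

```java
public class KibbleIdentity {
    public static void main(String[] args) {
        double h = 6.62607015e-34;   // Planck constant, J*s (assumed known here)
        double e = 1.602176634e-19;  // elementary charge, C

        double kJ = 2.0 * e / h;     // Josephson constant, Hz/V
        double rK = h / (e * e);     // von Klitzing constant, ohm

        // A Kibble balance effectively measures kJ^2 * rK; recover h from it.
        double hRecovered = 4.0 / (kJ * kJ * rK);
        System.out.printf("K_J = %.6e Hz/V, R_K = %.4f ohm%n", kJ, rK);
        System.out.printf("h recovered from 4/(K_J^2 R_K) = %.8e J*s%n", hRecovered);
    }
}
```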
The gyromagnetic ratio γ is the constant of proportionality between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the applied magnetic field B: ν = γB. It is difficult to measure gyromagnetic ratios precisely because of the difficulties in precisely measuring B, but the value for protons in water at 25 °C is known to better than one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a prime on the symbol for the gyromagnetic ratio, γ′p. The gyromagnetic ratio is related to the shielded proton magnetic moment μ′p, the spin number I (I = 1⁄2 for protons) and the reduced Planck constant ħ by γ′p = μ′p/(Iħ).
The ratio of the shielded proton magnetic moment μ′p to the electron magnetic moment μe can be measured separately and to high precision, as the imprecisely known value of the applied magnetic field cancels itself out in taking the ratio. The value of μe in Bohr magnetons is also known: it is half the electron g-factor ge. Hence
A further complication is that the measurement of γ′p involves the measurement of an electric current: this is invariably measured in conventional amperes rather than in SI amperes, so a conversion factor is required. The symbol Γ′p-90 is used for the measured gyromagnetic ratio using conventional electrical units. In addition, there are two methods of measuring the value, a "low-field" method and a "high-field" method, and the conversion factors are different in the two cases. Only the high-field value Γ′p-90(hi) is of interest in determining the Planck constant.
Substitution gives the expression for the Planck constant in terms of Γ′p-90(hi):
The Faraday constant F is the charge of one mole of electrons, equal to the Avogadro constant NA multiplied by the elementary charge e. It can be determined by careful electrolysis experiments, measuring the amount of silver dissolved from an electrode in a given time and for a given electric current. In practice, it is measured in conventional electrical units, and so given the symbol F90. Substituting the definitions of NA and e, and converting from conventional electrical units to SI units, gives the relation to the Planck constant.
X-ray crystal density
The X-ray crystal density method is primarily a method for determining the Avogadro constant NA but as the Avogadro constant is related to the Planck constant it also determines a value for h. The principle behind the method is to determine NA as the ratio between the volume of the unit cell of a crystal, measured by X-ray crystallography, and the molar volume of the substance. Crystals of silicon are used, as they are available in high quality and purity by the technology developed for the semiconductor industry. The unit cell volume is calculated from the spacing between two crystal planes referred to as d220. The molar volume Vm(Si) requires a knowledge of the density of the crystal and the atomic weight of the silicon used. The Planck constant is given by
The experimental measurement of the Planck constant in the Large Hadron Collider laboratory was carried out in 2011. The study called PCC using a giant particle accelerator helped to better understand the relationships between the Planck constant and measuring distances in space.
- Set on 20 November 2018, by the CGPM to this exact value. This value took effect on 20 May 2019.
- The value is exact but not expressible as a finite decimal; approximated to 9 decimal places only.
- The value is exact but not expressible as a finite decimal; approximated to 8 decimal places only.
- The value is exact but not expressible as a finite decimal; approximated to 10 decimal places only.
- "Resolutions of the 26th CGPM" (PDF). BIPM. 2018-11-16. Retrieved 2018-11-20.
- "2018 CODATA Value: Planck constant". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 2019-05-20.
- "Resolutions of the 26th CGPM" (PDF). BIPM. 2018-11-16. Retrieved 2018-11-20.
- Planck, Max (1901), "Ueber das Gesetz der Energieverteilung im Normalspectrum" (PDF), Ann. Phys., 309 (3): 553–63, Bibcode:1901AnP...309..553P, doi:10.1002/andp.19013090310. English translation: "On the Law of Distribution of Energy in the Normal Spectrum".
- Planck 1914, pp. 6, 168
- Chandrasekhar 1960, p. 8
- Rybicki & Lightman 1979, p. 22
- Shao, Gaofeng; et al. (2019). "Improved oxidation resistance of high emissivity coatings on fibrous ceramic for reusable space systems". Corrosion Science. 146: 233–246. arXiv:1902.03943. doi:10.1016/j.corsci.2018.11.006.
- Kragh, Helge (1 December 2000), Max Planck: the reluctant revolutionary, PhysicsWorld.com
- Kragh, Helge (1999), Quantum Generations: A History of Physics in the Twentieth Century, Princeton University Press, p. 62, ISBN 978-0-691-09552-3
- Planck, Max (2 June 1920), The Genesis and Present State of Development of the Quantum Theory (Nobel Lecture)
- Previous Solvay Conferences on Physics, International Solvay Institutes, archived from the original on 16 December 2008, retrieved 12 December 2008
- See, e.g., Arrhenius, Svante (10 December 1922), Presentation speech of the 1921 Nobel Prize for Physics
- Lenard, P. (1902), "Ueber die lichtelektrische Wirkung", Ann. Phys., 313 (5): 149–98, Bibcode:1902AnP...313..149L, doi:10.1002/andp.19023130510
- Einstein, Albert (1905), "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" (PDF), Ann. Phys., 17 (6): 132–48, Bibcode:1905AnP...322..132E, doi:10.1002/andp.19053220607
- Millikan, R. A. (1916), "A Direct Photoelectric Determination of Planck's h", Phys. Rev., 7 (3): 355–88, Bibcode:1916PhRv....7..355M, doi:10.1103/PhysRev.7.355
- Isaacson, Walter (2007-04-10), Einstein: His Life and Universe, ISBN 978-1-4165-3932-2, pp. 309–314.
- "The Nobel Prize in Physics 1921". Nobelprize.org. Retrieved 2014-04-23.
- Smith, Richard (1962), "Two Photon Photoelectric Effect", Physical Review, 128 (5): 2225, Bibcode:1962PhRv..128.2225S, doi:10.1103/PhysRev.128.2225. Smith, Richard (1963), "Two-Photon Photoelectric Effect", Physical Review, 130 (6): 2599, Bibcode:1963PhRv..130.2599S, doi:10.1103/PhysRev.130.2599.
- Bohr, Niels (1913), "On the Constitution of Atoms and Molecules", Phil. Mag., 6th Series, 26 (153): 1–25, doi:10.1080/14786441308634993
- Mohr, J. C.; Phillips, W. D. (2015). "Dimensionless Units in the SI". Metrologia. 52 (1): 40–47. arXiv:1409.2794. Bibcode:2015Metro..52...40M. doi:10.1088/0026-1394/52/1/40.
- Mills, I. M. (2016). "On the units radian and cycle for the quantity plane angle". Metrologia. 53 (3): 991–997. Bibcode:2016Metro..53..991M. doi:10.1088/0026-1394/53/3/991.
- Nature (2017), "A Flaw in the SI system", Volume 548, Page 135
- Maxwell J.C. (1873) A Treatise on Electricity and Magnetism, Oxford University Press
- Giuseppe Morandi; F. Napoli; E. Ercolessi (2001), Statistical mechanics: an intermediate course, p. 84, ISBN 978-981-02-4477-4
- Einstein, Albert (2003), "Physics and Reality" (PDF), Daedalus, 132 (4): 24, doi:10.1162/001152603771338742, archived from the original (PDF) on 2012-04-15,
The question is first: How can one assign a discrete succession of energy value Hσ to a system specified in the sense of classical mechanics (the energy function is a given function of the coordinates qr and the corresponding momenta pr)? The Planck constant h relates the frequency Hσ/h to the energy values Hσ. It is therefore sufficient to give to the system a succession of discrete frequency values.
- 9th edition, SI BROCHURE. "BIPM" (PDF). BIPM.
- Materese, Robin (2018-05-14). "Kilogram: The Kibble Balance". NIST. Retrieved 2018-11-13.
- Quantum of Action and Quantum of Spin – Numericana
- Moriarty, Philip; Eaves, Laurence; Merrifield, Michael (2009). "h Planck's Constant". Sixty Symbols. Brady Haran for the University of Nottingham.
Genetic resources refer to genetic material in plants and animals that determines useful traits and that can be conserved, characterized, evaluated and used by people to meet their needs. Recent advances in molecular biology, genetics, and applied science in crop and livestock breeding and fisheries have made the use of genetic resources widespread and more valuable. It has also been recognized that genetic diversity of crop and livestock varieties plays a key role in sustainable agricultural practices. Despite some controversy around the relationship between agricultural development and conservation of plant genetic diversity, there is a rich and diverse knowledge base about genetic resource conservation indicating that loss of biodiversity can reduce food security and increase economic risk, threatening the viability and sustainability of many agricultural systems. More specific dangers of reduced biodiversity include: increased vulnerability to insect pests and diseases, negative effects on nutrition due to a decline in the variety of foods, reduction in possibilities for adaptation and use by future generations, and loss of local knowledge about diversity, all of which can directly threaten the livelihoods of rural communities not only in the present but for generations to come.
This workshop brought together researchers from various social and natural science disciplines who have been investigating issues surrounding local-level conservation of crop varieties and livestock species and identifying the factors associated with maintenance of biodiversity at the local level.
Many factors affect the conservation of biodiversity, including demographic changes, technological development, economic factors, and national agricultural policies. However, these factors alone are not sufficient to explain observed overall trends in conservation, or to explain different patterns of conservation among communities subject to similar demographic, economic, and political conditions. To date, institutional aspects of local plant genetic conservation have largely been ignored, with the possible exception of formal institutional issues surrounding intellectual property rights (IPR), mostly applied to the developed-country setting. In this workshop, we addressed the dual, and often inter-related, roles of property rights and collective action for local-level genetic resource conservation in the developing-country setting.
Main objectives of the workshop were to:
- Identify and discuss the various links between a wide array of property rights (to land, other natural resources and germplasm), collective action and local conservation of crop genetic resources, and their effects on rural people's livelihoods, as per the three broad themes discussed above.
- Identify and evaluate the various methods used to investigate the links between property rights, collective action and crop diversity conservation, drawing from diverse disciplines.
- Strengthen the understanding among participating CGIAR centers, NARs, and NGOs of how attention to collective action and property rights can assist in maintaining genetic diversity, with ideas on future priorities for research and action.
Papers presented at the conference are currently undergoing revision to be released as CAPRi Working Papers. The versions presented at the conference are accessible below as drafts not for citation, since revisions are expected.
Cradle of creativity: The case for in situ conservation of agro biodiversity and the role of traditional knowledge and IPRs
by Anil K. Gupta
Full Text (PDF 247K)
The conservation of agricultural biodiversity in Uzbekistan: The impacts of the land reform process
by Eric Van Dusen, Marina Lee, Evan Dennis, Jarilkasin Ilyasov and Sergey Treshkin
Full Text (PDF 177K)
Local governance of coral reef ecosystems: A pattern of local community in protecting marine biodiversity: Lessons from Gili Indah, Lombok, Indonesia
by Aceng Hidayat
Full Text (PDF 324K)
The community registry as an expression of farmers’ rights: Experiences in collective action against the plant variety protection act of the Philippines
by Alywin D. M. Arnejo
Full Text (PDF 143K)
The role of local institutions in the conservation of plant genetic diversity
by Evan Dennis, Jarilkasin Ilyasov, Eric Van Dusen, Sergey Treshkin and Marina Lee
Full Text (PDF 184K)
The dynamics of seed flow among maize growing small-scale farmers in the Central Valleys of Oaxaca, Mexico
by Lone B. Badstue, Mauricio R. Bellon, Julien Berthaud, Alejandro Ramírez, Dagoberto Flores and Xóchitl Juárez
Full Text (PDF 164K)
Local organizations involved in conserving crop genetic resources in Ethiopia and Kenya: What role for on-farm conservation?
by John Mburu and Edilegnaw Walc
Full Text (PDF 244K)
A review of Ugandan national laws and policies that relate to plant genetic resources for food and agriculture (PGRFA)
by John Mulumba Wassva and Fiona Bayiga
Full Text (PDF 133K)
Institutional Innovations Towards Gender Equity in Agrobiodiversity Management: Collective Action in Kerala, India. Martina Aruna Padmanabhan. CAPRi Working Paper 39. Washington DC: IFPRI. 2005.
Facilitating Collective Action and Enhancing Local Knowledge: A Herbal Medicine Case Study in Talaandig Communities, Philippines.
Herlina Hartanto and Cecil Valmores. CAPRi Working Paper 50. Washington DC: IFPRI. 2006.
Local community participation in reversing trends of genetic erosion: The community seed bank approach from Ethiopia
by Bayush Tsegaye
Full Text (PDF 1024K)
The distribution of traditional knowledge about maize in indigenous Maya communities of highland Chiapas, Mexico
by Hugo Perales R., Bruce F. Benz, Teresa Santiago V. and Stephen B. Brush
Full Text (PDF 711K)
Farmers' Rights and Protection of Traditional Agricultural Knowledge. Stephen B. Brush. CAPRi Working Paper 36. Washington DC: IFPRI. 2005.
The Voracious Appetites of Public versus Private Property: A View of Intellectual Property and Biodiversity from Legal Pluralism. Melanie G. Wiber. CAPRi Working Paper 40. Washington DC: IFPRI. 2005.
Geographies of risk and difference in crop genetic engineering and agrobiodiversity conservation
by Kathleen McAfee
Full Text (PDF 244K)
Formal and informal systems in support of farmer management of agro-biodiversity: some policy challenges to consolidate lessons learned. Marie Byström. CAPRi Working Paper 31. Washington DC: IFPRI. 2004.
Property Rights and the Management of Animal Genetic Resources. Simon Anderson and Roberta Centonze. CAPRi Working Paper 48. Washington DC: IFPRI. 2006.
From the Conservation of Genetic Diversity to the Promotion of Quality Foodstuff: Can the French Model of ‘Appellation d’Origine Contrôlée’ be Exported?
Valérie Boisvert. CAPRi Working Paper 49. Washington DC: IFPRI. 2006.
The impacts of collective action and property rights on plant genetic resources
by Pablo Eyzaguirre and Evan Dennis
Full Text (PDF 155K)
Electrical or electronic components are often housed in sealed enclosures to prevent the ingress of water, dust or other contaminants. Because of the lack of ventilation in these enclosures, all of the heat generated by the internal components must be dissipated through the walls of the enclosure via conduction, and then from the external surface of the enclosure to the environment via radiation and natural convection, as shown in figure 1.
Accurately calculating the temperature rise of each component housed inside the enclosure is a complicated task that is best accomplished using computational fluid dynamics and heat transfer software. However, in many cases, being able to estimate the average air temperature within the enclosure based on the dimensions and the material of the enclosure is sufficient to allow you to develop a design that can be further refined through testing.
The path of the heat flow out of the enclosure is represented by the general thermal resistance network shown in figure 2. The four vertical walls of the enclosure can be considered a single wall by combining the areas of each wall. The top and bottom walls of the enclosure have to be evaluated separately since the heat transfer coefficient of each is different.
The heat must first be transferred from the air inside the enclosure to the internal surface of the enclosure walls. Rvi, Rti and Rbi represent the thermal resistances associated with the heat transfer to the internal vertical, top and bottom surfaces respectively. The internal thermal resistances are highly dependent on the number of heat sources, the arrangement of the heat sources and the location of internal structures that affect the air flow within the enclosure. Because of the variability of these parameters, a highly accurate estimate of this thermal resistance using simple hand calculations is not possible. However, by making some assumptions regarding the layout of the components within the enclosure, a reasonable estimate of the internal thermal resistance can be calculated. The following assumptions will be made:
- The heat sources are evenly distributed throughout the enclosure
- The structures in the enclosure do not significantly obstruct the movement of air flow throughout the enclosure
If there are no fans or other type of forced convection within the enclosure the heat transfer from the internal air to the walls is via natural convection. Correlations used to calculate the heat transfer from horizontal and vertical plates will be used to estimate the heat transfer from the top/bottom surface and vertical surfaces respectively. The equations used to calculate the thermal resistance associated with the convective heat transfer from the air inside the enclosure to the walls of the enclosure are as follows:
Convection to Internal Bottom Surface (equation 1)
Ti is the average air temperature inside the enclosure
Tis is the average enclosure internal wall surface temperature
Convection to Internal Top Surface (equation 5)
Convection to Internal Vertical Surfaces (equation 7)
Equations 3, 6 and 8 are developed in the reference listed at the end of this article and are only applicable with air as the flow medium and laminar flow. For most electronic applications the flow within the enclosure will be laminar.
Radiation can account for a significant percentage of the heat transfer in situations involving natural convection, as is the case with a sealed enclosure. The radiation heat transfer from the heat-generating components to the internal walls of the enclosure is represented by Rradi. Since our simplified analysis allows us to estimate only the internal air temperature, we will use that value to calculate Rradi. In most situations the internal air temperature will be lower than the component temperatures; in these cases the calculated internal radiation heat transfer will be understated. Since our goal is to provide an initial estimate of the performance of the enclosure that will be refined later in the design process, this error in the internal radiation heat transfer is acceptable. Rradi is calculated using the following equations:
Radiation to Internal Surfaces
ε is the surface emissivity of the enclosure
Rcond is the conduction thermal resistance of the wall of the enclosure and is given by equation 13
k is the thermal conductivity of the enclosure material
t is the thickness of the enclosure material
The heat generated inside the enclosure is transferred to the surrounding atmosphere from the external surface of the enclosure via natural convection and radiation. As with convection to the internal surfaces, the convection heat transfer from the bottom, top and vertical external surfaces will be evaluated separately. The same convection equations used for the internal surfaces will be used for the external surfaces, with Ti-Tis replaced by Tes-Tamb, where Tamb is the ambient external temperature and Tes is the external surface temperature of the enclosure.
The thermal resistance representing the radiation heat transfer from the external surface of the enclosure to the atmosphere is given by equation 14.
In order to determine the internal average air temperature you must first determine the surface temperature of the enclosure, Tes. Tes cannot be solved for directly and has to be determined using a numerical solver available in any mathematical software, or using the "Goal Seek" function in Microsoft Excel. Performing an energy balance at the surface of the enclosure yields equation 16. The value of Tes is determined by finding a temperature that satisfies equation 16.
The convection thermal resistance from the external bottom surface is obtained by referencing equation 1 and substituting Tes-Tamb for Ti-Tis.
The convection thermal resistance from the external top surface is obtained by referencing equation 5 and substituting Tes-Tamb for Ti-Tis.
The convection thermal resistance from the external vertical surfaces is obtained by referencing equation 7 and substituting Tes-Tamb for Ti-Tis.
The remaining quantity in the energy balance is the total heat generated by the internal components.
With Tes known, Tis can now be calculated using equation 18.
Note: The thickness of the enclosure walls is assumed to be sufficiently small that the internal and external surface areas are approximately the same.
You are now able to calculate Ti using the internal thermal resistances and Tis. As with the calculation of the external wall temperature, the internal average air temperature cannot be calculated directly. Ti is calculated numerically by finding a temperature value that satisfies the internal energy balance given by equation 19.
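Since the numbered equations are not reproduced above, the sketch below solves the external energy balance using commonly quoted simplified flat-plate natural-convection correlations for air in laminar flow (h ≈ C·(ΔT/L)^0.25, with C ≈ 1.42 for vertical surfaces, 1.32 for a hot surface facing up and 0.59 for a hot surface facing down) together with a linearized radiation term. The enclosure dimensions, emissivity and heat load are assumed illustrative values, not data from this article, and a bisection search plays the role of Excel's "Goal Seek".

```java
public class EnclosureSurfaceTemp {
    // Assumed illustrative enclosure: 0.3 m x 0.2 m x 0.15 m (W x D x H), painted surface
    static final double W = 0.30, D = 0.20, HGT = 0.15; // m
    static final double EMISSIVITY = 0.85;              // assumed surface emissivity
    static final double Q = 20.0;                       // assumed internal heat load, W
    static final double T_AMB = 25.0;                   // ambient temperature, deg C
    static final double SIGMA = 5.670374419e-8;         // Stefan-Boltzmann constant, W/m^2K^4

    // Heat leaving the outer surface at temperature tes (deg C) by convection + radiation
    static double heatOut(double tes) {
        double dT = tes - T_AMB;
        if (dT <= 0) return 0.0;
        double aTop = W * D, aBot = W * D, aVert = 2 * (W + D) * HGT;

        // Simplified laminar natural-convection coefficients for air, h = C*(dT/L)^0.25
        double hTop = 1.32 * Math.pow(dT / ((W + D) / 2.0), 0.25); // hot surface facing up
        double hBot = 0.59 * Math.pow(dT / ((W + D) / 2.0), 0.25); // hot surface facing down
        double hVert = 1.42 * Math.pow(dT / HGT, 0.25);            // vertical walls

        // Linearized radiation coefficient: eps*sigma*(Ts^2+Ta^2)(Ts+Ta), absolute temperatures
        double ts = tes + 273.15, ta = T_AMB + 273.15;
        double hRad = EMISSIVITY * SIGMA * (ts * ts + ta * ta) * (ts + ta);

        double aTotal = aTop + aBot + aVert;
        return (hTop * aTop + hBot * aBot + hVert * aVert + hRad * aTotal) * dT;
    }

    public static void main(String[] args) {
        // Bisection on the energy balance heatOut(Tes) = Q (the role of "Goal Seek")
        double lo = T_AMB, hi = T_AMB + 200.0;
        for (int i = 0; i < 60; i++) {
            double mid = 0.5 * (lo + hi);
            if (heatOut(mid) < Q) lo = mid; else hi = mid;
        }
        System.out.printf("Estimated external surface temperature: %.1f C%n", 0.5 * (lo + hi));
    }
}
```

The internal air temperature would then follow from an analogous balance using the internal resistances, as described for equation 19.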
R Simons, "Simplified Formula for Estimating Natural Convection Heat Transfer Coefficient on a Flat Plate", in: Electronics Cooling, Issue: August 2001
What is atherosclerosis?
Atherosclerosis is thickening or hardening of the arteries. It is caused by a buildup of plaque in the inner lining of an artery.
Plaque is made up of deposits of fatty substances, cholesterol, cellular waste products, calcium, and fibrin. As it builds up in the arteries, the artery walls become thickened and stiff.
Atherosclerosis is a slow, progressive disease that may start as early as childhood. However, it can progress rapidly.
What causes atherosclerosis?
It's not clear exactly how atherosclerosis starts or what causes it. However, a gradual buildup of plaque or thickening due to inflammation occurs on the inside of the walls of the artery. This reduces blood flow and oxygen supply to the vital body organs and extremities.
What are the risk factors for atherosclerosis?
Risk factors for atherosclerosis include:
- High cholesterol and triglyceride levels
- High blood pressure
- Type 1 diabetes
- Physical inactivity
- High saturated fat diet
What are the symptoms of atherosclerosis?
Signs and symptoms of atherosclerosis may develop gradually, and may be few, as the plaque gradually builds up in the artery. Symptoms may also vary depending on the affected artery. However, when a major artery is blocked, signs and symptoms may be severe, such as those occurring with heart attack, stroke, or blood clot.
The symptoms of atherosclerosis may look like other heart conditions. See your healthcare provider for a diagnosis.
How is atherosclerosis diagnosed?
First, your doctor will do a complete medical history and physical exam. You may also have one or more of these tests:
- Cardiac catheterization. With this procedure, a long thin tube (catheter) is passed into the coronary arteries. X-rays are taken after a dye is injected into an artery to locate the narrowing, blockages, and other abnormalities of specific arteries.
- Doppler sonography. A special probe is used to direct sound waves into a blood vessel to evaluate blood flow. An audio receiver amplifies the sound of the blood moving through the vessel. Faintness or absence of sound may mean there is a blockage. This is used to identify narrowing of the blood vessels of the abdomen, neck, or legs.
- Blood pressure comparison. Comparing blood pressure measurements in the ankles and in the arms helps determine any constriction in blood flow. Significant differences may mean blood vessels are narrowed due to atherosclerosis.
- MUGA/radionuclide angiography. This is a nuclear scan to see how the heart wall moves and how much blood is expelled with each heartbeat, while the person is at rest.
- Thallium/myocardial perfusion scan. This is a nuclear scan given while the person is at rest or after exercise that may reveal areas of the heart muscle that are not getting enough blood.
- Computerized tomography or CT. This is a type of X-ray test that can see if there is coronary calcification that may suggest a future heart problem.
How is atherosclerosis treated?
Treatment for atherosclerosis may include lifestyle changes, medicine, and surgery.
You can change some risk factors for atherosclerosis such as smoking, high cholesterol levels, high blood sugar (glucose) levels, lack of exercise, poor dietary habits, and high blood pressure.
Medicines that may be used to treat atherosclerosis include:
- Antiplatelet medicines. These are medicines used to decrease the ability of platelets in the blood to stick together and cause clots. Aspirin, clopidogrel, ticlopidine, and dipyridamole are examples of antiplatelet medicines.
- Anticoagulants. Also called blood thinners, these medicines work differently from antiplatelet medicines to decrease the ability of the blood to clot. Warfarin and heparin are examples of anticoagulants.
- Cholesterol-lowering medicines. These are medicines used to lower fats (lipids) in the blood, particularly low-density lipoprotein (LDL) cholesterol. Statins are a group of cholesterol-lowering medicines. They include simvastatin, atorvastatin, and pravastatin, among others. Bile acid sequestrants—colesevelam, cholestyramine, and colestipol—and nicotinic acid are other types of medicine that may be used to reduce cholesterol levels. Your doctor may also prescribe fibrates to help improve your cholesterol and triglyceride levels.
- Blood pressure medicines. Several different groups of medicines act in different ways to lower blood pressure.
With this procedure, a long thin tube (catheter) is threaded through a blood vessel to the heart. There, a balloon is inflated to create a bigger opening in the vessel to increase blood flow. Although angioplasty is done in other blood vessels elsewhere in the body, percutaneous coronary intervention (PCI) refers to angioplasty in the coronary arteries to permit more blood flow into the heart. There are several types of PCI procedures, including:
- Balloon angioplasty. A small balloon is inflated inside the blocked artery to open the blocked area.
- Atherectomy. The blocked area inside the artery is shaved away by a tiny device on the end of a catheter.
- Laser angioplasty. A laser is used to vaporize the blockage in the artery.
- Coronary artery stent. A tiny mesh coil is expanded inside the blocked artery to open the blocked area and is left in place to keep the artery open.
Coronary artery bypass
Most commonly referred to as bypass surgery, this surgery is often done in people who have angina (chest pain) due to coronary artery disease (where plaque has built up in the arteries). During the surgery, a bypass is created by grafting a piece of a healthy vein from elsewhere in the body and attaching it above and below the blocked area of a coronary artery. This lets blood flow around the blockage. Veins are usually taken from the leg or from the chest wall. Sometimes more than one artery needs to be bypassed during the same surgery.
What are the complications of atherosclerosis?
Plaque buildup inside the arteries reduces the blood flow. A heart attack may occur if the blood supply is reduced to the heart. A damaged heart muscle may not pump as well and can lead to heart failure. A stroke may occur if the blood supply is cut off to the brain. Severe pain and tissue death may occur if the blood supply is reduced to the arms and legs.
Can atherosclerosis be prevented?
You can prevent or delay atherosclerosis by reducing risk factors. This includes adopting a healthy lifestyle. A healthy diet, losing weight, being physically active, and not smoking can help reduce your risk of atherosclerosis. A healthy diet includes fruits, vegetables, whole grains, lean meats, skinless chicken, seafood, and fat-free or low-fat dairy products. A healthy diet also limits sodium, refined sugars and grains, and solid fats.
If you are at risk for atherosclerosis because of family history, or high cholesterol, it is important that you take medicines as directed by your healthcare provider.
When should I call my healthcare provider?
If your symptoms get worse or you have new symptoms, let your healthcare provider know.
Key points of atherosclerosis
- Atherosclerosis is thickening or hardening of the arteries caused by a buildup of plaque in the inner lining of an artery.
- Risk factors may include high cholesterol and triglyceride levels, high blood pressure, smoking, diabetes, obesity, physical inactivity, and a diet high in saturated fat.
- Atherosclerosis can cause a heart attack, stroke, aneurysm, or blood clot.
- You may need medicine, treatments, or surgery to reduce the complications of atherosclerosis.
Tips to help you get the most from a visit to your healthcare provider:
- Know the reason for your visit and what you want to happen.
- Before your visit, write down questions you want answered.
- Bring someone with you to help you ask questions and remember what your provider tells you.
- At the visit, write down the name of a new diagnosis, and any new medicines, treatments, or tests. Also write down any new instructions your provider gives you.
- Know why a new medicine or treatment is prescribed, and how it will help you. Also know what the side effects are.
- Ask if your condition can be treated in other ways.
- Know why a test or procedure is recommended and what the results could mean.
- Know what to expect if you do not take the medicine or have the test or procedure.
- If you have a follow-up appointment, write down the date, time, and purpose for that visit.
- Know how you can contact your provider if you have questions.
© 2000-2020 The StayWell Company, LLC. 800 Township Line Road, Yardley, PA 19067. All rights reserved. This information is not intended as a substitute for professional medical care. Always follow your healthcare professional's instructions.
i *= 2*i + i++;
Java evaluates expressions from left to right while respecting operator precedence. The first expression encountered is the compound assignment operator, *=. Since E1 op= E2 is the same as E1 = (T)((E1) op (E2)), we can expand the original expression as follows.
i = (int)((i) * (2*i + i++));
Now the first operator encountered on the left is the simple assignment operator, =. The left operand is the variable i and the right operand is the entire expression ((i) * (2*i + i++)), because the simple assignment operator has lower precedence than all of the operators that follow it on the right.
Assuming i was initialized to 1, Java will first evaluate the left hand operand of the first multiplication operator, so we can write the value 1 in place of i.
i = (int)((1) * (2*i + i++));
Java will see that the right hand operand of the first multiplication operator is the expression (2*i + i++), so Java will evaluate the entire right hand operand. The evaluation of the right hand operand begins by evaluating the expression 2*i. The result is as follows.
(2 + i++)
The next operator is the addition operator. The left operand is the value 2 and the right operand is the postfix expression i++. Both the left and the right operands are evaluated before the addition operation is complete.
(2 + 1)
At this point, the value of i is 2; because i was incremented by the postfix expression.
The result of the above expression is 3, so our original statement can be simplified as follows.
i = (int)((1) * (3)); // At this point, i = 2.
The above can be simplified as follows.
i = (int)(3);
Prior to the evaluation of the simple assignment expression, the value of i is 2. After the evaluation of the assignment expression the value of i will be 3 and the old value will be lost.
Please note that the above process demonstrates what Java does, but it is not the simplest way for people to evaluate expressions. For us, it is a little easier to first go through the entire expression and evaluate the postfix and prefix expressions, and then, on a second pass through the expression from left to right, work through the other operators. I didn't demonstrate that approach above because I wanted to show when the postfix expression really does increment the value of i.
Please also note that the real exam does not focus on operator precedence. The real exam assumes that programmers use parentheses to control the order of evaluation.
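For anyone who wants to verify the walkthrough, here is a minimal runnable version; the initialization i = 1 is assumed, since the original question's setup was not shown.

```java
public class CompoundAssignmentDemo {
    public static void main(String[] args) {
        int i = 1;              // assumed starting value, matching the walkthrough
        i *= 2*i + i++;         // expands to i = (int)((i) * (2*i + i++))
        System.out.println(i);  // prints 3
    }
}
```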
History and Culture
Their traditional home in the mountain valley which held their villages of Cupa (now Warner’s Hot Springs) and Wilakalpa was at the junction of three major California groups: the Cahuilla, the Luiseño, and the Diegueño or Kumeyaay. The Cupeño, Cahuilla, and Luiseño all speak Cupan languages, a sub-group of the Uto-Aztecan family of American Indian languages. Within Cupan, Cupeño and Cahuilla are most closely related.
Cupeño tradition, as related in the history of Kisily Pewik, maintains that the founders of the tribe were a lineage of the Mountain Cahuilla who had moved south from the area around Soboba. Evidence from an examination of the language and social organization of the groups suggests that the tradition is correct, and that these events happened eight hundred to a thousand years ago.
Once established in their new villages, the Cupeño began founding a new tradition rooted in the Cahuilla tradition but changed by an intricate interaction with the peoples around them. They intermarried widely, and many Cupeño belong to lineages that claim to have Luiseño, Diegueño, and Cahuilla ancestry.
From the Cahuilla they maintained the complex social organization of exogamous moieties, patrilineal clans, and ceremonial exchange “parties.” From the Luiseño they acquired some of the rituals of the Chinigchinich religion, a religion of moral and spiritual rigor based on self-discipline, a concept of an ethical life and ecstatic visions of the nature of the world. This was added to the older complex of funerary rituals and the eagle ceremony.
They intermarried, exchanged ceremonies, and fought with the Hokan-speaking Diegueño to the south. This new world of the Cupeño was constructed in intimate contact with their lands, which were small but rich enough in natural resources and beauty to sustain a complex way of life for the villages.
When the Spanish, Mexican, and American settlers came to California, the Cupeño experience of being at a cultural crossroads was further complicated. The first whites came to their valley around 1795, and the town of Cupa and its hot springs became a natural way station on any route from the south or east through the Arizona desert to the coast of California.
The Mission San Luis Rey and later Mission San Diego maintained outstations at Cupa until the secularization of the missions in 1834, and the Cupeño became Christians early in the nineteenth century, while still maintaining most of their traditional religion. They also learned agriculture, and from that time have been farmers as well as hunters and gatherers.
In 1840 the governor of California granted Cupeño lands “without prejudice to the indigenes” to a Mexican citizen, Jose Antonio Pico, who wrote a statement dated August 9, 1840, that “the indigenes cede to me all the rights with which they are invested, solely because I place my residence by their side, in order to cooperate in the care of the few interests which they have for their subsistence. They ask through me for their emancipation, so that they may be able to take up with freedom their labors for the support and benefit of their families.”
Pico’s efforts to establish himself apparently failed, and in 1844 the ranch was granted to Juan Jose Warner, an American who had Hispanicized his name. Warner’s grant from Governor Alvarado unfortunately did not mention the Indians, and referred to the land as “vacant and abandoned,” evidently in reference to the buildings which had been built by the Indians under the supervision of Franciscan fathers from Mission San Luis Rey.
The terms of Warner’s grant were to prove fatal for the Cupeños’ rights to the land. Many Indians were employed on Warner’s ranch, but they were not placid. In 1851, the chief of the Kavaly lineage, Antonio Garra, led a revolt against Warner’s oppressive regime. This revolt became known as the Garra Uprising. It was one of many efforts by California Indians to repel the Americans, or at least to convince them that the Indians were capable of defending their rights in the land.
Inevitably, the revolt was put down, and Garra and many of his followers were executed. The Cupeño village was burned in retaliation for the burning of Warner’s buildings, and after that time the Indians lived in the buildings abandoned by the missions.
Perhaps discouraged by the revolt, Warner abandoned his properties, but only after he had cleared his title in the courts. Eventually the land became the property of John G. Downey, and in 1893 his family sued for the removal of the Indian “interlopers.” The Cupeños fought the eviction all the way to the United States Supreme Court, but the court ruled in favor of the Downey family on May 13, 1901.
The Cupeño were finally moved to a small reservation at the Luiseño village of Pala in May of 1903. There were no houses there for the new residents, and the Cupeños remember sleeping in the open, tortured by the insects and dampness of the unfamiliar coastal valley. Furthermore, the new reservation lacked the rich religious associations and traditions of clan ownership that the old lands had held. For a people who had invested so much spiritual importance in their relationship to the land, the removal was devastating.
Over a century later, the Cupeño continue to mourn their loss. However, like the orphaned Kisily Pewik who returned and claimed the lands of his father, the Cupeños are continuing to embrace their traditional cultural heritage through events such as the annual Cupa Days celebration, and through classes, activities, and research at the Cupa Cultural Center.
Logon to your Sumdog account and practice your spellings.
Today, in the English lesson we are going to do a bit of science! The weather elf wants to make an umbrella. Can you help the elf figure out which material would be the most suitable? It will need to be waterproof and strong. There are some suggestions below, but you could choose the materials that you would like to test. When you have collected the materials, you need to decide how you can test them. You will need to do two tests: one to test how strong each material is and one to see if it is waterproof. Remember, you need to do exactly the same thing to each material. When you are done, decide which material is the best for making an umbrella. After that, can you write a set of instructions to explain to the weather elf how to do the science experiment? Remember to use time conjunctions, bossy (imperative) verbs and commas in a list.
Logon to your Numbot account.
Today we are going to practise dividing by 2, 5 and 10. Remember that you can use practical objects or times tables to help you. Look carefully at the question to check whether you are dividing by 2, 5 or 10.
Read and complete the questions.
Click on the link below to listen to different types of music all about space.
Have a go at making your own optical illusion. In the example below they use an old CD. You could try using card or cardboard.
You can find examples of how to write an autobiography in the stories of sports figures, great religious leaders, government officials, doctors, railroad workers, singers and actors, along with ordinary people who found meaning in their lives. Choose a category or person that inspires you, and read several examples of how great life stories are shared with the public.
The inciting incident is the pivotal moment in your story, where you realized your desire line. It could be a seemingly small moment, such as a brief fight with your mother, that becomes a major moment or inciting incident in your story. For example, your brief fight with your mother could be the last time you speak to her before she passes away and leaves you letters about her life in Poland. Think of the "aha" moment in your story when you realized what you wanted in your life, or where you realized you were wrong in your assumptions about a specific moment or event.
Writing an autobiography for a high school or a college English class can help you gain a deeper sense of personal identity. An autobiography allows you to tell your story in a way that reveals truths about your values, goals and dreams. Though there's no exact science to writing an autobiography, you should include information about your background, major events that shaped who you are and any core themes in your life. Include specific examples to help readers understand your life history.
Functional Behavior Assessment
Functional Behavior Assessment (FBA) is a method for assembling information that can be used to maximize the success and efficiency of a behavior support plan. When a student’s actions and behavior disrupt classroom lessons and activities, teachers often deal with the problem by imposing procedures that follow the misbehavior, such as detention, suspension, and verbal reprimands. Studies have shown that this method fails to teach the student satisfactory alternative behaviors. The student may respond to the consequences for the moment, but in many instances, what has been taught is confusion and frustration.
The logic behind conducting a Functional Behavior Assessment is to understand why, where, and how the problem behaviors occur. If we study the pattern of the behaviors and know where and when they are likely to occur, we can put positive strategies in place to teach new behaviors. Students learn to misbehave or behave in ways that meet a need or that result in a preferred outcome. Students will alter their behavior only when it is apparent that a different response will more efficiently and effectively result in a desired outcome.
The assessment process addresses the issues and problems of the behavior and may involve several methods of collecting information. There are three strategies for collecting information in the assessment process: interviews with significant persons; systematic manipulation of environmental conditions (structural or functional analyses); and direct observation of behavior. As noted by O’Neill et al. (2014), there are five main outcomes of the Functional Behavior Assessment. The first describes the behavior. The second defines potential ecological and setting events. The third defines the immediate antecedent events for occurrences and non-occurrences of the problem behavior. The fourth identifies the consequences or outcomes of the undesirable behaviors that may be maintaining them. The final outcome defines the efficiency of the undesirable behaviors.
Prior to implementation of the process, my experience from completing and evaluating the workbook came from learning the behavior pattern of the child. With the triggers for the behaviors already known, we will evaluate the behavior pattern at the end of the intervention to establish whether any change has occurred. The behavior of the student will then be evaluated after the process has run its course. Post-intervention information will be obtained from both home and school settings on the behavior the student exhibits after the events that trigger the behavior occur.
Behavior development is an activity that takes time. As noted by O’Neill et al. (2014), behaviors are acquired over a period of time. All of the strategies will be aimed at increasing affection and attachment to friends, parents, and even teachers.
Positive results of the plan would be no reported cases of screaming and yelling behaviors, along with emotional attachment to peers, teachers, and parents. The child will be able to accept instruction from others without feeling agitated. If successful results are achieved, a follow-up mechanism will be put in place. Overall, the Functional Behavior Assessment is a great tool for maximizing the success and efficiency of a behavior support plan.
O’Neill, R., & Albin, R. (2014). Functional assessment and program development for problem behavior: A practical handbook (3rd ed.). |
This post is the final post in a three-part series based on my learning from the book Reading Nonfiction as well as recent PD that I attended that was led by Kylene Beers and Bob Probst. In the first post, “How do we take them further?” I talked about those 3 questions that should guide our thinking when reading nonfiction:
- What surprised me?
- What did the author think I already knew?
- What changed, challenged, or confirmed what I knew?
When we get our students to think about these three questions as they are reading nonfiction, they will notice more, question more, and dig deeper into the text.
In last week’s post, “Defining Nonfiction” I wrote about how we define nonfiction. I first shared a word cloud based on our own definitions of nonfiction reading:
But we then transitioned to a much deeper definition of nonfiction:
Nonfiction is that body of work in which the author purports to tell us about the real world, a real experience, a real person, an idea, or a belief.
To be able to truly dig deeply into a nonfiction text, we must understand the author’s purpose. In this post I’m going to be sharing with you a couple of the signposts that Beers and Probst recommended as a starting point to really get our students thinking about the author’s purpose.
The first signpost that Beers and Probst shared with us specifically during our PD was the concept of Contrasts and Contradictions. For those of you who have been using the Notice and Note to teach fiction reading strategies, this one should sound familiar. In fiction you look for things that the characters do that contrasts or contradicts what you might expect. In nonfiction we should notice if the author shows us “a difference between what you know and what is happening in the text, or a difference between two or more things in the text.”
Think about it for a second. If you are reading a news story, and it contradicts something that you have seen in a different story, or something that you believe you already know, that is going to give you pause. When you stop to think about those differences, you might come to the conclusion that an author is trying to change your opinion – this is a hint of what the author’s purpose might be. Remember, our students can’t just think of nonfiction as not fake. Our students have to have that questioning stance so that they can be a bit skeptical of the opinions being shared.
Once we recognize the signpost for contrasts and contradictions, students then need to take it a step further – just noticing the signpost doesn’t get the level of inquiry we want. Next we need our students to ask themselves a question about that signpost. I love the chart on page 121 of Reading Nonfiction because it shows anchor questions for different levels of students, or questions that could be asked in the content areas. The most basic anchor question for this signpost would simply be “What does this make me wonder about?” while a deeper anchor question might be “What is the difference and why does it matter?”
The other signpost that Beers and Probst said was so important in finding the author’s purpose was Extreme or Absolute Language. This is defined as language that “leaves no doubt about a situation or an event, allows no compromise, or seems to exaggerate or overstate a case.” Virtually any statement that includes the words all or none would be an example. It seems this year that you can’t listen to a political speech or read an article about the presidential race that doesn’t include some form of extreme or absolute language. The extreme language can range from obvious and probably harmless to the subtle and potentially dangerous. Take the following 3 statements that many of us may have heard at some point:
- It’s freezing out there!
- You have to let me go to that party! Everyone is going to be there!
- Simply stated, we know that Saddam Hussein has weapons of mass destruction.
As you can see, the first statement is probably pretty harmless, the second might give the parent of a teenager pause to think about whether or not it is appropriate for their child to attend the party, but that last one is an example that led to the loss of many lives and history has come to show us it was not accurate. We need our students to understand that when they encounter language that is extreme or absolute, they need to “be alerted either to the strength of the author’s feelings or to the possibility that the writer is exaggerating and may even be deceiving or misleading the reader.”
Just like with any other signpost, simply noticing it is not enough. We need to continue to remind our students to stop and think about the anchor question. Just like with the contrasts and contradictions signpost, the most basic anchor question is “What does this make me wonder about?” while a deeper version could be “Why did the author use this language?” Again, you can see content specific anchor questions on the chart on page 121.
Between teaching our students about the importance of a questioning stance when reading nonfiction, a true and accurate definition of nonfiction, and at least a couple of the signposts, our students will have the tools they need to be able to read deeply, think deeply, and understand an author’s purpose. It is so important for our students to develop these skills not only so that they will be successful in school, but also so that they can be productive members of a democratic society. If we don’t teach our students to have a questioning stance, they will believe whatever they see on the news, no matter whether it is ABC, CBS, NBC, CNN, Fox News, MSNBC, or any of the other multitude of news outlets that are out there. I love this following quote from page 32 in Reading Nonfiction:
“Far more important than the ability to capture a teacher’s information and thoughts is the ability to acquire information on one’s own, to test ideas against one another, and to decide for one’s self what notions have merit and which should be rejected or abandoned.”
We need thinkers who can listen to political speeches and read political writings and decide who will best serve their needs. We need students who can look at the writings of a so-called nonprofit and decipher if a donation will be used in a meaningful way. Instead of accepting what they are told, our students “need to develop intellectual standards that open them up to new possibilities and challenging ideas and give them the courage and resilience to change their minds when they see persuasive reasons to do so.”
Share with us your thoughts on the importance of nonfiction reading. Why do you feel understanding nonfiction is important for our students? What have you noticed about students’ thinking as you push out a questioning stance and the nonfiction strategies? Let us know about them in the comments below! |
What is the ADA?
“The Americans with Disabilities Act (ADA) is a federal civil rights law that gives protections to individuals with disabilities similar to those provided to individuals on the basis of race, color, sex, national origin, age, and religion. It guarantees equal opportunity for individuals with disabilities in public accommodations, employment, transportation, State and local government services, and telecommunications.”
Essentially the ADA provides the public with protection against discrimination, and provides equal employment opportunities.
How does the ADA define a disability?
Under the ADA a person with a disability is described as someone who has a physical or mental impairment that substantially limits one or more major life activities, has a record of such impairment, or is regarded as having such impairment.
2011 – New Regulations:
In the past it has been questionable whether celiac disease, gluten intolerance, and other disorders were considered disabilities under the ADA. In March of 2011, a new set of guidelines was put in place that expanded the definition of disability:
“The ADAAA expanded the definition of disability by introducing a new, non-exhaustive list of major life activities that include: caring for oneself, performing manual tasks, seeing, hearing, eating, sleeping, walking, standing, lifting, bending, speaking, breathing, learning, reading, concentrating, thinking, communicating, and working. Also, for the first time, the ADAAA has stated that major life activities will include the operation of major bodily functions, including but not limited to functions of the immune system; normal cell growth; and digestive, bowel, bladder, neurological, brain, respiratory, circulatory, endocrine and reproductive functions.”
Celiac disease is also considered an “invisible disability” under the ADA. “Invisible disabilities” is an umbrella term that captures a whole spectrum of hidden disabilities or challenges. Celiac disease, food allergies, and other intolerances are all considered invisible disabilities.
Family Medical Leave Act:
Celiac disease is considered a “chronic and serious” health condition, and therefore it is covered under the Family Medical Leave Act. What does this mean? Essentially, it means that with a doctor’s note there is a specific set of rules that prevents a celiac patient from losing their job if an extended period of time off is needed for celiac-related reasons. If you live in Oregon, click this link for more information: http://arcweb.sos.state.or.us/rules/OARS_800/OAR_839/839_009.html
Students With Celiac Disease:
School can be especially difficult for anyone with celiac disease. Luckily, students with disabilities are covered under Section 504 of the Rehabilitation Act of 1973.
What is Section 504?
“Section 504 of the rehabilitation act of 1973, a federal civil rights statute, is designed to prohibit discrimination on the basis of a disability in an educational program or institution. This prohibition extends to any educational institution accepting federal funds. Students with disabilities under this act are afforded accommodations and modifications to their educational program to ensure equal access.”
Essentially, all public schools and any federally funded programs must provide students with disabilities equal access to the same programs and services as those who are not disabled.
What about colleges?
Colleges are required to abide by Section 504. The section states that if a school accepts federal funding then it must abide by the 504 act and therefore must make any necessary accommodations, which in turn means that schools (even colleges) must provide equal programs and services to all students.
Currently there are colleges across the nation that have begun to accommodate gluten-free needs. Here in the Willamette Valley, both Oregon State and the University of Oregon have implemented gluten-free menus.
How do I file for a 504 plan?
Documentation requirements vary by state; however, school officials will usually require proof of diagnosis, an explanation of how celiac disease affects diet, and an explanation of how it may adversely affect a person in an educational setting.
There are several resources on the web that give detailed instructions on developing a 504 plan that works for you or your child:
If further assistance or information is needed on this topic you can contact the Americans With Disabilities Act for more information. The ADA may provide you with a caseworker that can help you with your specific situation. |
Knowledge over time
We know that students do not acquire new knowledge in one lesson. We also know that students learn the most effectively when there is a balance between learning surface and deep knowledge and a balance between new content and deliberate practice of learnt material.
This is the knowledge we use when approaching curriculum planning and delivery. Lessons are designed in a sequence with opportunities to regularly review. Underpinning our lesson planning is the view that student progress is knowing more and remembering more.
In order to aid student memory and understanding, we deploy the following in our teaching:
- Spaced rather than massed practice – Spacing out study sessions for particular units rather than trying to “cram” information.
- Interleaving – Related to spaced practice we incorporate interleaving whereby students are reminded of previous learnt material whilst acquiring new information.
- Testing – We use testing, especially low stakes and regular testing of key knowledge to “interrupt the forgetting” and to aid retention of new knowledge.
- Deliberate Practice – Giving students plenty of time and opportunities to practise new knowledge, allowing an element of “over-learning” so that recall and application of new knowledge become second nature.
- Feedback and Reviewing – Through written and verbal feedback, along with regular chances for students to review their work, we help students identify misconceptions and improve on their previous understanding.
These are techniques which have been proven to have a positive impact on students’ ability to retain and understand newly learnt material. In addition, these techniques foster students’ abilities to “think about their thinking” and to act meta-cognitively. |
Opener: Philosophy Phriday! The term "mob mentality" refers to the changes in the way people behave when they are in large groups (versus alone or in smaller groups). It is a negative term. An example of "mob mentality" is when people trample other people to death during sales at big-box stores -- in such cases, none of the people who did the trampling were particularly murderous people, but they behaved differently in a crowd. Why do you think people's behavior changes so drastically in a crowd? What do you think people can do to help stop this kind of behavior?
Handouts in Class: Lord of the Flies Mid-Unit Mini-Essay, Essay Outline
Work Assigned and Collected:
a. You got the mini-essay assignment
b. I reviewed how to organize an essay
c. You handed in your openers
1. Read chapters 9-10 by Monday, 9/12 |
Star-nosed Moles are the only member of the tribe Condylurini and the genus Condylura. These moles are among the most interesting creatures in existence, with some unique qualities and abilities. These virtually blind mammals have earned their name from their star-shaped nose, which works as a sensory organ.
Find out here the description of this amazing species.
Color: The Star Nosed Moles have a blackish brown appearance.
Size: They can grow between 15 and 20 cm in length.
Weight: The average weight of an adult Star Nosed Mole is 55 g.
Body: Their body is covered in black-brown water-repellant fur. They have a long tail.
Legs: These moles have four large legs covered in scales.
Head: They have a pair of beady eyes that are not developed enough to see, making these mammals nearly blind. They also have 44 teeth.
Nose: The most unique feature in the appearance of these mammals is their nose which is larger than other mole species. They have 11 pairs of fleshy pink tentacles at the end of their snout that make the nose look like a star.
The main distribution range of these animals extends to the north-eastern US and eastern Canada. They are found in places like Labrador, Quebec, Minnesota, Indiana and South Dakota. These mammals can also be found along the Atlantic Coast and in the Appalachian Mountain area. The Atlantic Coast range extends to south-eastern Georgia.
They are semi-aquatic animals preferring low wet areas. The tunnels of these moles often lead below water surface. They can be found in wet meadows, marshes, banks of streams, lakes and ponds.
These small mammals feed on various small invertebrates and fishes, including ants, worms, beetles, mollusks and snails.
These mammals have a very interesting behavior pattern:
The nose is the principal sensory organ of the Star Nosed Moles as they cannot see. The tentacles covering the edges of the nose contain 25,000 highly sensitive touch receptors (Eimer’s Organs). German zoologist Theodor Eimer was the first person to observe these receptors in the European moles. Other species of moles also have the Eimer’s Organs, but in fewer numbers.
These completely hairless tentacles help them in hunting. They identify insects, invertebrates or any other consumable substance around them with the help of these tentacles. They hunt by touching their prey. It takes them very little time to decide if a substance is edible. The whole hunting process takes an average of 230 ms (milliseconds – thousandths of a second) to complete once they find their prey.
The breeding season starts in mid-March and continues through April. They are known to reproduce once every year, but the females may reproduce a second time if their first litter is unsuccessful. One litter may contain 2–7 offspring.
These mammals are born with closed eyes, ears and folded tentacles. These organs become functional after two weeks. The young creatures become independent 30 days after their birth. It takes 10 months for a young mole to reach full maturity.
The exact lifespan of the members of this species is unknown. But they live approximately 3-4 years in the wild.
Many raptors, mammals and reptiles, including hawks, owls, skunks, weasels, minks and snakes, prey on these moles.
Their noses are their best adaptive feature, helping them to survive in the wild:
They are given the ‘Least Concern’ status by the International Union for Conservation of Nature and Natural Resources (IUCN). It means that there are no immediate threats to their existence.
These moles are one of the strangest looking creatures in the world. But in some ways, they are more capable than even a human. They may look ugly to some people, but they are undoubtedly one of the most interesting creatures in the world. |
Language Access in Clear Communication
- The emerging, growing field — and movement — of cultural respect is closely linked to the effort to reduce health disparities, differences between groups of people that may negatively affect individual access to quality health care. As a strategy, cultural respect is designed to improve quality and eliminate disparities in health care.
- Health literacy — the ability to understand and communicate health information — is dependent on culture, context, knowledge, key skills, and many other factors. Developing health information at the appropriate literacy level and targeted to the language and cultural norms of specific populations helps promote health literacy.
- Plain language — clear, concise and well organized writing — is one strategy for developing and communicating health information. Plain language makes it easier to understand and use health information.
The NIH Clear Communication program serves to promote meaningful access to high-quality care for the broad public spectrum, taking into account patient values, beliefs, and behaviors, and the social, cultural, and linguistic needs of culturally diverse patients. One core, common thread among these concepts and strategies is patient centeredness, including communications in the language with which a patient feels most comfortable, especially when discussing or reading medical or health care information. This is called the individual's preferred language.
Language Access at NIH
Language can be a clear, profound barrier to health literacy. Language barriers and the inability to read or understand health information can pose serious health risks to individuals with limited English proficiency (LEP). Language is therefore a critical component of any effort to improve communication and access to quality healthcare for patients, their family members, caregivers, and friends.
Challenges to removing language barriers include the following:
- Often, there is no right or wrong in translating certain concepts and words;
- Some words and ideas, especially complex or technical ones, may defy simple translation, making comprehension difficult;
- There is great diversity and variation in the language skills and abilities of individuals, including translators and interpreters; and
- Context — geographic and cultural, for example — is often the most important component in health communication.
To improve access for individuals with limited English proficiency, the NIH has formulated — and is implementing — an agency-wide Language Access Plan (LAP). The goal of the plan is to improve access for eligible LEP persons to many of the agency’s public programs and activities. The focus of the LAP is to provide for communications in the preferred language when a patient has limited English proficiency.
Language Access is integral to the NIH’s commitment to the development of accessible and effective health, science, and medical information for broad public dissemination.
The NIH Language Access Program is coordinated by the Office of Equity, Diversity, and Inclusion. For more information, please visit www.edi.nih.gov/consulting/language-access-program/about.
Trans-Government Language Access Planning
On August 11, 2000, the President signed Executive Order 13166, "Improving Access to Services for Persons with Limited English Proficiency." The Executive Order requires Federal agencies to examine the services they provide, identify any need for services to those with limited English proficiency (LEP), and develop and implement a system to provide those services so LEP persons can have meaningful access to them.
Information about government-wide language access planning and programs, including Executive Order 13166, Title VI of the Civil Rights Act of 1964 (Title VI), and Title VI regulations regarding language access, is online at www.lep.gov/faqs/faqs.html.
The Language Access Plan of the U.S. Department of Health and Human Services is online at www.hhs.gov/sites/default/files/open/pres-actions/2013-hhs-language-access-plan.pdf.
Translation and Interpretation Contract
The National Institutes of Health offers a multiple award Translation and Interpretation Contract, designed to help the agency provide meaningful access to individuals who do not speak English as their primary language. Available task areas include translation of written materials; oral language assistance; and translation of digital information and web content. Contractors are:
- Ad Astra HHSN263201700005I
- CommGap HHSN263201700007I
- Kramer Translation HHSN263201700004I
- TransGlobal HHSN263201700006I
This page last reviewed on December 1, 2017 |
How can 3D printing be utilized to better understand chemistry? The concept of creating molecular models to serve as instructional aids has certainly been around a long time. Students of organic chemistry are often encouraged to purchase molecular modeling kits that can be used to build geometric renderings of various chemical structures. However, traditional ball-and-stick kits are limited with regard to how accurately the components can be pieced together to depict the true geometric form of a molecule. Perhaps there is a better way.
Low-cost 3D printing represents a powerful new tool that can be used by science educators and their students to create realistic, tangible models of chemical structures. 3D-printed molecular models are capable of doing far more than illustrate the atoms and bonds that make up a molecule. The models can often depict subtleties about the chemical structure that are difficult to discern using traditional modeling kits. 3D-printed models can even inform us about chemical reaction pathways—illustrating, for example, how different chemical entities fit together and interact in a three-dimensional fashion.
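As a rough illustration of how a molecule's coordinates can become a printable object, the minimal sketch below generates an OpenSCAD script for a simple space-filling model of water; the molecule, van der Waals radii, and print scale are illustrative assumptions rather than the specific workflow used by any particular course or lab.

```python
# Minimal sketch: turn atomic coordinates into an OpenSCAD script for a
# space-filling (CPK-style) model that can be rendered and exported to STL
# for 3D printing. Molecule, radii, and scale are illustrative assumptions.

# Approximate van der Waals radii in angstroms
VDW_RADII = {"H": 1.20, "O": 1.52, "C": 1.70, "N": 1.55}

# Water: (element, x, y, z) coordinates in angstroms
WATER = [
    ("O", 0.000, 0.000, 0.000),
    ("H", 0.757, 0.586, 0.000),
    ("H", -0.757, 0.586, 0.000),
]

MM_PER_ANGSTROM = 10.0  # print scale: 1 angstrom -> 10 mm

def to_openscad(atoms, scale=MM_PER_ANGSTROM):
    """Return OpenSCAD source that unions one sphere per atom."""
    lines = ["union() {"]
    for element, x, y, z in atoms:
        r = VDW_RADII[element] * scale
        lines.append(
            f"  translate([{x*scale:.2f}, {y*scale:.2f}, {z*scale:.2f}]) "
            f"sphere(r={r:.2f}, $fn=64);  // {element}"
        )
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    with open("water_spacefill.scad", "w") as f:
        f.write(to_openscad(WATER))
    print("Wrote water_spacefill.scad; render it in OpenSCAD and export to STL.")
```

Rendering the resulting .scad file in OpenSCAD and exporting it to STL produces a mesh that most slicing programs will accept for printing.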
Students in the Department of Chemistry and Biochemistry at Stetson University, in collaboration with chemistry faculty mentors, have made use of a combination of software tools, web resources, and 3D printing to create several different types of chemical models. Simple ball-and-stick models of common chemical structures have been constructed. More realistic, space-filling models of organic compounds, crystal structures, proteins, and other molecular complexes have also been fabricated. Originally, 3D-printed molecular models were created by students as part of independent study and senior research experiences. More recently, instructors have incorporated 3D printing activities as part of the required chemistry curriculum. 3D printing activities of this type can often make difficult-to-learn chemistry concepts more accessible to students and can greatly enhance enthusiasm and motivation for learning abstract material. |
The Great Lakes’ water levels hit a record low this winter. According to hydrologist Drew Gronewold of the National Oceanic and Atmospheric Administration, “Water levels on Lake Superior, Michigan and Huron are and have been for the past 15 years below their long-term average.” He noted that the water levels before the 1990s followed changes in precipitation, but then a shift in evaporation rates over the lakes may have caused changes in surface temperature and ice cover.
These changes in water levels, along with a variety of other climate-related impacts to our nation’s energy system recently outlined in a report by the Department of Energy (DOE), are causing a variety of problems for Midwestern power plants.
Operators of Michigan’s Cloverland Electric Cooperative hydropower plant reported that low levels were allowing too much air into the system, causing a malfunction in turbine efficiency. The Director of Generation, Phil Schmitigal, explained that the air leaking into the tubes reduces head pressure and therefore power output.
Hydropower plants are not the only ones suffering, though. Plants using nuclear energy, coal, or gas to boil water into steam for generators use additional water to cool the steam for reuse. Although the height of the water is a concern for these plants, the bigger concern lies in the temperature. As water levels drop, water temperatures tend to rise as well, causing these types of power plants to take in warmer water rather than cooler – potentially decreasing efficiency, lowering output, raising costs, and damaging the environment due to hotter discharge.
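To see why warmer intake water matters, a back-of-the-envelope comparison against the idealized Carnot limit is enough to show the direction of the effect; the temperatures below are assumed values chosen only for illustration, not figures from the article or from any specific plant.

```python
# Illustrative only: the idealized Carnot limit shows how warmer cooling water
# lowers the theoretical ceiling on thermal-plant efficiency. Real plants
# operate well below this bound; the temperatures here are assumptions.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum possible efficiency of a heat engine between two temperatures."""
    return 1.0 - t_cold_k / t_hot_k

T_STEAM = 823.0        # ~550 C steam, in kelvin (assumed)
LAKE_COOL = 288.0      # ~15 C intake water (assumed)
LAKE_WARM = 293.0      # ~20 C intake water after warming (assumed)

for label, t_cold in [("cooler lake", LAKE_COOL), ("warmer lake", LAKE_WARM)]:
    print(f"{label}: Carnot limit = {carnot_efficiency(T_STEAM, t_cold):.1%}")
# Even a 5 C rise in cooling-water temperature trims roughly 0.6 percentage
# points off the theoretical ceiling, before any real-world losses.
```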
The DOE’s report states that current efforts may not be enough to reverse the climate conditions that are already affecting energy production and delivery in an aging and stressed U.S. energy system – and these impacts are expected to increase. Increased investment in innovative energy technologies, improved efficiency, and reduced water intensity for power generation is encouraged.
Source: Low Great Lakes levels raise concerns for Midwest power plants, Midwest Energy News
Related: Climate change threatens energy breakdowns, Iowa Energy Center |
In the many hypotheses surrounding autism, one posits it is the consequence of abnormal cell communication.
Researchers at UC San Diego recently did a study using a drug from 1916, suramin, which was approved for treating sleeping sickness. The findings in Translational Psychiatry were that it
restored normal cellular signaling in a mouse model of autism, reversing symptoms of the neurological disorder in animals that were the human biological age equivalent of 30 years old.
Robert K. Naviaux, MD, PhD, said one of the universal symptoms of autism is metabolic disturbances. "Cells have a halo of metabolites (small molecules involved in metabolism, the set of chemical processes that maintain life) and nucleotides surrounding them. These create a sort of chemical glow that broadcasts the state of health of the cell."
Cells threatened or damaged by microbes, such as viruses or bacteria, or by physical forces or by chemicals, such as pollutants, react defensively as part of the normal immune response, Naviaux said. Their membranes stiffen. Internal metabolic processes are altered, most notably in the mitochondria – the cells' critical "power plants" – and communications between cells are dramatically reduced.
This is the "cell danger response," said Naviaux, and if it persists, the result can be lasting, diverse impairment. If it occurs during childhood, for example, neurodevelopment is delayed.
"Cells behave like countries at war," said Naviaux. "When a threat begins, they harden their borders. They don't trust their neighbors. But without constant communication with the outside, cells begin to function differently. In the case of neurons, it might be by making fewer or too many connections. One way to look at this related to autism is this: When cells stop talking to each other, children stop talking."
Naviaux and colleagues have focused on a cellular signaling system linked to both mitochondrial function and to the cell's innate immune function. Specifically, they have zeroed in on the role of nucleotides like adenosine triphosphate (ATP) and other signaling mitokines – molecules generated by distressed mitochondria. These mitokines have separate metabolic functions outside of the cell where they bind to and regulate receptors present on every cell of the body. Nineteen types of so-called purinergic receptors are known to be stimulated by these extracellular nucleotides, and the receptors are known to control a broad range of biological characteristics with relevance to autism, such as impaired language and social skills.
In their latest work, Naviaux again tested the effect of suramin, a well-known inhibitor of purinergic signaling that was first synthesized in 1916 and is used to treat trypanosomiasis or African sleeping sickness, a parasitic disease. They found that suramin blocked the extracellular signaling pathway used by ATP and other mitokines in a mouse model of autism spectrum disorder (ASD), ending the cell danger response and related inflammation. Cells subsequently began behaving normally and autism-like behaviors and metabolism in the mice were corrected.
However, the biological and behavioral benefits of suramin were not permanent, nor preventive. A single dose remained effective in the mice for about five weeks, and then washed out. Moreover, suramin cannot be taken long-term since it can result in anemia and adrenal gland dysfunction.
Still, Naviaux said these and earlier findings are sufficiently encouraging to soon launch a small phase 1 clinical trial with children who have ASD. He expects the trial to begin later this year.
"Obviously correcting abnormalities in a mouse is a long way from a cure in humans, but we think this approach – antipurinergic therapy – is a new and fresh way to think about and address the challenge of autism.
"Our work doesn't contradict what others have discovered or done. It's another perspective. Our idea is that this kind of treatment – eliminating a basic, underlying metabolic dysfunction – removes a hurdle that might make other non-drug behavioral and developmental therapies of autism more effective. The discovery that a single dose of medicine can fundamentally reset metabolism for weeks means that newer and safer drugs might not need to be given chronically. Members of this new class of medicines might need to be given only intermittently during sensitive developmental windows to unblock metabolism and permit improved development in response to many kinds of behavioral and occupational therapies, and to natural play." |
Mary Edmonia Lewis was a talented American sculptor of African/Haitian and Ojibwe heritage.
(July 4, 1844 – September 17, 1907) She is credited as the first African Native American female sculptor in the U.S., and she gained fame and recognition as a sculptor in the international fine arts world. Lewis was inspired by the lives of abolitionists and Civil War heroes.
Her father was Haitian of African descent, while her mother was of Mississauga Ojibwe and African descent. Lewis’s mother was known as an excellent weaver and craftswoman. Lewis was nicknamed “wildfire” by her mother’s Native community, the Ojibwe. Her family background inspired Lewis in her later work.
Mary E. Lewis faced harsh criticism and was accused of several crimes at Oberlin, including the theft of paintbrushes, reported by her art teacher, and even the poisoning of two female students, who apparently drank tainted wine served by Lewis. Although she was not convicted of either crime, the school revoked her chances of graduation.
In 1863, Edmonia Lewis found friendship with abolitionist William Lloyd Garrison. Through Garrison, she was introduced to Edward Brackett, who mentored her in her craft. She would become one of the most famed artists in Boston. Her first creations were medallions with portraits of white anti-slavery leaders and heroes of the Civil War. The replicas of her 1865 bust of Robert Gould Shaw, leader of a Black Civil War battalion, earned her enough money to travel abroad and study in Rome. The bust is now owned by the Museum of Afro-American History in Boston.
Using inspiration from the Emancipation Proclamation, Edmonia Lewis would make her masterpiece and best-known sculpture, called “Forever Free,” in 1867. Then, ten years later, the art world would praise her piece called “The Death of Cleopatra” because it showed a strong, powerful Cleopatra after death, unlike other artists’ depictions, which made her look weak. The piece is held by the National Museum of American Art in Washington, D.C.
As an artist at war, Lewis was a rare instrument for social change in the aftermath of the Civil War. Emerging in “the Athens of America,” then heading for Rome, she pressed her case for equality with help from the Republican press. Among her greatest achievements, she became the only artist of color invited to exhibit at the 1876 Centennial. She created a sensation, regularly appearing in person with her marble “Death of Cleopatra” and several other works while most artists left their work unattended.
Mary Edmonia Lewis lived in France and passed away in London, UK, where she is believed to have lived for some time after leaving France.
“My mother was a wild Indian, and was born in Albany, of copper colour, and with straight, black hair. There she made and sold moccasins. My father, who was a negro, and a gentleman’s servant, saw her and married her.” ~Edmonia Lewis (c.1844 – c.1907)
Note: African Native American or Afro Native? People who call themselves “Black Indians” are people living in America of African-American descent, with significant heritage of Native American Indian ancestry, and with strong connections to Indian Country and its Native American Indian culture, social, and historical traditions. Black Indians are also called African Native American people, Black American Indians, Black Native Americans and Afro Native Americans. |
Anne Marie Albano, Ph.D.
Associate Professor, Clinical Psychology in Psychiatry
Director, Columbia University Clinic for Anxiety and Related Disorders,
Columbia University Medical Center
Social anxiety disorder (SAD), or social phobia, can have a crippling effect on young people. Children who avoid raising their hand or speaking up in school can become tweens who withdraw from extracurricular activities, and then teens who experience isolation and depression. In fact, children with social anxiety disorder are more likely than their peers without SAD to develop depression by age 15 and substance abuse by age 16 or 17.
As they head toward adulthood, young people with social anxiety disorder tend to choose paths that require less involvement with other people, and so cut short a lot of opportunities. Bright, intelligent young people who have yearnings to be lawyers or doctors, but cannot interact with other people, may choose a profession or work that is very solitary; or they might not enter the work force at all.
Understanding that social phobia is a gateway disorder to depression, substance abuse, and lifetime impairment, we must make it a priority to identify it when children are younger. If we can reach children in the early stages of the disorder, we can provide them basic skills to help them manage their feelings and increase their ability to interact with people.
Parents play an important role in identifying and helping children overcome social anxiety. Learning to distinguish a shy child from one with social phobia, and understanding how parents can empower—rather than enable—children with social anxiety will help our children live full, socially rich lives.
Recognizing the “silent disorder”
Social anxiety disorder is sometimes called a silent disorder because it can affect children for years before it is diagnosed. As children grow and mature, they learn how to avoid being the focus of attention at school or home; as a result, their extreme discomfort in social situations can go unnoticed.
Because children with social phobia are generally content and compliant around home, and because parents do not receive reports of misbehavior at school, many families fail to recognize a problem until their child is already withdrawn from activities and peers. By this point, the child may be experiencing extreme isolation and falling behind developmentally and academically.
Sometimes social phobia goes undiagnosed because parents confuse it with shyness. Shyness is a temperament; it is not debilitating the way social anxiety disorder is. A shy child may take longer to warm up to a situation, but they eventually do. Also, a shy child engages with other kids, just at a different level of intensity than their peers. In contrast, children with social phobia will get very upset when they have to interact with people. It is a frightening situation for them, and one they would rather avoid altogether.
Understanding the warning signs
The average age of onset is 13 years, but you can see social phobia as early as 3 and 4 years old. In young children, it may take the form of selective mutism, meaning that the child is afraid to speak in front of other kids, their teachers, or just about anyone outside of the immediate family.
In elementary school, children with social phobia may start to refuse activities and you see kids dropping out of Scouts or baseball. By middle school, they may be avoiding all extracurricular activities and social events. And by high school, they may refuse to go to school and exhibit signs of depression. (Read about SAD in children and adolescents.)
Parents can help prevent social phobia from taking hold by being attuned to warning signs and symptoms. These questions highlight warning signs:
- Is a child uncomfortable speaking to teachers or peers?
- Does he or she avoid eye contact, mumble or speak quietly when addressed by other people?
- Does a child blush or tremble around other people?
- Does a young child cry or throw a tantrum when confronted with new people?
- Does a child express worry excessively about doing or saying something “stupid”?
- Does a child or teen complain of stomachaches and want to stay home from school, field trips or parties?
- Is he or she withdrawing from activities and wanting to spend more time at home?
If a parent observes these signs, a doctor or mental health professional can help evaluate the child and determine if the disorder is present.
Understand parents’ role
For most young people, social phobia is successfully treated with therapy and sometimes medication. Additional support and accommodations at home can support recovery. For example, we know that some parents unknowingly contribute to a child’s condition by protecting them from situations that cause discomfort. If a teacher says “hello” and asks a child his or her name, the parent may answer: “His name is John. He’s a little shy.” The parent is stepping in to make the situation less stressful for their child, but a simple act like that can exacerbate the disorder because it does not help the child learn to manage the feelings and anxiety such an interaction invokes.
We need parents to take a look at themselves and how they are helping their child navigate their way into these sorts of everyday social interactions, rather than avoiding or going around them. Parents can be sensitive to the anxiety these situations cause without isolating their children from them. With the help of professionals, parents can learn to be exposure therapists, encouraging and supporting a child through the social situations that cause anxiety. (See how one teen overcame social anxiety disorder with the support of her mother and exposure therapy.)
The important thing to remember about social anxiety disorder is that there are effective ways of turning this around. Anxiety is a natural emotion and we all have the ability to harness it; some kids just need extra help developing those skills. But when they do learn these skills, it is so heartwarming to see how their world opens up and their lives improve. It is what has kept me working in this field for almost 30 years.
- What intervention would have helped you as a child in dealing with social anxiety?
- How can we educate parents about social anxiety disorder so they can help their kids to be diagnosed and treated?
- What should pediatricians, schools and community institutions do to support parents in knowing about SAD and how to help their kids? |
Online safety tips for learners
by David Brasch
Major advances in technology over the last three decades have significantly changed how we communicate with each other, especially through the Internet. Today, children are surrounded by technology. From a young age, most learn about how to use the Internet and end up more tech-savvy than many of the adults in their lives. However, being tech-savvy and knowing how to use the Internet doesn’t always mean that children understand how to stay safe online.
It’s important for children to understand the potential impact of their online activity. They need to know how to stay safe when using the Internet for school and personal reasons. Here are a few ways to help learners stay safe when using the Internet.
Avoid sharing personal information
Learners should avoid sharing personal information like their home address, phone number, email address, and other important information in public areas on the Internet. Places like social media or chat rooms, even the comment section of a YouTube video, are all public areas that everyone on the Internet has access to. Express to learners that when they are interacting with people online, they should avoid sharing their personal information for security reasons, even if they think they’re only sharing the information with someone they know. Recommend that the learner instead pick up the phone and call when they need to provide personal information to a teacher, fellow student, friend, or family member.
Never share passwords
Online passwords were created to lock up important information and keep it safe. It’s critical that learners keep their passwords private and never share them in public places on the Internet. Cyber security experts even recommend that passwords not be shared with family and close friends because, unknowingly, they could in turn share them with the wrong people. Sharing passwords could result in issues like identity theft, which can take years to resolve. Overall, learners should be taught to keep passwords private and secure.
Use email wisely
Learners should always talk with their parents before opening an email attachment or clicking on a link within an email. Emails can be a place for viruses to enter and attack computers, and sometimes those viruses can steal personal information from the system. Teach learners to make sure that the email is coming from a trusted source – a teacher, friend, family member, or another educator – so they are able to judge the contents of an email they receive.
Use caution when installing software
Make sure learners talk with educators or their parents about the importance of asking permission before installing new software on their computer. Parents should review the software and verify that the source of the software is trusted and necessary. If unsure of the software’s trustworthiness, be sure to consult an IT expert in the community or school. Because some software is created by hackers with the sole purpose of attacking computers and gaining access to people’s personal information, it’s important to have a healthy level of caution.
Ignore offensive messages
Unfortunately, cyberbullying is a problem that many children face. Learners need to know that not everyone using the Internet has good intentions. Parents, counselors, and educators should communicate to students an important message about the support they can provide if a learner receives private messages or emails with offensive or insulting comments. Adults supporting the learner should be prepared to have open and honest conversations with learners about cyberbullying and support the learner through these experiences. Parents should speak with their learner’s teachers, counselors, and administrators for advice or to pursue further action, if necessary. It’s critical that learners avoid responding to negative comments, and it’s essential that the learner feel supported when facing cyberbullying.
About the Author
David Brasch is the IT Coordinator for Compass Charter Schools (CCS), and provides IT support to the CCS scholars, learning coaches, and staff. |
Thursday, June 11, 2009
NASA Cassini: Saturn's Approach to Equinox Reveals Never-before-seen Vertical Structures in Planet's Rings
In images made possible only as Saturn nears equinox, NASA's Cassini spacecraft has uncovered for the first time towering vertical structures in the planet's otherwise flat rings that are attributable to the gravitational effects of a small nearby moon.
The new findings are presented in a paper authored by Cassini imaging scientists and published today online in the Astronomical Journal.
The search for ring material extending well above and below Saturn's ring plane has been a major goal of the imaging team during Cassini's "Equinox Mission," the two-year period containing exact equinox -- that moment when the sun is seen directly overhead at noon at the planet's equator. This novel illumination geometry, which occurs every half-Saturn-year, or about 15 Earth years, lowers the sun's angle to the ring plane and causes out-of-plane structures to cast long shadows across the rings' broad expanse, making them easy to detect.
In recent weeks, Cassini's cameras have spotted not only the predictable shadows of some of Saturn's moons, but also the shadows of newly revealed vertical structures in the rings themselves. And these observations have lent dramatic support to the analysis presented in today's publication that demonstrates how small moons in very narrow gaps can have considerable and complex effects on the edges of their gaps, and that such moons can be smaller than previously believed.
The 8-kilometer-wide (5-mile) moon Daphnis orbits within the 42-kilometer-wide (26-mile) Keeler Gap in Saturn's outer A ring, and its gravitational pull perturbs the orbits of the particles forming the gap's edges. The eccentricity, or the elliptical deviation from a circular path, of Daphnis' orbit can bring it very close to the gap edges. There, its gravity causes larger effects on ring particles than when it is not so close. Previous Cassini images have shown that as a consequence, the moon's effects can be time-variable and lead to the waves caused by Daphnis to change in shape with time and with distance from the moon.
However, the new analysis also illustrates that when such a moon has an orbit inclined to the ring plane, as does Daphnis, the time-variable edge waves also have a vertical component to them. This result is backed by spectacular new images taken recently near equinox showing the shadows of the vertical waves created by Daphnis, and cast onto the nearby ring, that match the characteristics predicted by the new research.
Scientists have estimated, from the lengths of the shadows, wave heights that reach enormous distances above Saturn's ring plane -- as large as 1.5 kilometers (1 mile) -- making these waves twice as high as previously known vertical ring structures, and as much as 150 times as high as the rings are thick. The main rings -- named A, B and C -- are only about 10 meters (30 feet) thick.
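The height estimate itself rests on simple shadow geometry: with the sun only a fraction of a degree above the ring plane near equinox, a structure's height is roughly the shadow length times the tangent of the solar elevation angle. The short sketch below uses assumed, illustrative numbers rather than Cassini's actual measurements.

```python
import math

# Near equinox the sun sits only a fraction of a degree above Saturn's ring
# plane, so even a modest vertical structure casts a very long shadow.
# height = shadow_length * tan(solar_elevation). The values below are
# illustrative assumptions, not Cassini's actual measurements.

def structure_height_km(shadow_length_km, solar_elevation_deg):
    return shadow_length_km * math.tan(math.radians(solar_elevation_deg))

shadow_length_km = 170.0   # assumed shadow length measured in an image
solar_elevation_deg = 0.5  # assumed sun angle above the ring plane

height = structure_height_km(shadow_length_km, solar_elevation_deg)
print(f"Implied height: {height:.2f} km")
# ~1.5 km -- the same order as the wave heights reported at the Keeler gap edge.
```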
"We thought that this vertical structure was pretty neat when we first saw it in our simulations," said John Weiss, the paper's lead author and a research associate of Cassini imaging team leader Carolyn Porco, another co-author on the paper, in Boulder, Colo. "But it's a million times cooler to have your theory supported by such gorgeous images. It makes you suspect you might be doing something right."
Also presented in the paper published today is a refinement to a theory used since the Voyager missions of the 1980s to infer the mass of gap-embedded moons based on how much the moons affect the surrounding ring material. The authors conclude that an embedded moon in a very narrow gap can have a smaller mass than that inferred by earlier techniques.
One of the prime future goals of the imaging team is to scour the remaining gaps and divisions within the rings to search for the moons expected to be there.
"It is one of those questions that have been nagging us since getting into orbit: 'Why haven't we yet seen a moon in every gap?'" said Porco. "We now think they may actually be there, only a lot smaller than we expected."
Images showing the shadows cast by the vertical waves on the edges of the Keeler gap can be found at http://ciclops.org, http://saturn.jpl.nasa.gov and http://www.nasa.gov/cassini .
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory (JPL), a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA's Science Mission Directorate, Washington. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging team consists of scientists from the U.S., England, France, and Germany. The imaging operations center and team leader (Dr. C. Porco) are based at the Space Science Institute in Boulder, Colo. |
Lake Mary Charter
Students in Pre-Kindergarten will observe and record the growth cycle of plants in hydroponics and soil based gardening.
Students in Kindergarten will conduct experiments and participate in learning how to set up and grow various plants by way of hydroponic gardening and soil-based gardening. Students will record growth rates and plant yield through time-lapse video and by logging measurements and production. Students will learn the benefits and/or disadvantages of growing their own produce with and without the use of soil and of using alternative methods of replacing nutrients.
1. The Way to Grow project is innovative in that it teaches kindergarteners alternative methods of growing plants. Teachers in elementary school generally teach the life cycle of plants through traditional ways of growing plants. By teaching and comparing new methods along with traditional methods, we will challenge our younger students to think of ways other things can be improved upon or done differently.
K.MD 1: Describe measurable attributes of objects, such as length or weight. Describe several measurable attributes of a single object.
K.MD 2: Directly compare two objects with a measurable attribute in common to see which object has "more of"/"less of" the attribute, and describe the difference.
SC.K.N.1.1: Collaborate with a partner to collect information
SC.K.N.1.3: Keep records as appropriate – such as pictorial records – of investigations conducted
SC.K.N.1.5: Recognize that learning can come from careful observation
SS.K.E.1.In.d: Identify basic needs, such as food and clothing.
SS.K.E.1.Pa.d: Recognize a basic need, such as food or clothing.
LA.K.6.1.In.a: Identify information in pictures and symbols.
LA.K.6.2.In.a: Ask about a topic of interest and recognize the teacher as an information source.
LA.K.6.2.In.b: Use information from pictures and symbols to answer questions.
LA.K.6.2.In.c: Contribute information for a simple report where the teacher is the scribe.
2. The project fits into our Science and Math curricula as it addresses the life cycle of plants and using measurement to compare growth. It fits into the Reading/Language Arts curriculum as it addresses using information and preparing a report.
3. The project will encourage long-lasting change in the classroom and community as it will provide a bridge to learning by connecting with professionals who will mentor and encourage students to explore ways of improving the environment by using sustainable methods of growing.
4. Technology utilized will provide visual records and comparisons of the growth of plants in each environment. It will also serve as a way to document progress via charts and/or spreadsheets.
5. Student gain will be evidenced through teacher observation, students' work and assessment.
6. Students will share project results with the community by posting pictures, videos and results on their classroom web site and local media. |
Aplomado Falcon Finds New Home With the Help of GIS and Imaging
Aplomado, the Spanish word for dark gray, refers to the coloring of the top feathers on the remarkable Aplomado Falcon, which was once a common raptorial (predatory) bird in the coastal and interior grasslands of the American southwest. Known for its striking color, majestic posture, and graceful flight, the species all but disappeared from the United States during the 1950s and was listed as endangered by the U.S. Fish and Wildlife Service in 1986. Starting in the 1990s, increases in reliable falcon sightings prompted additional interest in recovery of the species in New Mexico.
Recently, researchers at New Mexico State University (NMSU) employed remote sensing and GIS applications to evaluate millions of acres in the Chihuahuan Desert to identify habitat features most likely to sustain a population of the endangered birds. The Chihuahuan Desert region stretches from the Rio Grande Valley in southern New Mexico far into Mexico. It is 1,200 miles long and 800 miles wide. The final products, a documented predictive model and a map depicting habitat suitability across a large portion of the species' range, are aiding in prioritizing areas for conservation consideration and making land use decisions that benefit falcon habitat restoration.
The U.S. Bureau of Land Management; the U.S. Army White Sands Missile Range; Fort Bliss Military Reservation; and T&E, Inc. (Cortaro, Arizona), funded the eight-person NMSU research team, which was charged with providing a better understanding of the Aplomado Falcon's natural history by describing falcon use areas in northern Chihuahua, Mexico. The GIS predictive modeling section was part of a five-year research endeavor that consisted of three phases. The first and second phases involved surveying the Aplomado Falcon habitat in Mexico's Chihuahuan Desert to locate and describe the physical features of the landscape where the birds exist--with the help of staff at the Universidad Autonoma de Chihuahua. The third phase involved analyzing satellite imagery as well as terrain data derived from digital elevation models of the Chihuahuan Desert spanning northern Chihuahua, southern New Mexico, and western Texas to digitally locate the features identified as indicators of possible falcon habitat.
"This research should help to focus conservation on habitats that benefit this bird and promote overall integrity of grassland communities," states Bruce Thompson, professor of wildlife sciences and leader of the New Mexico Cooperative Fish and Wildlife Research Unit (NMCFWRU), the joint federal-state-university program at NMSU. Kendal Young, research project coordinator, NMSU, adds, "The presence of falcons as an indicator of an overall healthy environment makes this research even more relevant for human beings." Because this slender, long-tailed falcon (15-18 inches in length with a three-foot wingspan) with a distinctive white line located below the black cap on its head does not build its own nests, but takes over the abandoned nests of other large birds, a thriving ecosystem of other large birds and small prey is needed to sustain an Aplomado Falcon population.
Analyses were conducted using Esri's ArcGIS, Leica Geosystems' ERDAS IMAGINE, and FRAGSTATS public domain software packages. "The extensive cartographic functionality of ArcGIS, as well as the versatility of the Esri format made Esri software a logical choice," states Dawn Browning, GIS/remote sensing analyst, NMSU. Use of the three components was key to the project's success because they produced the accurate results needed; ArcGIS interacted with both ERDAS IMAGINE (provided under an educational agreement) and FRAGSTATS applications; and ArcGIS was familiar to most end users of the final model, a critical factor in the project.
To visually identify the land cover patterns that corresponded to those found in Aplomado Falcon habitat, one set of imagery was collected for each of the spring and fall seasons, which produce different vegetation responses in the Chihuahuan Desert. "The falcons require a combination of vegetation types: grasslands (for their prey base) with shrublands (where they perch and nest)," says Browning. Because of cloud cover conditions, each set of 15 Landsat 7 ETM+ images was collected over a five-week span.
ERDAS IMAGINE software was used to import, reconcile, and analyze the two sets of data images covering the study area of 246,848 km2. After the multispectral data were imported, the digital values were converted to spectral reflectance values to describe the vegetation around the habitat. Using the histogram bias technique, the images were standardized to a single date for each season while maintaining the true shape and distribution of the data in the image. When both data sets were standardized (each roughly 20 gigabytes), the imagery was evaluated for spectrally distinct classes contained within the entire study area for both seasons.
The distribution of falcon use sites among the land cover classes was examined to identify classes that corresponded with falcon presence. "Information regarding landscape structure, such as interspersion between important classes (e.g., grasslands and shrublands), was extremely helpful in examining aspects of the environment surrounding falcon use areas," says Julie Lanser, a geography graduate student involved in this part of the research.
Once the classified images were converted to ArcGrid format, FRAGSTATS software was used to calculate landscape metrics around falcon use sites, with the thematic grids as input. This information, coupled with information on the configuration and composition of land cover classes within the larger landscape, was used in the habitat modeling process.
Using ArcGIS, five predictor variables were converted to binary grids and added to create an output map representing a range of Aplomado Falcon habitat suitability. Higher values in the map represent areas where a greater number of qualifying criteria were met, and lower values represent areas where fewer criteria were met. The binary input layer and final predictive model grids were converted to images in ERDAS IMAGINE; then all files were combined into one.
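As a rough illustration of the overlay step described above (this is not the team's actual ArcGIS workflow, and the grid names and criteria are hypothetical), the following Python/NumPy sketch sums co-registered binary predictor grids so that each cell's value equals the number of habitat criteria met there.

import numpy as np

def suitability_from_predictors(predictor_grids):
    """Sum co-registered binary (0/1) predictor grids cell by cell.

    Each input grid marks the cells that satisfy one habitat criterion;
    the output value at a cell is the number of criteria met there (0..n).
    """
    stack = np.stack(predictor_grids, axis=0)
    return stack.sum(axis=0)

# Five hypothetical 0/1 criteria (e.g., grassland cover, shrub interspersion,
# slope, elevation, distance to water), here filled with random values.
rng = np.random.default_rng(0)
grids = [rng.integers(0, 2, size=(100, 100)) for _ in range(5)]
suitability = suitability_from_predictors(grids)   # cell values range from 0 to 5
print(suitability.min(), suitability.max())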
Accuracy assessment analyses determined that the resulting model was highly effective in predicting "places of promise" for Aplomado Falcon conservation. "We have at least 67 percent agreement between the field assessed and predictive model rankings at evaluated field sites," states Browning.
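To make the agreement figure concrete, here is a minimal, purely hypothetical sketch (the rankings below are invented, not the study's data) of how percent agreement between field-assessed and model-predicted rankings can be computed.

# Hypothetical rankings for nine evaluated sites; real values would come from
# the field assessments and the predictive model.
field = [3, 2, 5, 4, 1, 5, 2, 3, 4]
model = [3, 2, 4, 4, 1, 5, 3, 3, 4]
matches = sum(1 for f, m in zip(field, model) if f == m)
agreement = 100.0 * matches / len(field)
print(f"{agreement:.0f}% agreement")  # 78% for this invented example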
Errors were largely attributed to differences between the predictive rankings assigned by field biologists and those produced by the model. Each of the 21 prospective or known falcon habitat assessment areas that were identified independently of the spatial modeling contained habitat with a high predictive ranking. Cartographic production was performed using the ArcMap tool within ArcGIS Desktop. The resulting predictive model and map of suitable Aplomado Falcon habitat now serve as an effective tool for identifying areas similar to falcon use areas in Chihuahua.
"Using GIS software to access information about the configuration and land cover from the imagery enabled us to analyze an incredibly large area, while maintaining a comprehensive view of the entire project," comments Thompson.
The NMSU research unit spearheading the Aplomado Falcon habitat suitability project is one of 39 units across 37 states that function under a U.S. Geological Survey cooperative agreement among federal and state agencies, universities, and private organizations focused on preventing the deterioration of the nation's natural resources. The research results, made public in late summer 2002, will assist government agencies in making informed decisions about the allocation of federal resources as well as environmental and development planning.
For more information, contact the NMCFWRU Aplomado Falcon habitat suitability project (tel.: 505-646-6053, fax: 505-646-1281, Web: leopold.nmsu.edu/fwscoop). |
By the end of article #4 in this series, I had presented the new vocabulary for Christmas presents and repeated it with the class using a song (12 Days of Christmas) and various activities to be done in pairs, but we hadn’t looked at any techniques for remembering the gender of the nouns, and as that is particularly important in this unit (we’re coming up to using preceding direct object pronouns in the perfect tense and agreements), that’s what we’ll look at here. Quite unusually for me, the song is basically a list of vocabulary. There are all sorts of ways songs can be used… but that’s another post, another day.
This activity is ideal for when you need learners to keep 2, 3 or 4 categories completely separate in their minds. I have used it in French & Spanish and some colleagues of mine have used it in German and Italian. I have used it in KS3, 4 and 5 to help pupils distinguish between masculine and feminine nouns, masculine & feminine adjectives where there is a difference in pronunciation (vert / verte; blanco / blanca / blancos / blancas), and for past, present and future tenses. I have also on occasion used it for government of verbs in French in KS5 (distinguishing between those verbs which take à and those which take de) and between subjunctive and indicative forms. As activities go, this has a very high success rate, success being understood here as the vast majority of the class remembering, internalising and using the concepts way beyond the lesson and unit in which they were encountered, and very reliable for me in terms of gauging within the lesson how many and who have grasped it. It is also extremely simple to prepare and run. You can use it with a whole class and, much more importantly from my point of view, run it as a pairwork game, thus increasing the intensity and involvement in the activity. As such, it is my methodological weapon of choice when teaching these items of language.
Let’s set a bit of context. I start off the lesson (the second or third in the sequence, depending on how long it took to present and practise the vocabulary) by setting the homework. Always best to set it first thing, I find. It gives pupils who don’t understand what I’m banging on about time to ask questions if they need to and I don’t set myself up for the stress-inducing situation at the end of a lesson where I’m frantically trying to set up the homework, the class doesn’t understand and I know I’ve got to get this lot out of the room and off the corridor before the next lot arrives. And then the homework is a load of rubbish when it comes in! So the start of the lesson it is. I give the class a strip of paper, about 1/3 size of A4, with a dozen sentences on it (take a look at the powerpoint in the first post). The sentence (repeated a dozen times) is basically this: “Le jour de Noël mes parents m’ont offert une chaîne stéréo mais quand je l’ai ouverte il y avait un problème.” Each time there is a different present. Some of the sentences contain mistakes, some don’t. All of the mistakes centre around the direct object pronoun and the agreement. The pupils have to work out whether the sentence is correctly written or not, and if not, correct it.
Step 1: Get the pupils to stick the sheet in their books.
Step 2: Get the pupils to record the homework in their homework diaries: Il faut cocher les phrases correctes et corriger les phrases qui contiennent des erreurs. Date limite: …
Step 3: Start explaining! Normally I’d have a bit of back-and-forth with the class about the homework, negotiating (apparently) a deadline, arguing about how much I’m giving them, speculating on what they might have to do this week (before they’ve seen it) and so on, but not this time – I know I’ll be under some pressure of time to get to the end within the lesson, so full steam ahead.
Well, I say that Step 3 is when I start explaining. In reality, the whole of the lesson is an explanation, or more accurately, a demonstration. I like to get the whole administrative side of setting the homework out of the way first, then books closed, then I’ve got them all with me before I start. As they look at the homework, they won’t know what to do. I acknowledge that, tell them not to panic, that now at (for example) 9.00 a.m. they don’t know what to do, but at 9.45 a.m. (or whatever time the lesson finishes), they will, and these activities will make it possible. Straight into Gender Walls:
Here, I want pupils to remember perfectly which nouns are masculine and which are feminine. I need an A5 flashcard for each noun, preferably masculine ones on blue card, feminine ones on red card, because these are the colours I used on my visualiser sheet when introducing the vocabulary earlier. Failing that, white card with blue/red ink. There is nothing flashy about these flashcards – just the words will do. You can have the pictures too, but you must have the words written down and large enough to be seen across the room.
I blu-tack/attach with a magnet all of the masculine singular nouns in a line along the left-hand wall in a random order. I tell them that all of those nouns constitute a category and I ask them what it is. If they are used to this sort of thing they will tell me straight away, otherwise I might need to give them a nudge: Masculin. I make sure they can pronounce un correctly. (A small point, but the next stage will break down if this isn’t clarified). I give the class 5 seconds to memorise the order of the flashcards and then turn to face me as I stand on the right-hand side of the room. I pick on some poor, sleeping pupil and they have to tell me the vocabulary in order without looking at the flashcards: Le jour de mon anniversaire, mes parents m’ont offert un rasoir électrique, un jeu-vidéo, un appareil-photo… 5 seconds isn’t long enough to remember the order perfectly before starting the activity so that when they make a mistake I can jump in with: Menteur! Tes parents t’ont offert… and then the item they forgot or got wrong. They go back to the beginning and start again. I keep a count of how many times they had to start again, and then it’s another pupil’s turn after I’ve jumbled the order again. The winning pupil is the one who has to start again the least number of times. Then it’s over to the class to do it in pairs with one (guessing) pupil looking away, the other looking at the cards. That’s why you need the vocab written on the flashcards – it doesn’t work if there are any doubts as to the vocabulary.
At the end of the activity I put the homework back up on the screen. Clear now? NO?! Of course not, but it’s not 9.45 a.m. yet. Don’t panic. On to the next stage of Gender Walls.
Next up on the wall are the feminine singular nouns, but they need to go on the opposite wall to the masculine nouns. Sticking cards up on a wall, like giving out books, collecting in work or just looking for something on my desk, is of course a transition, and in my book, any transition is a danger point for ‘losing’ a class or giving them a cue to start chatting which I’ve then got to step in and stop. Especially so here, as I’ll probably have to turn my back on them, never a safe moment… So, rather than do that, just before each card goes up, I give them a very quick paraphrase of whichever card I happen to be holding for them to guess before I show them what it is and stick it on the wall. This gives me a pause between putting each card up, an opportunity for language practice (I always say the sentence very fast) and they don’t have time to “make their own entertainment”. What is going on in their heads this time I paraphrase is different to what took place when I introduced the vocabulary in the first place – they know what all the answers are this time, they just don’t know which one is the right one, and the paraphrase will get them to their answer. So why not talk as fast as you possibly can? It ups the ante and it’s better than a CD for listening practice…
Again I say that these red cards represent a category. What is it? Féminin! Alors, un ou une? Une! This time, pupils don’t have to remember the order of the nouns, but close their eyes and tell me which wall they are stuck to. “Le jour de Noël, mes parents m’ont offert ****** chemise.” (Beep out the article with a bicycle horn or some other noise you might even produce yourself). The class responds with “un chemise / une chemise” according to what they think is right and point to the correct wall. The pointing is very important and they may need a nudge to realise that they have to do it. You can see quite clearly who’s getting it and who isn’t. (Much more reliable than asking a class to give you a thumbs up / down response as to whether they think they understand something.) I repeat the same thing with a few more vocabulary items, randomly switching between masculine and feminine nouns. And then? In pairs, of course! They swap over when you say so. Activities such as this don’t really need instructions, you just do them. In fact, if you explain it, it’s more likely to stall. Just do it.
This activity works so well because the feminine nouns are the "new information". They have done so much with the masculine nouns that they know these words they are now including were not part of that first category. Try it, you will be surprised!
At the end of the activity, up goes the homework on the screen again. All clear? No?! Of course not, it’s still only 9.25! Straight onto the next stage of Gender Walls:
Exactly the same as Stage 2, except this time instead of pupils just giving you the indefinite article, they respond “mais quand je l’ai ouvert / ouverte” as appropriate. Before I let them loose on this I need to make sure they understand what they are saying. Take a look at the powerpoint in the first post in this category. I back up the meaning with mimes and full sentences so that they get the context, and I point out that the word order is different in French compared to in English (Je l’ai ouvert / I opened it), but without actually using English. This can easily be communicated by saying the French sentence correctly and then saying the same words in an English order (le at the end), and some arm-crossing to show how the sentence has a different pattern. It’s important that the last thing they hear from me, though, is the correct version. When I elicit the response, “mais quand je l’ai ouverte”, we make a T with our hands to emphasise the difference in pronunciation. I do a couple of examples with the class, and then it’s back to doing it in pairs. For all of these quick pairwork activities, I let them run for about a minute, or a minute and a half before getting pupils to swap over. It’s tempting to stop them too early, but it’s important to give pupils the thinking time so that the activity can be effective. It also gives me a break in the lesson to draw breath and think about where it’s all going and whether I need to go back a bit, hurry up a bit or put in another stage somewhere. How often do we think about what we are doing as the teachers, when really it’s what they are doing as the pupils that really matters?
At the end of the activity, we take another look at the homework. Ah! It’s getting clearer now! But there are still a few more holes to plug. On to the next stage:
This time the masculine plural nouns are added to the wall, the same wall as the masculine singular nouns, but along a bit so they are clearly separate. I go back to the powerpoint slide I used in Stage 3 and see if they can work out how the sentence should be different when, instead of using un rasoir électrique, we use des patins à roulette. (Je l’ai ouvert > Je les ai ouverts). Some of them will probably get it. In any case, I need to show them. Again they do the same activity, pointing to the right wall with their eyes closed and filling in the sentence. This stage in the activity helps them to distinguish very clearly between Je l’ai and Je les ai, a tiny difference which would sail right past many pupils. A quick pairwork, then the feminine plurals can go up on the wall, on the same wall as the feminine singular, but along a bit. By this point, many more pupils are likely to guess how the sentence will look before they are shown. Now the activity can be run with pupils pointing in up to 4 different directions, depending on whether the noun they are given is masculine, feminine, singular or plural.
Finally (almost), the homework goes up on the screen again. All clear now? Yes!
This may seem like a very long-winded way to do something which could be explained much more quickly in English. I would agree, it would be much quicker to do it in English but, in my view, a wasted opportunity. It gives pupils the chance to learn through the target language something they didn’t already know. It’s not a mere re-labelling exercise, this is a concept they don’t already have. It teaches them that the foreign language is capable of expressing complex things and not just ordering ice-creams. It shows them that I don’t have to revert to English to express important or complicated things, and in this way it gives the language status. A quick trip to the maths department will prove that pupils have to think very hard in that subject. Why should it be so different for languages? It also doesn’t have to be long-winded. A lot depends on the pace we maintain as we present a lesson like this, and the back-and-forth between the teacher and the class, between whole-class work and pairwork, does a lot to maintain pace and sustain concentration in a way which keeps the class busy, breaks up the information input and gives me a break as well.
But there is still one more stage I want to put in before I leave it to the class to crack on with the exercise at home. On the powerpoint you will find a number of sentences (some correct, some incorrect) which, as a class, we discuss very quickly: Bonne phrase ou mauvaise phrase?
“Le jour de Noël, mes parents m’ont offert une raquette de tennis, mais quand je l’ai ouverte, il y avait un problème”
Raquette de tennis – masculin ou féminin? Féminin! (pointing to the right wall). Ouverte, masculin ou féminin ? Féminin ! Alors, c’est bon ? Jusqu’ici, c’est bon. Une raquette de tennis, c’est singulier ou pluriel ? C’est singulier ! Ouverte, singulier ou pluriel ? Singulier ! Je l’ai, singulier ou pluriel ? Singulier ! Alors, c’est une bonne phrase ou une mauvaise phrase ? Bonne phrase !
And in this pretty straightforward way we look at the other sentences on the powerpoint, deciding whether they are correct or incorrect. This helps pupils to see the relationships between the various elements of the sentence, and there is no need to go into English to do it.
Now they really are in a position to get cracking on their homework … and it’s just coming up to 9.45….
Next time? My favourite activity of the lot! |
What do we want to achieve?
Our aim is to help the learner improve their overall digital knowledge and skills, develop computational thinking, and stop being intimidated by strange words and unknown concepts such as coding. Therefore, at the end of each unit, and even of this Introduction, we have included a glossary, defining some underlined words and explaining them in a sentence or two if they have not been explained within the unit.
Computational thinking is at the heart of the learning that we advocate. It is the thinking process that underpins computing and digital making: formulating a problem and expressing its solution in such a way that both a computer and you yourself can effectively carry it out. Computational thinking covers knowledge and skills including, but not limited to, logical reasoning, algorithmic thinking, pattern recognition, abstraction, decomposition, debugging, and problem solving. Do any of these words and concepts sound familiar to you? We are sure they do. All you will have to do is apply them to the digital world. Now, let's take the first steps on this journey.
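As a minimal illustration of our own (it is not taken from any particular unit), here is how an everyday task can be decomposed into precise steps that a computer can carry out, a small taste of the decomposition and algorithmic thinking mentioned above.

def longest_word(sentence):
    words = sentence.split()          # decomposition: break the sentence into words
    longest = ""
    for word in words:                # algorithmic thinking: examine each word in turn
        if len(word) > len(longest):  # pattern: always keep the longer of the two
            longest = word
    return longest

print(longest_word("computational thinking is for everyone"))  # -> "computational"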
Chapter 11: Local People Speak of Slavery
Black and White Views on Addressing Slavery
Slavery initially met little support as an interpretive topic, although individual interest in interpreting it varied somewhat with age and social features, such as ethnicity and kinship, business, or labor relationship to the Hertzogs’. Several older blacks – former Magnolia tenants – and whites, appeared uncomfortable even discussing the possibility of considering slavery. They avoided eye contact with Crespi, for example, and hesitantly addressed the topic. The two local interviewers themselves objected to raising the topic. Nearly everyone’s immediate response was that slavery was an unacceptable interpretive topic. Some whites were concerned with how outsiders, including visitors from other regions and with other views, would perceive local people and cultures if slavery was discussed. Some elderly black former tenants, who found discussions of slavery untenable, commented that managers and their treatments of the labor community could differ significantly among plantations. Unlike other places they had heard of, “there was no brutalizing at the Hertzogs’.” They thought this plantation had a high regard for its labor community. In these public conversations, they dismissed the possibility of mistreatment by conceptually excising Magnolia from the mainstream 19th century plantations with the comment that “it’s in a class by itself.” White and black people preferred interpretive programs that would highlight events of the present century, especially “our times,” the times they remembered and often enthusiastically described. Their reluctance to conceive of slavery as an acceptable public topic tended to diminish as the conversations progressed, or during subsequent conversations, when the initially reluctant black and white respondents modified their initial opposition to public discussions of this thorny issue.
The initially reticent black people and whites came to agree with those few who had unambiguously supported an interpretation of slavery that the topic was legitimate. But, people insisted, it should not be the major or single focus. Slavery was acceptable only if presented as one phase in a historical sequence of phases that ran the gamut from Magnolia’s inception through its transformation from a traditional plantation to the presently mechanized farming operation. The National Park Service was asked to contextualize the story in terms of political, demographic, economic, and other conditions so that slavery would not be presented in isolation as a comment about local morality, or the lack of it, or a comment about economic decisions alone. Rather, slavery should be portrayed as a response to diverse regional and national conditions.
The language of slavery also drew some comment. Several whites and black former residents noted the offensiveness of describing the cabins as “slave quarters.” Just call them “quarters,” they argued, because a succession of different categories of workers occupied them and it is misleading to depict the cabins as just housing one kind of worker. Moreover, several black people who had occupied the cabins as day laborers, explained, “if you say people lived in the slave quarters in this century, you are implying they are slaves.” Former resident laborers asserted that the park must “make it clear that tenant laborers were not slaves.” One black person who had grown up in Magnolia’s sharecropper area was even offended at the suggestion that she might have been a tenant in the quarters. On the other hand, because the cabins were constructed to house enslaved people, some whites argued, they should be called “slave quarters.”
Blacks and whites without direct ties to Magnolia or who were several generations removed from the resident labor force seemed more comfortable speaking of slavery. They suggested that “the National Park Service must talk about slavery because that’s what made the plantation work and we can’t make believe it didn’t happen.” Another noted that “slavery must be mentioned because it is the background for discussing how Magnolia evolved from slavery to respect for colored people.”
Blacks Suggest Topics on Slavery and Its Aftermath
One black respondent from urban Natchitoches, without direct ties to Magnolia or the countryside, vigorously argued for a generic discussion of slavery as a despicable “evil institution” that dehumanized people, depriving them of the skills and education required to effectively survive after manumission. To convey to visitors the full impact of an institution that enslaved the mind and the spirit, he observed, the aftermath of slavery needed explication. “The tragedy of slavery was that some people didn’t know how to deal with life without having someone there to tell them. Slavery kept people from developing survival skills.” Moreover, he argued, slaves must not be portrayed as passive victims. They were also people who created strategies to protect themselves and also planned and implemented insurrections. Draw attention to the possible presence of the Underground Railroad, he argued, and to the Lemee house on Jefferson St., which might have been a station on the Underground Railroad.
A young black person whose grandparents were from the Hertzogs’ place argued that it was imperative to share the plantation history with younger people. Otherwise, the story would be lost in the future and young people would not know of the past difficulties. Indeed, several people prevailed upon the National Park Service to convey “old time talk” or black history to younger people so they could better appreciate their elders’ experiences as well as their presently improved situation. One individual was forceful about needing to “preserve the memories of our people from generation to generation but, when we speak to blacks, we must talk about slavery with compassion. Whites will be in the audience too and because hardships were suffered by everyone, black and white, you must tell the story from white and slave perspectives. Talk about harsh things too, and good things.” He added, as others had, the National Park Service should “end the story where it comes out now. Even if things may not be the way everybody wants them, they still progressed to a degree.” Blacks are among the professional people now, and that should be made clear. Still, said another who found the slavery topic difficult, “if you must talk about it, then get into it and get out”; do not dwell on it.
Few black respondents discussed the emotional pain their parents and grandparents might have experienced at the plantation or in Natchitoches Parish in general. One, however, recalled how tearful her father became when describing what older people said and how difficult it was for him to talk to her about it. This same woman added that people refused to discuss the past, whether the topic was slavery, abolition, or the Jim Crow eras, because they were too hurt and angry; they don't want to remember a past that robbed them of their humanity. "Why did black men hang from the trees?" she asked rhetorically. And "how could some people tell other people to go to the back of the bus or not drink from their water fountain or get off the sidewalk? God wouldn't be a just God if he would let this still happen." It's difficult to talk about, this individual recognized, but, if the National Park Service is to discuss those days, "the Lord will show you how to talk about this in a way that doesn't offend people, but to speak as necessary; not to hurt people or create pain, but to make them understand more."
Whites Suggest Topics on Slavery and its Aftermath
Several suggestions were made about sequential changes that would express the plantation’s evolution and also give visitors a sense of a dynamic rural scenario. One proposed sequence is:
Another suggestion was to organize interpretation around major decision points, such as:
Whites also suggested discussing occupational diversity among slaves as a way to describe social as well as economic and political relationships between enslaved people and their owners. Another potential topic was the hierarchical relationships within the slave community that were based partly on the different occupational roles and could lead to greater advantages for some slaves. Concubinage as an acceptable relationship between French Creoles and Creoles of color in the 19th century was another suggestion.
Whites also raised the economics of slavery as another possibility. They thought it would be instructive to show that slaves were defined and treated like commodities in an economic system in which decisions to use slaves reflected business considerations and market-based economic rational choices. Another suggestion was to consider the operation of the plantation commissary and the forms of debt peonage based on payment with scrip/coins that perpetuated the system of labor dependency even after slavery was abolished. It would be important, people suggested, to show both sides of the system, using ledgers and other historic documents that tell a story about people and the economics of slavery over the past few hundred years. The story might also be told by using different cabins at Magnolia to interpret the historical sequence from slavery to the end of tenancy.
The potential racial composition of the interpretive staff raised some interest. One white person wondered about the race of the future park interpreters and mentioned his own discomfort—a feeling of being targeted—when he visited colonial Williamsburg and a black interpreter told the story of plantation slavery. The respondent was not suggesting limiting the staff to white people but raising the relationship between ethnicity and interpretive roles as a discussion point.
Creoles of Color Views on Slavery
Creole participants in this study came primarily from the heritage area, not Magnolia. Many were the offspring of planters or landowners themselves, mostly of modest holdings, and some were offspring of people who once sharecropped or rented plantation land along Upper Cane River. They necessarily brought a different personal history and interpretive grid to the discussion of slavery, partly because some of their ancestors might have been slaves and others were recalled as landholders who depended on slaves themselves. At different times in their own family histories, different ancestors might have played both roles. Interpreting slavery to the visiting public was not a difficult or contentious issue to this group. They seemed to agree that slavery “should be presented like it was a sign of the time, not that anyone blessed it or thought it was right.”
Noting that their Creole ancestors also ran slave-driven plantations, some people perceived slavery as a rational economic choice that responded to labor needs prior to the introduction of mechanized farm equipment. As one person commented, “If you had land, you had to have some help in the days before John Deere.” He added, “If you had any intention of surviving in business then you use the labor that’s available; in this case it was slaves and it happened that slaves were darker skinned.” It was business, not morality, that prompted planters to buy and sell slaves, he argued.
Considering slavery as implying a reciprocal economic relationship, one person remarked that the owner was responsible for people from birth to death. Slaves, on the other hand, had a job to do and were paid for it in food, clothing, and shelter. In addition, Magnolia’s brick quarters were cool, making for a more comfortable, healthy, cabin. One person illustrated his argument about slavery as a rational economic choice by adding that use of bricks did not reflect the landowners’ graciousness as much as the need to have slaves making bricks for other structures. Further, he said, it was important to keep slaves occupied throughout the year, even when agricultural demands had peaked, and brick-making kept the workers busy. The Creole respondents also perceived their landed ancestors as having given slaves relative freedom and no mistreatment in order to avoid encouraging counter-productive, unsatisfactory job performance or runaways. Compared to South Louisiana where, they said, cruelty to slaves was common, Cane River planters were lenient.
A point no others had made about slave history was offered by one Creole woman who observed that slaves had their own complex history. Slaves were often “real classy” people when they were in Africa but were brought here to be treated without dignity. Despite that treatment, she continued, some blacks were very smart and capable and became mechanics, carpenters, and preachers.
Events, Activities, and Perspectives to Avoid
These emphasized the need to be treated with dignity and not stereotyped:
Creole of Color Concerns: |
Beacon Lesson Plan Library
Santa Rosa District Schools
Students receive math fact cards. They review their cards and solve each fact. When the teacher writes an answer on the board, the student brings the fact card and receives a sticker if it is correct.
The student solves basic addition facts using concrete objects and thinking strategies, such as count on, count back, doubles, doubles plus one, and make ten.
-Addition and subtraction fact cards, 3-5 per student
-Ziploc bag of 20 counters, one per student
-Scrap paper, one piece per student
-Chalkboard or dry erase board
-Stickers for reward
1. Copy/use fact sheet from the classroom's math curriculum to prepare addition and subtraction cards. Have enough fact cards ready for each student to receive 3 to 5.
2. Have box of stickers ready.
3. Have scrap paper ready.
4. Put 20 counters in little Ziploc bags for each student.
1. Pass out 3 to 5 addition and subtraction fact cards to each student.
2. Pass out the Ziploc bags with the counters. (I use little red circles that are made out of cardstock that I also use for playing bingo.)
3. Pass out the scrap paper to each student.
4. Tell them: Class, the first thing you will do is stack your cards in one pile. Now take out your pencil and write down the first problem on your scrap paper. Do you know the answer to this problem? If so, write it down. If not, then pull out your counters to help you get the answer.
5. For example, I have a problem here that I just can not remember, 3 + 4. So I take out my little bag and pull out 3 counters and then I pull out 4 counters. Now what do I do? (student response) That's right, I count all of them to get my answer. What is my answer? (student response) Very good, it is 7. When you finish with a problem, put the counters back in the bag so you don't get confused on your next problem.
6. Now you do the next one. Look at a card you have, write down the problem, and calculate the answer. If you don't know the answer, just use your counters.
7. When all students are ready, the teacher calls out a number and writes it on the board. For example, the number 3 is written on the board. All the students who have a math fact with the answer of 3 are to bring the card/cards to the teacher. Each student who brings the correct card receives a sticker for each math fact (could have more than one card). The sticker serves two purposes--it rewards the student instantly for a correct answer, and it keeps the students focused on the task at hand.
8. Once all of the cards that have the answer 3 are brought to the teacher, then another number is written on the board. For example, number 10 is written on the board. All the students who have a math fact where the answer is 10 are to bring the card/cards to the teacher.
9. This continues until all cards are brought to the teacher.
10. If students still have cards, then the teacher assists them in solving the math problems. This also assists the teacher in evaluating which students need remediation.
There are 3 forms of assessment. First, the teacher observes the students using the concrete objects to solve the problems. Second, the students write down the problems and answers on the scrap paper. Third, they give the teacher the appropriate fact card when an answer is written on the board. All three ways assist the teacher in evaluating whether the students know the math facts. If a student gets a wrong answer the teacher can say: Ok Adam, show me the counters for the first number of this problem (5). Now show me the counters for the second number (2). What do you do now? Student: Count all of the counters together. Teacher: Good, go ahead and count them. What did you get? Student: 7. Teacher: Very good! Now correct it on your paper and bring me the math card. |
Mars Rounded Pebbles: Do Rocks Discovered By NASA Indicate An Ancient River?
Rounded pebbles found on the surface of Mars provide evidence that water once flowed on the planet, according to a new study published in Science.
The Martian pebbles were discovered in pictures taken by NASA's Curiosity rover, which found densely packed pebbles in several areas. Researchers analyzed photos of 515 stones, concluding that their size and rounded shape indicate rocks that traveled in water, perhaps in the kind of river that could not exist in the cold, arid climate of modern-day Mars.
"We know it was a streambed because it takes a fast flow to move pebbles of this size, and they're rounded," said Dawn Sumner, a University of California, Davis researcher and co-author of the study. "The rounding requires that they're banged against each other and the sand a huge number of times to break the edges of the rocks. It's like how you polish rocks in a polisher, you hit them against each other over and over."
Sumner added that the rocks -- which were rounded by fluvial abrasion, to use the technical term -- are believed to be at least two billion years old. Given the kind of pebble-rounding researchers saw, Sumner says that the ancient stream must have been flowing "for a long period of time over a long distance."
"You aren't going to get rounding with transient water or a flash flood," Sumner said.
Another way you aren't going to get that rounding is from wind abrasion. Rocks worn down by wind would be more rough and angular than the rounded pebbles on Mars are.
Finding the rounded pebbles wasn't a result of serendipity: the main reason NASA chose to land the Mars rover in Gale Crater, near the base of Mount Sharp, was to study the layered rocks there.
"We knew there was an alluvial fan in the landing area, a cone-shaped deposit of sediment that requires flowing water to form," said Sumner. "These sorts of pebbles are likely because of that environment. So while we didn't choose Gale Crater for this purpose, we were hoping to find something like this."
The discovery of the round pebbles is the latest evidence of the possibility of water -- and life -- on Mars. In 2004, NASA's Spirit and Opportunity rovers found soil that had been exposed to water, and in 2008, the Phoenix Mars Lander uncovered the existence of current water-ice.
Here on Earth, scientists recently found a bacteria living in the Canadian arctic permafrost, in conditions previously thought to be totally inhospitable to any kind of life. The findings led the scientists to wonder whether similar bacteria could exist in the harsh climate of Mars.
The first firearms appeared in Europe in the 14th century. At this time, artillery was first used in wars. Three cannon were used at the Battle of Crécy in 1346, but they were not very effective. Small cannon were used by the French in 1450 against the English, and artillery was used in the final campaign in 1453 by the Ottomans under Mehmet II to capture Constantinople. Bombards, tubes of brass or copper mounted on wooden sledges, fired stones or darts. Late in the century wrought-iron bombards appeared, firing iron balls. Some huge guns were made in the 15th century. A wrought-iron bombard called "Mons Meg," preserved in Scotland, has a bore of 20 inches (508 mm); it fired a 300-pound (136-kg) stone ball. The first mortars date from this period. By the mid-15th century the French were using long guns called culverins, mounted on wheels.
King Gustavus Adolphus of Sweden in the 17th century aided in the development of a short cast-iron gun that could accompany his troops. He increased the rate of fire by having measured charges prepared in advance—the first cartridges. In the 18th century, Frederick the Great made important tactical use of artillery. He massed heavy fortress guns to support his attacks, and at the head of each infantry battalion placed a light six-pounder gun.
When Jean Gribeauval became France's inspector general in 1776, he found a wide variety of guns in use. He standardized horse-drawn gun carriages and introduced aiming devices. His reorganization of artillery was of great benefit to Napoleon I. Huge concentrations of artillery fire aided in winning Napoleon's later victories. Napoleon was the first to mass his artillery in a grande batterie, or big battery, directing its fire on one point in the enemy's line and then sending troops against that point. Instead of scattering guns among infantry battalions, Napoleon grouped them under division command.
The Americans during the Revolutionary War had little artillery, but they made effective use of guns captured at Ticonderoga and Saratoga. After the war a company of artillery at West Point with a detachment at Fort Pitt (Pittsburgh), was the only army unit retained in service.
In the War of 1812 British forces used rockets developed by William Congreve. These were mentioned by Francis Scott Key in "The Star-Spangled Banner" when he referred to "the rockets' red glare." Rockets were rarely used again until World War II.
Most of the guns in use at the beginning of the American Civil War were muzzle-loading, smoothbore iron cannon little different from those used by Gustavus Adolphus. The gun carriage, called a limber, had an ammunition chest and was usually drawn by six horses hitched in pairs. Accompanying the gun was a caisson, a wheeled vehicle carrying two ammunition chests, also drawn by six horses. Except for the drivers, the gun crew walked alongside. Batteries, usually of light guns, in which all men were mounted on horses were called horse artillery and normally served with cavalry.
Effective breech-loading rifled guns were used by the French in their war against Austria in 1859, but smoothbores continued in use through the Franco-Prussian War of 1870–71. In that war the Prussians massed steel breech-loading guns in the main battle line, a practice that was widely used in World War I. Also during the 19th century there was much development of high explosives.
In 1907 the U.S. Army established the Coast Artillery and Field Artillery as separate arms; the Coast Artillery manned fixed guns in coastal fortifications while the Field Artillery was assigned smaller weapons for use in support of troops.
Artillery was employed on a huge scale in World War I. Shells containing poison gas, as well as high-explosive shells, were used. By the use of range finders, telescopic sights, and other fire-control instruments, artillery could be fired accurately from concealed positions and over the heads of friendly troops. As accuracy improved, guns of various sizes and ranges could concentrate fire on a narrow strip of enemy-held territory in preparation for an attack. This kind of artillery firing was called a barrage. If the curtain of fire was kept moving ahead of advancing troops, it was called a rolling barrage.
In World War I (1914-1918), the troops who fought on the Western front dug immense mazes of trenches. The warring sides generally exchanged fire between big-gun batteries. In the trench warfare of World War I, 14-inch (356-mm) naval guns and railway and fixed-mount guns of the Coast Artillery were used behind the lines. An outstanding French weapon was the 75-mm gun. Its superior recoil mechanism permitted rapid fire—a rate of 20 to 25 rounds (shots) a minute. The German Paris Gun (popularly, though inaccurately, called "Big Bertha") was an 8.4-inch (213-mm) gun that fired on Paris in 1918 from a distance of 75 miles (120 km) and hurled its shells to a height of 15 1/2 miles (24.9 kilometers) above the ground.
Antiaircraft guns used during World War I were mainly conventional artillery pieces on special mounts. After the war, efforts were directed toward developing guns better suited for use against airplanes. The greatest need was for automatic aiming and firing devices. An important development was radar tracking, which came into use in 1941.
By the beginning of World War II nearly all artillery was mechanized; that is, it was designed to be moved quickly from place to place by trucks or tractors. Tanks, which earlier had been armed only with machine guns, carried artillery weapons mounted in armored turrets. As the war progressed, large self-propelled guns came into use and rockets were reintroduced into warfare. Most of the rockets were short-range, small-caliber weapons. An exception was the German V-2, introduced in 1944 as a long-range bombardment missile. Perhaps the most versatile gun of the war was another German weapon—the 88-mm gun; it could be used on a tank, as an antiaircraft gun, or as conventional field artillery.
Fixed guns of the coast artillery type were little used in World War II, partly because it was a war of rapid movement and partly because the airplane proved a more effective weapon for long-range bombardment. On the other hand, the development of smaller, more mobile weapons accounted for the greatest artillery advances of the war. As a result, the U.S. Army abolished the Coast Artillery in 1950. All artillery units, including field artillery and antiaircraft artillery, were combined into a single arm. In 1952 the first missile units were added. Later conflicts also made use of helicopters to carry artillery into battle, a procedure known as airmobility.
The Korean War (1950–53), after its opening stage, was fought from trenches, much like World War I. Artillery was used on a large scale with great precision.
After the Korean War, missiles were developed to such an extent that some consideration was given to abolishing guns entirely. In the guerrilla-type warfare of the Vietnamese War, however, the use of guns in close support of infantry proved to be of continuing effectiveness.
The United States fired the first atomic artillery shell from a 280-millimeter cannon on May 25, 1953. Atomic projectiles can now be fired from artillery weapons of smaller calibers.
In 1968 the U.S. Army separated air-defense units from ground-support artillery, creating two branches—Field Artillery and Air Defense Artillery. |
Update November 7, 2014: NASA scientists at the Jet Propulsion Laboratory in Pasadena, California, report that analyses by the MAVEN mission and other Mars-orbiting spacecraft reveal that the red planet likely enjoyed a nighttime meteor shower due to the comet. Thousands of shooting stars likely crossed the Martian sky on the evening of October 19. The meteors kicked up enough sparks to essentially create a new layer of charged particles in the high ionosphere surrounding the red planet, according to the University of Iowa's Don Gurnett, lead investigator on the Mars Advanced Radar for Subsurface and Ionosphere Sounding instrument on the European Space Agency's Mars Express spacecraft.
Starry-eyed spacecraft on and around Mars have made history, capturing snapshots of a comet swinging close around the red planet. A rover image of the flyby is the first view of a comet taken from the surface of another world.
The once-in-a-million-years event unfolded on Sunday, October 19, as comet Siding Spring brushed past Mars some 87,000 miles (140,000 kilometers) above the planet’s surface. NASA, the European Space Agency (ESA), and India’s space agency made sure to protect the fleet of orbiters there, by positioning them behind the planet to shield the spacecraft from the dust flying off the comet.
The orbiters all remain active and healthy and have begun to stream the images they captured of the comet just before and after its closest approach. At this point, at least two NASA spacecraft, the Mars Exploration Rover Opportunity and the Mars Reconnaissance Orbiter, have successfully imaged the barnstorming comet as it flew past at 125,000 miles (201,000 kilometers) per hour.
High above the planet’s surface, the Mars Reconnaissance Orbiter used its high-resolution cameras to focus on the comet’s bright nucleus and the hazy coma filled with gas and dust surrounding it. The image below has a scale of 453 feet (138 meters) per pixel, revealing that the nucleus was about a third of a mile (half a kilometer) wide—about half the size of previous estimates.
Meanwhile, NASA’s Opportunity rover, which has been exploring the red planet since 2004, captured a ten-second-exposure image (below) about two-and-one-half hours before the closest approach of the comet to Mars. If the rover had waited until the comet reached its closest point to the planet, the skies would have been too bright from the approaching dawn for the rover’s camera.
“It’s excitingly fortunate that this comet came so close to Mars to give us a chance to study it with the instruments we’re using to study Mars,” said Opportunity science team member Mark Lemmon, of Texas A&M University, in a press statement.
“The views from Mars rovers, in particular, give us a human perspective because they are about as sensitive to light as our eyes would be.”
More cometary portraits are expected to be streamed in the coming days from other spacecraft orbiting Mars, including the ESA’s Mars Express and India’s Mars Orbiter Mission.
So stay tuned for more historic images! |
An accelerometer is a type of sensor that measures force due to acceleration of the sensor. A piezoelectric accelerometer utilizes the piezoelectric effect of certain materials to measure dynamic changes in mechanical variables, such as mechanical shock, vibration, and acceleration. Piezoelectric accelerometers convert one form of energy into another and provide an electrical signal in response to the condition, property, or quantity being measured. Acceleration acts upon a seismic mass that is restrained by a spring or suspended on a cantilever beam, and the resulting force on the sensing element is converted into an electrical signal.
This force is applied directly to the piezoelectric material, usually a crystal, which modifies its internal alignment of negative and positive ions and results in the accumulation of charge on the opposite surfaces. This charge appears as a voltage generated by the piezoelectric material when the accelerometer is exposed to stress or vibration. Piezoelectric accelerometers have many implementations in industrial devices and applications that rely on the evaluation of mechanical force and vibration for their operation.
Piezoelectric accelerometers may or may not include integrated signal-conditioning circuitry. Signal-conditioning circuitry receives the raw voltage output from the accelerometer's piezo sensors and converts it into a signal that is more readily processed by instrumentation. The Global Piezoelectric Accelerometers Market is segmented on the basis of type, application, and region. On the basis of type, the global market is classified into high and low impedance.
High-impedance accelerometers have a charge output that is converted into a voltage using a charge amplifier or external impedance converter. Low-impedance units use the same piezoelectric sensing element as high-impedance units, and incorporate a miniaturized built-in charge-to-voltage converter and an external power supply coupler to energize the electronics and decouple the consequent DC bias voltage from the output signal. On the basis of form, the global market is classified into piezoelectric charge (PE) accelerometers and IEPE accelerometers.
IEPE stands for Integrated Electronics Piezo Electric and defines a class of accelerometer that has built-in electronics. More precisely, it defines a class of accelerometer with low-impedance output electronics that operates on a two-wire constant-current supply and provides a voltage output superimposed on a DC bias voltage. On the basis of material, the global market is classified into single crystal (quartz and Rochelle salt) and ceramic materials. On the basis of application, the global market is classified into aerospace and defense, automotive, pharmaceuticals and chemicals, semiconductors and electronics, energy/power, general industrial, and other. Aerospace comprises modal testing, wind tunnel, and shock tube instrumentation; landing gear hydraulics; rocketry; and ejection systems.
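As a purely illustrative sketch (the sensitivity, bias level, and sample values below are assumptions, not figures from this report), the following Python snippet shows the basic signal-conditioning arithmetic for an IEPE-type output: estimate and remove the DC bias voltage, then scale the remaining signal by the sensor's sensitivity.

def voltage_to_acceleration(samples_v, bias_v=None, sensitivity_mv_per_g=100.0):
    """Convert raw IEPE output voltages (in volts) to acceleration in g."""
    if bias_v is None:
        # Estimate the DC bias as the mean of the record (assumes a zero-mean vibration signal).
        bias_v = sum(samples_v) / len(samples_v)
    return [(v - bias_v) * 1000.0 / sensitivity_mv_per_g for v in samples_v]

raw = [12.01, 12.05, 11.97, 12.00, 12.03]   # volts: roughly a 12 V bias plus vibration
print(voltage_to_acceleration(raw))         # accelerations in g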
Geographically, the global market is segmented into North America (USA, Canada and Mexico), Europe (Germany, France, England, Russia and Italy), Asia Pacific (China, Japan, Korea, India and Southeast Asia), South America, Middle East and Africa. The key players are PCB Piezotronics, Meggitt Sensing Systems, Bruel and Kjaer, Honeywell, KISTLER, Measurement Specialties, and Dytran Instruments.
Hexa Research is a market research and consulting organization, offering industry reports, custom research and consulting services to a host of key industries across the globe. We offer comprehensive business intelligence in the form of industry reports which help our clients obtain clarity about their business environment and enable them to undertake strategic growth initiatives.
More information @ www.hexaresearch.com
Company Name: Hexa Research
Contact Person: Ryan Shaw
Address: Felton Office Plaza, 6265 Highway 9
Country: United States |
Class Penny Quilt
Main Subject Area: Social Studies
Additional Subjects: Language Arts
Duration of Lesson: 45 minutes
Additional Subject Area Standard(s):
Students will appreciate similarities and differences among their peers.
Coin Facts - Large Cents- http://www.coinfacts.com/large_cents/large_cents.html
Drawing paper cut into squares (size can be determined by teacher)
A circle pattern (about 3 centimeters in diameter)
Pennies (enough for each student to be able to trace)
Coins Used in Lesson:
Grade Level(s): K-2
1. Explain to your class that together they are going to make a type of penny quilt. Each student will make a square that will be a part of a class quilt.
2. Each student will need to think of something that happened in their life that was very important to them. They can brainstorm different ideas together as a class. This would be a good homework assignment to do with their family at home.
3. Once your students have decided on their “significant event” they need to determine the year it happened. On their paper, the students need to write their event.
4. In the center of their quilt piece, have your students trace a circulating penny and date it with the year of the significant event.
5. Then, the students can use a circle pattern about the size of a large cent (about 3 centimeters in diameter) to create a design on their quilt square.
6. Each student can share with the class their significant event and the year it happened. The “penny quilt” can be displayed in the class.
Assessment / Evaluation:
Differentiated Learning Options: |
Protecting Children's Health During and After Natural Disasters
Children’s Health in the Aftermath of Floods
Children are different from adults. They may be more vulnerable to chemicals and organisms they are exposed to in the environment because:
- Children’s nervous, immune, digestive, and other bodily systems are still developing and are more easily harmed;
- Children eat more food, drink more fluids, and breathe more air than adults in proportion to their body size – so it is important to take extra care to ensure the safety of their food, drink and air;
- The way children behave – such as crawling and placing objects in their mouths – can increase their risk of exposure to chemicals and organisms in the environment.
Choose from the topics below to learn more about potential hazards to children's health after floods:
- Carbon Monoxide
- Contaminated Water
- Extreme Heat
- Household Items Contaminated by Floodwaters
- Other Flood Topics
- Clinician Recommendations Regarding Return of Children to Areas Impacted by Flooding and/or Hurricanes: A Joint Statement from the Pediatric Environmental Health Specialty Units and the American Academy of Pediatrics (PDF) (3 pp, 73K)
- EPA's Flooding Web Page
- Insect Repellent
- Volcanic Ash
Also, find Pediatric Environmental Health Specialty Units (PEHSUs) in your area, learn more about PEHSUs, or download a copy of this information (PDF) (5 pp, 307K, About PDF).
After homes have been flooded, moisture can remain in drywall, wood furniture, cloth, carpet, and other household items and surfaces and can lead to mold growth. Exposure to mold can cause reactions ranging from hay-fever-like symptoms (such as a stuffy nose; red, watery, or itchy eyes; and sneezing) to asthma attacks. It is important to dry water-damaged areas and items within 24-48 hours to prevent mold growth. Buildings wet for more than 48 hours will generally contain visible and extensive mold growth.
Some children are more susceptible than others to mold, especially those with allergies, asthma, and other respiratory conditions. To protect your child from mold exposure, clean smooth, hard surfaces such as metal and plastics with soap and water and dry them thoroughly. Items made of more absorbent materials that were damaged by flood water cannot be cleaned and should be discarded; these include paper, cloth, wood, upholstery, carpets, padding, curtains, clothes, stuffed animals, etc.
If there is a large amount of mold, you may want to hire professional help to clean up the mold. If you decide to do the cleanup yourself, please remember:
- Clean and dry hard surfaces such as showers, tubs, and kitchen countertops.
- If something is moldy, and can't be cleaned and dried, throw it away.
- Use a detergent or a cleaner that kills germs.
- Do not mix cleaning products together or add bleach to other chemicals.
- Wear an N-95 respirator, goggles, gloves (so that you don't touch mold with your bare hands), long pants, a long-sleeved shirt, and boots or work shoes.
Homes or apartments that have sustained heavy water damage will be extremely difficult to clean and will require extensive repair or complete remodeling. We strongly advise that children not stay in these buildings. Find more mold resources or read EPA's brochure, "Flood Cleanup and the Air in Your Home (PDF)" (15 pp, 1.1MB, About PDF).
NEVER use portable generators indoors! Place generators outside and as far away from buildings as possible. Do not put portable generators on balconies or near doors, vents, or windows and do not use them near where you or your children are sleeping. Due to loss of electricity, gasoline- or diesel-powered generators may be used in the aftermath of floods. These devices release carbon monoxide, a colorless, odorless and deadly gas. Simply opening doors and windows or using fans will not prevent carbon monoxide buildup in the home or in partially enclosed areas such as a garage. In 2001 and 2002, an average of nearly 1,000 people died from non-fire-related carbon monoxide poisoning, and 64% of nonfatal carbon monoxide exposures occurred in the home.
If your children or anyone else in your family starts to feel sick, dizzy or weak or experiences a headache, chest pain or confusion, get to fresh air immediately and seek medical care as soon as possible. Your child’s skin under the fingernails may also turn cherry-red if he/she has been exposed to high levels of carbon monoxide. Fetuses and infants are especially vulnerable to the life-threatening effects of carbon monoxide.
Install a carbon monoxide detector that is Nationally Recognized Testing Laboratory (NRTL) approved (such as UL). These are generally available at local hardware stores. Carbon monoxide is slightly lighter than air, so detectors should be placed closer to the ceiling. Detectors should also be placed close enough to sleeping areas to be heard by sleeping household members.
Learn more about carbon monoxide from the National Institute for Occupational Safety and Health, the Centers for Disease Control and Prevention, and FIRST ALERT®.
While all people need safe drinking water, it is especially important for children because they are more vulnerable to harm from contaminated water. If a water source may be contaminated with flood waters, children, pregnant women and nursing mothers should drink only bottled water, which should also be used to mix baby formula and for cooking. We also recommend you sponge bathe your children with warm bottled water until you are certain your tap water is safe to drink.
Your child may or may not show symptoms or become ill from swallowing small amounts of contaminated water. Symptoms can vary by contaminant. If your child drinks water contaminated with disease-causing organisms, he/she may come down with symptoms similar to the “stomach flu.” These include stomach ache, nausea, vomiting, and diarrhea, and may cause dehydration.
Some contaminants, such as pesticides and gasoline, may cause the water to smell and taste strange, and others such as lead and disease-causing organisms may not be detectable. Drinking water contaminated with chemicals such as lead or gasoline may not cause immediate symptoms or cause your child to become ill but could still potentially harm your child’s developing brain or immune system.
Because you cannot be sure if the water is safe until private wells are professionally tested or city water is certified as safe by local officials, we urge parents to take every precaution to make sure their child’s drinking water is safe.
If you have a flooded well, do NOT turn on the pump, and do NOT flush the well with water. Contact your local or state health department or agriculture extension agent for specific advice on disinfecting your well. View more information on how to manage a flooded well.
Your public water system or local health agency will inform you if you need to boil water prior to using it for drinking and cooking. View additional information about emergency disinfection of drinking water.
Tap water that has been brought to a rolling boil for at least 1 minute will kill disease-causing organisms. Boiling will not remove many potentially harmful chemicals, and may actually increase concentrations of heavy metals (including lead), which can be harmful to a child’s developing immune system. Chemically treating tap water with either chlorine or iodine will kill many disease-causing organisms, but will not remove harmful chemicals or heavy metals.
Household Items Contaminated by Floodwaters
Drinking Water Containers: Clean thoroughly with soap and water, then rinse. For gallon-sized containers, add approximately 1 teaspoon of bleach to a gallon of water to make a bleach solution. Cover the container and agitate the bleach solution thoroughly, allowing it to contact all inside surfaces. Cover and let stand for 30 minutes, then rinse with potable water.
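The one-teaspoon-per-gallon ratio above scales linearly with container size. The short sketch below is only a convenience illustration of that arithmetic using standard US kitchen measures; the function names are made up for the example.

```python
# Illustrative only: scale the "1 teaspoon of bleach per gallon of water"
# ratio from the guidance above to other container sizes.

TSP_BLEACH_PER_GALLON = 1.0   # ratio from the guidance above
LITERS_PER_GALLON = 3.785

def bleach_teaspoons(container_gallons: float) -> float:
    return TSP_BLEACH_PER_GALLON * container_gallons

def bleach_teaspoons_for_liters(container_liters: float) -> float:
    return bleach_teaspoons(container_liters / LITERS_PER_GALLON)

print(bleach_teaspoons(5.0))                        # 5-gallon container -> 5.0 tsp
print(round(bleach_teaspoons_for_liters(2.0), 2))   # 2-liter bottle -> ~0.53 tsp
```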
Kitchenware and Utensils: In general, metal and glazed ceramic that are thoroughly washed and dried can be sanitized and kept. Follow local public health guidance on effective and safe sanitation procedures. Wood items must be thrown away, as these items can absorb contaminants or grow mold from the exposure to flood water and they cannot be properly sanitized.
Children's Toys and Baby items: Throw away ALL soft or absorbent toys because it is impossible to clean them and they could harm your child. Throw away ALL baby bottles, nipples, and pacifiers that have come in contact with flood waters or debris.
Other Flood Topics
Teenagers: Teens are still growing and developing, especially their reproductive, nervous and immune systems. Teens are less likely to understand dangers and may underestimate the dangers of certain situations, or they may be reluctant to voice their concerns about potential dangers. Whenever possible, teens should not participate in post-flood clean-up that would expose them to contaminated water, mold and hazardous chemicals. Older teens may help adults with minor clean-ups if they wear protective gear including goggles, heavy work gloves, long pants, shirts, socks, boots and a properly fitting N-95 respirator.
Older Adults and People Living with Chronic Diseases: Flooding often leads to the development of micro-organisms and the release of dangerous chemicals in the air and water. Older adults and people living with chronic diseases are especially vulnerable to these contaminants.
Bleach: Household bleach contains chlorine, a very corrosive chemical that can be harmful if swallowed or inhaled. It is one of the most common cleaners accidentally swallowed by children. Children – especially those with asthma – should not be in the room while these products are being used. Call Poison Control at (800) 222-1222 immediately in case of poisoning.
Formerly Flooded or Debris-filled Areas: Children in these areas may be at risk of exposure to dirt and debris that may have been contaminated with hazardous chemicals like lead, asbestos, oil and gasoline. Children can be exposed by direct contact through their skin, by breathing in dust particles or fumes, or by putting their hands in their mouths.
- Mosquitoes and Disease-Causing Pests: Flood water may increase the number of mosquitoes and other disease-causing pests. To protect your child, ensure that they use insect repellents containing up to 30% DEET, picaridin, or oil of lemon eucalyptus. The American Academy of Pediatrics recommends that DEET not be used on infants less than 2 months of age and that oil of lemon eucalyptus not be used on children under 3 years of age. Other ways to protect children include staying indoors between dusk and dawn, wearing light-colored, long-sleeved shirts and pants, covering baby carriages and playpens with mosquito netting, and emptying standing water from flower pots and other containers.
Pediatric Environmental Health Specialty Units
To access experts on children’s environmental health issues related to flooding, please contact the Pediatric Environmental Health Specialty Units in your area:
The following links exit the site.
Region 7 - Iowa, Kansas, Missouri, Nebraska and nine Tribal Nations – (800) 421-9916
Region 10 - Alaska, Idaho, Oregon, Washington, and Native Tribes – (877) 543-2436
Heat-related illnesses are common, yet preventable on hot days. Children and pregnant women need to take extra precautions to avoid overheating on days of extreme heat. Dehydration, heat stroke, and other heat illnesses may affect a child or pregnant woman more severely than the average adult. Download a copy of this information (PDF) (2 pp, 80K, About PDF).
Why are children more susceptible to extreme heat?
- Physical characteristics – Children have a smaller body mass to surface area ratio than adults, making them more vulnerable to heat-related morbidity and mortality. Children are more likely to become dehydrated than adults because they can lose more fluid quickly.
- Behaviors – Children play outside more than adults, and they may be at greater risk of heat stroke and exhaustion because they may lack the judgment to limit exertion during hot weather and to rehydrate themselves after long periods of time in the heat. There are also regular reports of infants dying when left in unattended vehicles, which suggests a low awareness of the dangers of heat events.
How do I know if my child is dehydrated?
- Decreased physical activity
- Lack of tears when crying
- Dry mouth
- Irritability and fussiness
What should I do if my child has become dehydrated?
- Have the child or infant drink fluid replacement products
- Allow for rehydration to take a few hours, over which children should stay in a cool, shaded area and sip fluids periodically
- Call your doctor if symptoms do not improve or if they worsen
How do I know if my child has suffered a heat stroke?
Heat stroke, a condition in which the body becomes overheated in a relatively short span of time, can be life-threatening and requires immediate medical attention.
- Skin is flushed, red and dry
- Little or no sweating
- Deep breathing
- Dizziness, headache, and/or fatigue
- Less urine is produced, of a dark yellowish color
- Loss of consciousness
What should I do if my child has suffered a heat stroke?
- Immediately remove child from heat and place in a cool environment
- Place child in bath of cool water and massage skin to increase circulation (do not use water colder than 60°F, which may restrict blood vessels)
- Take child to hospital or doctor as soon as possible
How can children be protected from the effects of extreme heat?
- Hydration – Make sure children are drinking plenty of fluids while playing outside, especially if they are participating in sports or rigorous physical activity. Fluids should be drunk before, during and after periods of time in extreme heat.
- Staying indoors – Ideally, children should avoid spending time outdoors during periods of extreme heat. Playing outside in the morning or evenings can protect children from dehydration or heat exhaustion. Never leave a child in a parked car, even if the windows are open.
- Light clothing – Children should be dressed in light, loose-fitting clothes on extremely hot days. Breathable fabrics such as cotton are ideal because sweat can evaporate and cool down the child’s body.
How do I care for my infant during hot weather?
- Check your baby’s diaper for concentrated urine, which can be a sign of dehydration.
- If your infant is sweating, he or she is too warm. Remove him or her from the sun immediately and find a place for the baby to cool down.
- Avoid using a fan on or near your baby; it dehydrates them faster.
- A hat traps an infant’s body heat and should only be worn in the sun to avoid sunburn.
- Never leave an infant in a parked car, even if the windows are open.
Why are pregnant women especially at risk during periods of extreme heat?
An increase in the core body temperature of a pregnant woman may affect the fetus, especially during the first trimester.
How can pregnant women protect themselves from the effects of extreme heat?
- Wear light loose fitting clothing
- Stay hydrated by drinking six to eight glasses of water a day
- Avoid caffeine, salt, and alcohol
- Balance fluids by drinking beverages with sodium and other electrolytes
- Limit midday excursions when temperatures are at their highest
- Call doctor or go to emergency room if woman feels dizzy, short of breath, or lightheaded
Wildfires expose children to a number of environmental hazards, e.g., fire, smoke, psychological conditions, and the byproducts of combustion. After a wildfire, children may be exposed to a different set of environmental hazards involving not only their homes, but also nearby structures and land.
- Fact Sheets on Health Risks of Wildfires for Children, from PEHSU
- Wildfire Smoke: A Guide for Public Health Officials (PDF) (53 pp, 2MB)
Volcanic ash consists of tiny pieces of rock and glass that are spread over large areas by wind. During volcanic ash fall, people should take measures to avoid unnecessary exposure to airborne ash and gases. View basic information about volcano safety.
Short-term exposure to ash usually does not cause significant health problems for the general public, but special precautions should be taken to protect susceptible people such as infants and children. Most volcanic gases such as carbon dioxide and hydrogen sulfide blow away quickly. Sulfur dioxide is an irritant volcanic gas that can cause the airways to narrow, especially in people with asthma. Precaution should be taken to ensure that children living close to the volcano or in low-lying areas (where gases may accumulate) are protected from respiratory and eye irritation.
While children face the same health problems from volcanic ash particles suspended in the air as adults (namely respiratory problems and irritation of the nose, throat, and eyes), they may be more vulnerable to exposure due to their smaller physical size, developing respiratory systems, and decreased ability to avoid unnecessary exposure. Small volcanic ash particles - those less than 10 micrometers in diameter - pose the greatest health concern because they can pass through the nose and throat and get deep into the lungs. This size range includes fine particles, with diameters less than 2.5 micrometers, and coarse particles, which range in size from 2.5 to 10 micrometers in diameter. Particles larger than 10 micrometers do not usually reach the lungs, but they can irritate the eyes, nose, and throat. Volcanic ash may exacerbate the symptoms of children suffering from existing respiratory illnesses such as asthma, cystic fibrosis, or tuberculosis.
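The size thresholds described above map directly onto the categories used in particle monitoring. The snippet below is a simple illustration of that classification; the 2.5 and 10 micrometer cutoffs come from the text, while the function name and labels are just for demonstration.

```python
# Illustrative sketch: classify ash particles by diameter using the
# 2.5 and 10 micrometer cutoffs described in the text.

def classify_ash_particle(diameter_um: float) -> str:
    if diameter_um < 2.5:
        return "fine (can reach deep into the lungs)"
    elif diameter_um <= 10.0:
        return "coarse (can pass the nose and throat into the lungs)"
    else:
        return "large (mainly irritates the eyes, nose, and throat)"

for d in (1.0, 4.0, 25.0):
    print(f"{d} um -> {classify_ash_particle(d)}")
```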
Precautions for Children if Ash is Present
- Always pay attention to warnings and obey instructions from local authorities.
- Check the Air Quality Index forecast for your area.
- Stay alert to news reports about volcanic ash warnings.
- Keep children indoors.
- Children should avoid running or strenuous activity during ash fall. Exertion leads to heavier breathing which can draw ash particles deeper into the lungs.
- Parents may want to plan indoor games and activities that minimize activity when ash is present.
- If your family must be outdoors when there is ash in the air, they should wear disposable masks. If no disposable masks are available, makeshift masks can be made by moistening fabric such as handkerchiefs to help block out large ash particles.
- Volcanic ash can irritate the skin; long-sleeved shirts and long pants should be worn if children must go outdoors.
- Children should not play in areas where ash is deep or piled-up, especially if they are likely to roll or lie in the ash piles.
- Children should wear glasses instead of contact lenses to avoid eye irritation.
- Create a “clean room” where children sleep and play to help to minimize exposure to ash in indoor air.
- Keep windows and doors closed. Close any vents or air ducts (such as chimneys) that may allow ash to enter the house.
- Run central air conditioners on the "recirculate" option (instead of "outdoor air intake"). Clean the air filter to allow good air flow indoors.
- Avoid vacuuming as it will stir up ash and dust into the air.
- Do not smoke or burn anything (tobacco, candles, incense) inside the home. This will create more indoor pollutants.
- If it is too warm or difficult to breathe inside with the windows closed, seek shelter elsewhere.
- A portable room air filter may be effective to remove particles from the air.
- Choosing to buy an air cleaner is ideally a decision that should be made before a smoke/ash emergency occurs. Going outside to locate an appropriate device during an emergency may be hazardous, and the devices may be in short supply.
- An air cleaner with a HEPA filter, an electrostatic precipitator (ESP), or an ionizing air cleaner may be effective at removing air particles provided it is sized to filter two or three times the room air volume per hour (a worked sizing example appears after this list).
- Avoid ozone generators, personal air purifiers, "pure-air" generators and "super oxygen" purifiers as these devices emit ozone gas into the air at levels that can irritate airways and exacerbate existing respiratory conditions. These devices are also not effective at removing particles from the air.
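The sizing rule mentioned in the list above (two to three room-air volumes per hour) is simple to work through. The sketch below is only an illustration; the room dimensions are example values and the output is the conventional cubic-feet-per-minute airflow figure.

```python
# Illustrative sketch of the sizing rule above: an air cleaner should
# move two to three room volumes of air per hour. Room dimensions are
# example values.

def required_airflow_cfm(length_ft: float, width_ft: float, height_ft: float,
                         air_changes_per_hour: float = 3.0) -> float:
    room_volume_cubic_ft = length_ft * width_ft * height_ft
    cubic_ft_per_hour = room_volume_cubic_ft * air_changes_per_hour
    return cubic_ft_per_hour / 60.0  # cubic feet per minute

# Example: a 12 ft x 10 ft room with an 8 ft ceiling.
print(round(required_airflow_cfm(12, 10, 8), 1))  # ~48 CFM at 3 air changes/hour
```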
Pediatric Environmental Health Specialty Units (PEHSU). To access experts on children's environmental health issues, please contact the Pediatric Environmental Health Specialty Unit (PEHSU) in your area. PEHSU in EPA Region 10 - Alaska, Idaho, Oregon, Washington, and Native Tribes - (877) 543-2436.
For More Information
Anchorage Air Quality Volcano Information
Centers for Disease Control and Prevention (CDC) Key Facts About Volcanic Eruptions
Alaska Division of Homeland Security and Emergency Management: Volcanic Ash Preparedness
Air Quality Index (AIR Now) Local Advisories
U.S. EPA Guide to Air Cleaners in the Home (PDF) (12 pp, 247K)
The Health Hazards of Volcanic Ash: A Guide for the Public (PDF) (10 pp, 545K)
Guidelines on Preparedness Before, During and After an Ashfall (PDF) (10 pp, 499K)
Download a copy of this information (PDF) (2 pp, 137K, About PDF). |
El Niño-Southern Oscillation (ENSO)
General: El Niño episodes reflect periods of exceptionally warm sea-surface temperatures across the eastern tropical Pacific, while La Niña episodes represent periods of below-average sea-surface temperatures across the same region. These episodes typically last approximately 9-12 months. (The original figure shows sea-surface temperature and departure maps for December-February during strong El Niño and La Niña episodes.)
Detailed: During a strong El Niño, ocean temperatures can average 2°C-3.5°C (4°F-6°F) above normal between the date line and the west coast of South America. These areas of exceptionally warm water coincide with regions of above-average tropical rainfall. During La Niña, temperatures average 1°C-3°C (2°F-6°F) below normal between the date line and the west coast of South America. This large region of below-average temperatures coincides with the area of well-below-average tropical rainfall.
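As a simple illustration of how such episodes can be flagged from sea-surface temperature data, the sketch below labels an SST departure from normal using the rough magnitudes quoted above. The thresholds, labels, and function name are illustrative assumptions only; operational ENSO classification relies on specific index regions (such as Niño 3.4) and multi-month running means.

```python
# Illustrative only: label an eastern tropical Pacific SST anomaly
# (departure from normal, degrees C) using the rough magnitudes in the
# text. Operational definitions use specific index regions and
# multi-month running means.

def classify_enso(sst_anomaly_c: float) -> str:
    if sst_anomaly_c >= 2.0:
        return "strong El Nino conditions"
    elif sst_anomaly_c > 0.5:
        return "warm (El Nino-leaning) conditions"
    elif sst_anomaly_c <= -1.0:
        return "La Nina conditions"
    elif sst_anomaly_c < -0.5:
        return "cool (La Nina-leaning) conditions"
    return "near-neutral conditions"

for anomaly in (2.8, 0.9, 0.1, -1.5):
    print(anomaly, "->", classify_enso(anomaly))
```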
The tropical rainfall, wind, and air pressure patterns over the equatorial Pacific Ocean are most strongly linked to the underlying sea-surface temperatures, and vice versa, during December-April. During this period, El Niño and La Niña conditions are typically at their strongest and have the strongest impacts on U.S. weather.
El Niño and La Niña episodes typically last approximately 9-12 months. They often begin to form during June-August, reach peak strength during December-April, and then decay during May-July of the next year. However, some prolonged episodes have lasted 2 years and even as long as 3-4 years. While their periodicity can be quite irregular, El Niño and La Niña occur every 3-5 years on average.
The Southern Oscillation and its Link to the ENSO Cycle
The fluctuations in ocean temperatures during El Niño and La Niña are accompanied by even larger-scale fluctuations in air pressure known as the Southern Oscillation. The negative phase of the Southern Oscillation occurs during El Niño episodes, and refers to the situation when abnormally high air pressure covers Indonesia and the western tropical Pacific and abnormally low air pressure covers the eastern tropical Pacific. In contrast, the positive phase of the Southern Oscillation occurs during La Niña episodes, and refers to the situation when abnormally low air pressure covers Indonesia and the western tropical Pacific and abnormally high air pressure covers the eastern tropical Pacific. These opposite phases of the Southern Oscillation are shown above. |
In this course we will discuss numerous educational concepts. Throughout the course we will focus on answering the questions that are most important in deciding whether or not you want to pursue a career as a teacher:
This is an excellent course for anyone interested in learning more about the field of education. At the end of this course students will know for sure if education is the profession for them. For parents, it will help them to understand the educational system, how it works, and the positive approaches to develop the best educational system possible for their child.
Education 1100 provides an introduction to teaching as a profession in the American educational system. It offers a variety of perspectives on education, including historical, philosophical, social, legal, and ethical issues in a diverse society. The course includes organizational structure and school governance.
Students will need to complete 15 hours of field work in a K-12 Public School and write three reflection papers after completing every 5 hours of observation. In addition, students will take online tests after reading the text, lecture, and power points developed to increase understanding; be involved in online discussions; write a research paper on the topic of their choice, and develop their philosophy of education by answering four basic questions.
A fifteen clock hour field experience is required.
If you are considering enrolling for the internet version of Education 1100, be sure to review the helpful links below regarding COD Online, including: |
A wire has a resistance R. If the length of a wire is doubled and the radius is doubled, what is the new resistance in terms of R?
The resistance of a resistor is given by the formula R = ρ*L/A, where ρ is the resistivity of the material the resistor is made of, L is the length of the resistor, and A is the cross-sectional area.
In the problem, the resistance of the wire is initially R = ρ*L/(π*r²). The length of the wire is doubled and so is the radius of its cross-section, which increases the cross-sectional area to four times the initial area: R' = ρ*(2L)/(π*(2r)²) = ρ*(2L)/(4*π*r²) = ρ*L/(π*r²) * (2/4).
The new resistance of the wire is R*(2/4) = R/2.
The change in the dimensions of the wire reduces the resistance to half the initial value.
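A quick numerical check of this result is shown below. The resistivity, length, and radius values are arbitrary placeholders chosen only to confirm that doubling both dimensions halves the resistance; they are not part of the original question.

```python
import math

# Quick numerical check: doubling both the length and the radius of a
# wire halves its resistance. rho, L, r are arbitrary example values.
def resistance(rho: float, length: float, radius: float) -> float:
    return rho * length / (math.pi * radius ** 2)

rho, L, r = 1.68e-8, 1.0, 1e-3        # copper-like resistivity, 1 m, 1 mm
R_initial = resistance(rho, L, r)
R_doubled = resistance(rho, 2 * L, 2 * r)
print(R_doubled / R_initial)           # 0.5, i.e. the new resistance is R/2
```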
An aneurysm is an abnormal widening or ballooning of a portion of an artery due to weakness in the wall of the blood vessel.
A thoracic aortic aneurysm occurs in the part of the body's largest artery (the aorta) that passes through the chest.
Aortic aneurysm - thoracic; Syphilitic aneurysm; Aneurysm - thoracic aortic
The most common cause of a thoracic aortic aneurysm is hardening of the arteries (atherosclerosis). This condition is more common in people with high cholesterol, long-term high blood pressure, or who smoke.
Other risk factors for a thoracic aneurysm include:
Aneurysms develop slowly over many years. Most patients have no symptoms until the aneurysm begins to leak or expand. The aneurysm may be found only when imaging tests are done for other reasons.
Symptoms often begin suddenly when:
If the aneurysm presses on nearby structures, the following symptoms may occur:
Other symptoms may include:
The physical examination is often normal unless a rupture or leak has occurred.
Most thoracic aortic aneurysms are detected by tests performed for other reasons, usually a chest x-ray, echocardiogram, or a chest CT scan or MRI. A chest CT scan shows the size of the aorta and the exact location of the aneurysm.
An aortogram (a special set of x-ray images made when dye is injected into the aorta) can identify the aneurysm and any branches of the aorta that may be involved.
There is a risk that the aneurysm may open up (rupture) if you do not have surgery to repair it.
The treatment depends on the location of the aneurysm. The aorta is made of three parts:
For patients with aneurysms of the ascending aorta or aortic arch:
For patients with aneurysms of the descending thoracic aorta:
The long-term outlook for patients with thoracic aortic aneurysm depends on other medical problems, such as heart disease, high blood pressure, and diabetes, which may have caused or contributed to the condition.
Serious complications after aortic surgery can include:
Death soon after the operation occurs in 5 - 10% of patients.
Complications after aneurysm stenting include damage to the blood vessels supplying the leg, which may require another operation.
Tell your doctor if you have:
To prevent atherosclerosis:
When using Word XP lists are available for you to create. Learn how to make them stand out in this free lesson.
In our often busy lives, we sometimes need help remembering all of the things we have to do each day. Making lists of tasks, purchases, bills, and other important items helps to keep the day organized. Word XP offers an easy-to-use formatting tool that allows you to create such lists.
Word lets you make two types of lists: bulleted and numbered. Bulleted and numbered lists help to simplify steps or items. Teachers often use bulleted lists to highlight important pieces of their lessons. Manuals often include numbered lists to assist readers in step-by-step instruction.
A bullet is usually a black circle, but it can be any other symbol used to highlight items in a list. Use bullets to list items that do not have to be in any particular order.
Things I need to take on vacation:
Numbers (or letters) are used when information must be in a certain order.
You can use the default bullets and numbering settings by clicking on the appropriate button on the Formatting toolbar.
To learn more about creating numbered and bulleted lists, visit our Word XP tutorial. |
Radioactive iodine, I-131, can both cause and treat thyroid cancer. This and other isotopes were recognized as a cancer risk in the survivors of the nuclear bombs at Hiroshima and Nagasaki. Researchers found that higher doses can be used to destroy malignant thyroid tissue that may remain after surgical treatment. I-131 is produced in nuclear medicine reactors, and has a half-life of about 8 days. It decays into the inert gas xenon by emitting an electron and a gamma ray.
Radiation therapy for most cancers is done with an external beam. Thyroid cancer is exceptional because the tissue absorbs most of the iodine in the diet. Thus, radioactive iodine can deliver energy to the tumor cells very efficiently. The electrons are absorbed mostly by the tumor, while the gamma rays have a longer range.
A recent article reports that the optimum dose of radioiodine is a topic for discussion. There are two ways to calculate it: (1) bone-marrow-dose-limited, or (2) lesion-based. Dosages from 1.1 to 21.4 GBq (gigabecquerels) have been reported. The becquerel is a unit of radioactive decay rate, equal to one disintegration per second.
Radioiodine therapy is simple and painless; the patient simply swallows the prepared dose. Most of it will be absorbed by thyroid tissue. The part that is not absorbed will be excreted in the urine over a period of about 2 days. Half of the absorbed dose will decay over the first 8 days. Over the next 8 days, half of the remaining amount will decay. This process continues until the last atom has decayed: the radioactive decay rate drops by half over each time span of the half-life (8 days for this isotope).
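The decay arithmetic described above follows the standard half-life relationship. The short sketch below illustrates it for I-131's roughly 8-day half-life; the starting activity is an arbitrary example figure, not a recommended dose.

```python
# Illustrative sketch of exponential decay with an 8-day half-life.
# The 5.0 GBq starting activity is an arbitrary example value, not a
# dosing recommendation.

HALF_LIFE_DAYS = 8.0

def remaining_activity(initial_gbq: float, days: float) -> float:
    return initial_gbq * 0.5 ** (days / HALF_LIFE_DAYS)

initial = 5.0  # GBq, example only
for d in (0, 8, 16, 24, 80):
    print(f"day {d:3d}: {remaining_activity(initial, d):.4f} GBq")
# After ten half-lives (80 days) less than 0.1% of the activity remains.
```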
The patient will emit enough radiation to set off security detectors at airports and federal buildings for some months after treatment. Pregnancy is not recommended for at least six months to one year. Breast feeding is not allowed after radioiodine therapy. The patient should sleep alone and avoid prolonged contact with others for at least three or four days.
Clearly, there are risks to high doses of I-131. When you are radioactive enough to endanger others, you are subjecting your entire body to ionizing radiation. However, metastatic cancer is a serious risk as well. The authors of Reference 1 note there is evidence for both over-treatment and under-treatment, and more attention is needed to individualized therapy.
1. Lassmann M et al, “Dosimetry and thyroid cancer: the individual dosage of radioiodine”, Endocrine-Related Cancer 2010; 17:R161-R172.
2. Radiation information from the Centers for Disease Control: http://www.bt.cdc.gov/radiation/pdf/measurement.pdf
3. More information online: |
Researchers have developed a new way to understand how decomposing plants and animals contribute to ecosystems.
Eric Benbow, a forensic entomologist and microbial ecologist at Michigan State University who led the new research, first defined the necrobiome, the collective organisms both big and small that help plants and animals decay, in 2013. Together with his collaborators, they established a baseline of organisms that play key roles in carrion decomposition.
Now, a new paper in the journal Ecological Monographs establishes a necrobiome encyclopedia to bridge different aspects of ecological theory and also promote the importance of death in ecosystems. The research also effectively establishes the same framework to examine decaying plant and animal communities while acknowledging their key differences and mechanisms.
This detailed study covers the spectrum of decomposition processes, from decaying seaweed to a catastrophe, such as an entire animal herd dying en masse, Benbow says.
“Decomposer communities are critical, yet there’s no standard framework to conceptualize their complex and dynamic interactions across both plant and animal necromass, which limits our comprehensive understanding of decomposition,” he says.
“Our findings also have implications for defining and testing paradigms related to nutrient recycling, gene flow, population dynamics, and other ecosystem processes at the frontier of ecological research,” Benbow explains.
Discovering how decomposition communities interact with each other and how they drive nutrient and carbon cycling could lead to fundamental shifts in ecosystem science, Benbow adds.
A recent New York Times article featured an area’s transformation when lightning killed 300 reindeer in Norway. The carcasses drew carnivores, birds, maggots, and microbes. Jen Pechal, a forensic entomologist and microbial ecologist whom the article quoted, called the Norwegian site a hyperlocal “decomposition island,” which created massive diversity in a short span of time.
One change in the area resulted in greater plant diversity. Birds feasting on the carrion dropped feces filled with crowberry seeds. The reindeer remains created the perfect soil for crowberry seedlings—an important food source for many animals in the region—to flourish.
Promoting the necrobiome lexicon in the scientific community also can open the door for new areas of research. Take, for example, the two seemingly unrelated concepts of distilling liquor and food security. Distilleries generate mash as a waste product. Rather than seeing a waste byproduct that needs to be disposed of, entrepreneurs could view the mash through a lens of new product development.
There are insects that thrive on decaying mash, consuming and converting it, and then they can be dried and transformed into animal feed. Or, in many countries outside the US, the insects themselves could be processed for human consumption.
“Our research and this study establish a common language and conceptual tools that can lead to new product discovery,” Benbow says. “We’re eliminating organic matter and turning it into a value-added product that can add to the world-food cycle. Understanding the species and the mechanisms, which are essentially recycled, can contribute to establishing food security.”
Additional researchers from Australian National University, the USDA, the University of Georgia, the University of Idaho, Texas A&M University, and Mississippi State University contributed to the work.
Source: Michigan State University |
UNIVERSITY OF BIRMINGHAM
Fresh evidence that water can change from one form of liquid into another, denser liquid, has been uncovered by researchers at the University of Birmingham and Sapienza Università di Roma.
This ‘phase transition’ in water was first proposed 30 years ago in a study by researchers from Boston University. Because the transition is predicted to occur at supercooled conditions, however, confirming its existence has been a challenge: at these low temperatures, water really does not want to be a liquid; instead, it wants to rapidly become ice. Because of its hidden status, much is still unknown about this liquid-liquid phase transition, unlike the everyday examples of phase transitions in water between a solid or vapour phase and a liquid phase.
This new evidence, published in Nature Physics, represents a significant step forward in confirming the idea of a liquid-liquid phase transition first proposed in 1992. Francesco Sciortino, now a professor at Sapienza Università di Roma, was a member of the original research team at Boston University and is also a co-author of this paper.
The team has used computer simulations to help explain what features distinguish the two liquids at the microscopic level. They found that the water molecules in the high-density liquid form arrangements that are considered to be “topologically complex”, such as a trefoil knot (think of the molecules arranged in such a way that they resemble a pretzel) or a Hopf link (think of two links in a steel chain). The molecules in the high-density liquid are thus said to be entangled.
In contrast, the molecules in the low-density liquid mostly form simple rings, and hence the molecules in the low-density liquid are unentangled.
Andreas Neophytou, a PhD student at the University of Birmingham with Dr Dwaipayan Chakrabarti, is lead author on the paper. He says: “This insight has provided us with a completely fresh take on what is now a 30-year old research problem, and will hopefully be just the beginning.”
The researchers used a colloidal model of water in their simulation, and then two widely used molecular models of water. Colloids are particles that can be a thousand times larger than a single water molecule. By virtue of their relatively bigger size, and hence slower movements, colloids are used to observe and understand physical phenomena that also occur at the much smaller atomic and molecular length scales.
Dr Chakrabarti, a co-author, says: “This colloidal model of water provides a magnifying glass into molecular water, and enables us to unravel the secrets of water concerning the tale of two liquids.”
Professor Sciortino says: “In this work, we propose, for the first time, a view of the liquid-liquid phase transition based on network entanglement ideas. I am sure this work will inspire novel theoretical modelling based on topological concepts.”
The team expect that the model they have devised will pave the way for new experiments that will validate the theory and extend the concept of ‘entangled’ liquids to other liquids such as silicon.
Pablo Debenedetti, a professor of chemical and biological engineering at Princeton University in the US and a world-leading expert in this area of research, remarks: “This beautiful computational work uncovers the topological basis underlying the existence of different liquid phases in the same network-forming substance.” He adds: “In so doing, it substantially enriches and deepens our understanding of a phenomenon that abundant experimental and computational evidence increasingly suggests is central to the physics of that most important of liquids: water.”
Christian Micheletti, a professor at International School for Advanced Studies in Trieste, Italy, whose current research interest lies in understanding the impact of entanglement, especially knots and links, on the static, kinetics and functionality of biopolymers, remarks: “With this single paper, Neophytou et al. made several breakthroughs that will be consequential across diverse scientific areas. First, their elegant and experimentally amenable colloidal model for water opens entirely new perspectives for large-scale studies of liquids. Beyond this, they give very strong evidence that phase transitions that may be elusive to traditional analysis of the local structure of liquids are instead readily picked up by tracking the knots and links in the bond network of the liquid. The idea of searching for such intricacies in the somewhat abstract space of pathways running along transient molecular bonds is a very powerful one, and I expect it will be widely adopted to study complex molecular systems.”
Sciortino adds: “Water, one after the other, reveals its secrets! Dream how beautiful it would be if we could look inside the liquid and observe the dancing of the water molecules, the way they flicker, and the way they exchange partners, restructuring the hydrogen bond network. The realisation of the colloidal model for water we propose can make this dream come true.”
The research was supported by the Royal Society via International Exchanges Award, which enabled the international collaboration between the researchers in the UK and Italy, the EPSRC Centre for Doctoral Training in Topological Design and the Institute of Advanced Studies at the University of Birmingham, and the Italian Ministero Istruzione Università Ricerca – Progetti di Rilevante Interesse Nazionale.
This Alcohol: Grades 9-12 lesson plan also includes:
Two activities ask high schoolers to consider the role of alcohol culture in their lives. First, groups analyze the types of appeals used in newspaper ads for alcoholic drinks and compare those images with what they have observed. Individuals then consider ways to deal with alcohol-related peer pressure.
- Divide the class into seven groups, assign each group a different article to analyze, and have them report their findings to the whole class
- Research the support programs available on campus and in the community including safe ride programs, and post the information in the classroom
- Ask class members to bring in copies of old newspapers and magazines
- Requires copies of the "Tricks of the Trade" worksheet
- Part of the Health Problems series of lessons designed for high schoolers
- The activities ask learners to consider the effects of the alcohol culture as presented in the media |
Our planet may be home to a staggering 1 trillion microbial species – 99.999 percent of which remain unearthed, according to a new study.
Two biologists from Indiana University – associate professor Dr. Jay Lennon and postdoc fellow Dr. Kenneth Locey – created the biggest microorganism database of its kind by using combined animal, plant, and microbial data from government, academic, and citizen science records. The output: an estimate of more than 5.6 million identified species from 35,000 places across the world except Antarctica.
They then applied mathematical scaling laws to predict the number of species at a certain landscape. They discovered that across communities of both microscopic and larger plant/animal groups, the same scaling laws applied: as the number of organisms in a given community increased, the number of species grew.
"Until now, we haven't known whether aspects of biodiversity scale with something as simple as the abundance of organisms. As it turns out, the relationships are not only simple but powerful, resulting in the estimate of upwards of 1 trillion species,” said Locey.
Lennon said estimating the number of species on the planet is among biology’s greatest challenges today, with new genetic sequencing methods only recently offering a large pool of new information.
“We’ve done a pretty good job of cataloguing macrobes . . . but the rate we are exploring new [plants and animals] is slowing down,” he told Christian Science Monitor, also citing that it is only in the last two to three decades that scientists learned how to identify microbes.
Microbes are the most abundant life on Earth, meaning biologists are still missing out on a huge chunk of their population. These species include single-celled organisms like bacteria and archaea and certain types of fungi, with 10,000 varying kinds of bacteria on 1 square centimeter (0.155 square inch) of a human arm at any time.
Moreover, humans greatly rely on these microbes for a rich array of functions, including digestion, nutrient cycling, and clean water. There could be many more roles these organisms play in everyday life but haven’t been recognized as such.
The significant undersampling of microorganisms has led to new efforts in the last few years, including the National Institutes of Health's Human Microbiome Project and the Tara Oceans Expedition's sampling of marine microorganisms.
“[This research] highlights how much of that diversity still remains to be discovered and described,” echoed Simon Malcomber of the National Science Foundation’s Dimensions of Biodiversity, which funded the study. Despite this gap, he added, about 40 percent of the global economy keeps depending on biological resources.
Malcomber pointed to the “sheer diversity” of microorganisms and how little is known about them, encouraging increased efforts to document them and to understand their myriad roles in the world's ecosystems.
There is a gargantuan – if not outright impossible – task ahead of identifying every microbial species on the planet. The international, interdisciplinary initiative Earth Microbiome Project, for example, has catalogued less than 10 million species so far.
Of these recorded species, only around 10,000 have ever been cultivated in a lab, while fewer than 100,000 have classified sequences, Lennon added; their results indicate that 100,000 times more microbes are awaiting discovery.
Canadian professor Dr. Laura Hug said genome-based techniques in the last 20 years will help science dig deeper into microbial DNA. Recently, Dr. Hug also published a paper identifying 1,000 previously undetected microbes – also made possible by a scientific breakthrough.
The findings were published Monday in the journal Proceedings of the National Academy of Sciences. |
Colorado Plateau
The Colorado Plateau, also called the Colorado Plateau Province, is a physiographic region of the Intermontane Plateaus, roughly centered on the Four Corners region of the southwestern United States. The province covers an area of 337,000 km2 (130,000 mi2) within western Colorado, northwestern New Mexico, southern and eastern Utah, and northern Arizona. About 90% of the area is drained by the Colorado River and its main tributaries: the Green, San Juan, and Little Colorado.
The Colorado Plateau is largely made up of deserts, with scattered areas of forests. In the southwest corner of the Colorado Plateau lies the Grand Canyon of the Colorado River. Much of the Plateau's landscape is related, in both appearance and geologic history, to the Grand Canyon. The nickname "Red Rock Country" suggests the brightly colored rock left bare to the view by dryness and erosion. Domes, hoodoos, fins, reefs, goblins, river narrows, natural bridges, and slot canyons are only some of the additional features typical of the Plateau.
The Colorado Plateau has the greatest concentration of national parks in the United States. Among its parks are Grand Canyon National Park, Zion National Park, Bryce Canyon National Park, Capitol Reef National Park, Canyonlands National Park, Arches National Park, Mesa Verde National Park, and Petrified Forest National Park. Among the national monuments are Dinosaur National Monument, Hovenweep National Monument, Wupatki National Monument, Grand Staircase-Escalante National Monument, Natural Bridges National Monument, Canyons of the Ancients National Monument, and Colorado National Monument.
The province is bounded by the Rocky Mountains in Colorado, and by the Uinta Mountains and Wasatch Mountains branches of the Rockies in northern and central Utah. It is also bounded by the Rio Grande Rift, Mogollon Rim and the Basin and Range. Isolated ranges of the Southern Rocky Mountains such as the San Juan Mountains in Colorado and the La Sal Mountains in Utah intermix into the central and southern parts of the Colorado Plateau. It is composed of seven sections:
- Uinta Basin Section
- High Plateaus Section
- Grand Canyon Section
- Canyon Lands Section
- Navajo Section
- Datil-Mogollon Section
- Acoma-Zuni Section
As the name implies, the High Plateaus Section is, on average, the highest section. North-south trending normal faults that include the Hurricane, Sevier, Grand Wash, and Paunsaugunt separate the section's component plateaus. This fault pattern is caused by the tensional forces pulling apart the adjacent Basin and Range province to the west, making this section transitional.
Development of the province has in large part been influenced by structural features in its oldest rocks. Part of the Wasatch Line and its various faults form the western edge of the province. Faults that run parallel to the Wasatch Fault that lies along the Wasatch Range form the boundaries between the plateaus in the High Plateaus Section. The Uinta Basin, Uncompahgre Uplift, and the Paradox Basin were also created by movement along structural weaknesses in the region's oldest rock.
- Awapa Plateau
- Aquarius Plateau
- Kaiparowits Plateau
- Markagunt Plateau
- Paunsaugunt Plateau
- Sevier Plateau
- Fishlake Plateau
- Pavant Plateau
- Gunnison Plateau
- Tavaputs Plateau
Some sources also include the Tushar Mountain Plateau as part of the Colorado Plateau, but others do not. The mostly flat-lying sedimentary rock units that make up these plateaus are found in component plateaus that are between 1500 m (5000 ft) and over 3350 m (11,000 ft) above sea level. A supersequence of these rocks is exposed in the various cliffs and canyons (including the Grand Canyon) that make up the Grand Staircase. Increasingly younger east-west trending escarpments of the Grand Staircase extend north of the Grand Canyon and are named for their color: the Chocolate, Vermilion, White, Grey, and Pink cliffs.
Within these rocks are abundant mineral resources that include uranium, coal, petroleum, and natural gas. Study of the area's unusually clear geologic history (which is laid bare by the arid and semiarid conditions) has greatly advanced the science of geology.
A rain shadow from the Sierra Nevada far to the west and the many ranges of the Basin and Range means that the Colorado Plateau receives 15 to 40 cm (6 to 16 in.) of annual precipitation. Higher areas receive more precipitation and are covered in forests of pine, fir, and spruce.
Though it can be said that the Plateau roughly centers on the Four Corners, Black Mesa in northern Arizona is much closer to the east-west, north-south midpoint of the Plateau Province. Lying southeast of Glen Canyon and southwest of Monument Valley at the north end of the Hopi Reservation, this remote coal-laden highland has about half of the Colorado Plateau's acreage north of it, half south of it, half west of it, and half east of it.
The Ancestral Puebloan People lived in the region from around 2000 to 700 years ago.
A party from Santa Fe led by Fathers Dominguez and Escalante, unsuccessfully seeking an overland route to California, made a five-month out-and-back trip through much of the Plateau in 1776-1777.
U.S. Army Major and geologist John Wesley Powell explored the area in 1869 and 1872 despite having lost one arm in the American Civil War. Using fragile boats and small groups of men the Powell Geographic Expedition charted this largely unknown region of the United States for the federal government.
Construction of the Hoover Dam in the 1930s and the Glen Canyon Dam in the 1960s changed the character of the Colorado River. A dramatically reduced sediment load changed its color from reddish brown (Colorado is Spanish for "colored," referring to its red color) to mostly clear. The apparent green color is from algae on the riverbed's rocks, not from any significant amount of suspended material. The lack of sediment has also starved sand bars and beaches, but an experimental 12-day-long controlled flood from Glen Canyon Dam in 1996 showed substantial restoration. Similar floods are planned for every 5 to 10 years.
One of the most geologically intriguing features of the Colorado Plateau is its remarkable stability. Relatively little rock deformation such as faulting and folding has affected this high, thick crustal block within the last 600 million years or so. In contrast, provinces that have suffered severe deformation surround the plateau. Mountain building thrust up the Rocky Mountains to the north and east and tremendous, earth-stretching tension created the Basin and Range province to the west and south. Sub ranges of the Southern Rocky Mountains are scattered throughout the Colorado Plateau.
The Precambrian and Paleozoic history of the Colorado Plateau is best revealed near its southern end where the Grand Canyon has exposed rocks with ages that span almost 2 billion years. The oldest rocks at river level are igneous and metamorphic and have been lumped together as "Vishnu Basement Rocks"; the oldest ages recorded by these rocks fall in the range 1950 to 1680 million years. An erosion surface on the "Vishnu Basement Rocks" is covered by sedimentary rocks and basalt flows, and these rocks formed in the interval from about 1250 to 750 million years ago: in turn, they were uplifted and split into a range of fault-block mountains. Erosion greatly reduced this mountain range prior to the encroachment of a seaway along the passive western edge of the continent in the early Paleozoic. At the canyon rim is the Kaibab Formation, limestone deposited in the late Paleozoic (Permian) about 270 million years ago.
A 12,000 to 15,000 ft (3700 to 4600 m) high extension of the Ancestral Rocky Mountains called the Uncompahgre Mountains was uplifted and the adjacent Paradox Basin subsided. Almost 4 mi (6.4 km) of sediment from the mountains and evaporites from the sea were deposited (see geology of the Canyonlands area for detail). Most of the formations were deposited in warm shallow seas and near-shore environments (such as beaches and swamps) as the seashore repeatedly advanced and retreated over the edge of proto-North America (for detail, see geology of the Grand Canyon area). The province was probably on a continental margin throughout the late Precambrian and most of the Paleozoic era. Igneous rocks injected millions of years later form a marbled network through parts of the Colorado Plateau's darker metamorphic basement. By 600 million years ago North America had been leveled off to a remarkably smooth surface.
Throughout the Paleozoic Era, tropical seas periodically inundated the Colorado Plateau region. Thick layers of limestone, sandstone, siltstone, and shale were laid down in the shallow marine waters. During times when the seas retreated, stream deposits and dune sands were deposited or older layers were removed by erosion. Over 300 million years passed as layer upon layer of sediment accumulated.
It was not until the upheavals that coincided with the formation of the supercontinent Pangea, beginning about 250 million years ago, that deposits of marine sediment waned and terrestrial deposits came to dominate. In the late Paleozoic and much of the Mesozoic era the region was affected by a series of orogenies (mountain-building events) that deformed western North America and caused a great deal of uplift. Eruptions from volcanic mountain ranges to the west buried vast regions beneath ashy debris. Short-lived rivers, lakes, and inland seas left sedimentary records of their passage. Streams, ponds, and lakes created formations such as the Chinle, Moenave, and Kayenta in the Mesozoic era. Later a vast desert formed the Navajo and Temple Cap formations, and a dry near-shore environment formed the Carmel (see geology of the Zion and Kolob canyons area for details).
The area was again covered by a warm shallow sea when the Cretaceous Seaway opened in late Mesozoic time. The Dakota Sandstone and the Tropic Shale were deposited in the warm shallow waters of this advancing and retreating seaway. Several other formations were also created but were mostly eroded following two major periods of uplift.
The Laramide orogeny closed the seaway and uplifted a large belt of crust from Montana to Mexico, with the Colorado Plateau region being the largest block. Thrust faults in Colorado are thought to have formed from a slight clockwise movement of the region, which acted as a rigid crustal block. The Colorado Plateau Province was uplifted largely as a single block, possibly due to its relative thickness. This relative thickness may be why compressional forces from the orogeny were mostly transmitted through the province instead of compacting it. Pre-existing weaknesses in Precambrian rocks were reactivated by the compression. It was along these ancient faults and other deeply buried structures that many of the province's relatively small, gently inclined flexures (such as anticlines, synclines, and monoclines) formed. Some of the prominent isolated mountain ranges of the Plateau, such as Ute Mountain and the Carrizo Mountains, both near the Four Corners, are cored by igneous rocks that were intruded about 70 million years ago, during the Laramide orogeny.
Minor uplift events continued through the start of the Cenozoic era and were accompanied by some basaltic lava eruptions and mild deformation. The colorful Claron Formation that forms the delicate hoodoos of Bryce Amphitheater and Cedar Breaks was then laid down as sediments in cool streams and lakes (see geology of the Bryce Canyon area for details). The flat-lying Chuska Sandstone was deposited about 34 million years ago; the sandstone is predominantly of eolian origin and locally more than 500 meters thick. The Chuska Sandstone caps the Chuska mountains, and it lies unconformably on Mesozoic rocks deformed during the Laramide orogeny.
Younger igneous rocks form spectacular topographic features. The Henry Mountains, La Sal Range, and Abajo Mountains, ranges that dominate many views in southeastern Utah, are formed around igneous rocks that were intruded between about 31 and 20 million years ago; some igneous intrusions in these mountains form laccoliths, a form of intrusion recognized by Grove Karl Gilbert during his studies of the Henry Mountains. Ship Rock (also called Shiprock), in northwestern New Mexico, and Church Rock and Agathla, near Monument Valley, are erosional remnants of potassium-rich igneous rocks and associated breccias of the Navajo Volcanic Field, produced about 25 million years ago. The Hopi Buttes in northeastern Arizona are held up by resistant sheets of sodic volcanic rocks, extruded about 7 million years ago. More recent igneous rocks are concentrated nearer the margins of the Colorado Plateau. The San Francisco Peaks near Flagstaff, south of the Grand Canyon, are volcanic landforms produced by igneous activity that began in that area about 6 million years ago and continued until 1064 C.E., when basalt erupted at what is now Sunset Crater National Monument. Mount Taylor, near Grants, New Mexico, is a volcanic structure with a history similar to that of the San Francisco Peaks; a basalt flow closer to Grants was extruded only about 3000 years ago (see El Malpais National Monument). These young igneous rocks may record processes in the Earth's mantle that are eating away at the deep margins of the relatively stable block of the Plateau.
Tectonic activity resumed in mid-Cenozoic time and started to unevenly uplift and slightly tilt the Colorado Plateau region and the region to the west some 20 million years ago (as much as 3 kilometers of uplift occurred). Stream gradients increased, and the streams responded by downcutting faster. Headward erosion and mass wasting helped to erode cliffs back into their fault-bounded plateaus, widening the basins in between. Some plateaus have been so severely reduced in size this way that they have become mesas or even buttes. Monoclines form as a result of uplift bending the rock units. Eroded monoclines leave steeply tilted resistant rock called a hogback, while the less steeply tilted version is a cuesta.
Great tension developed in the crust, probably related to changing plate motions far to the west. As the crust stretched, the Basin and Range province broke up into a multitude of down-dropped valleys and elongate mountains. Major faults that separate the two regions, such as the Hurricane Fault, developed. The dry climate was in large part a rainshadow effect resulting from the rise of the Sierra Nevada further west. Yet for some reason not fully understood, the neighboring Colorado Plateau was able to preserve its structural integrity and remained a single tectonic block.
A second mystery was that while the lower layers of the Plateau appeared to be sinking, the Plateau as a whole was rising. The reason for this was discovered upon analyzing data from the USArray project: the asthenosphere had invaded the overlying lithosphere. The asthenosphere erodes the lower levels of the Plateau and, at the same time, as it cools, it expands and lifts the upper layers of the Plateau. Eventually, the great block of Colorado Plateau crust rose a kilometer higher than the Basin and Range. As the land rose, the streams responded by cutting ever deeper stream channels. The most well-known of these streams, the Colorado River, began to carve the Grand Canyon less than 6 million years ago in response to sagging caused by the opening of the Gulf of California to the southwest.
The Pleistocene epoch brought periodic ice ages and a cooler, wetter climate. This increased erosion: higher elevations saw the introduction of alpine glaciers, while mid-elevations were attacked by frost wedging and lower areas by more vigorous stream scouring. Pluvial lakes also formed during this time. With the start of the Holocene epoch, the glaciers and pluvial lakes disappeared and the climate warmed and became drier.
Electrical power generation is one of the major industries in the Colorado Plateau region. Most electrical generation comes from coal-fired power plants.
The rocks of the Colorado Plateau are a source of oil and a major source of natural gas. Major petroleum deposits are present in the San Juan Basin of New Mexico and Colorado, the Uinta Basin of Utah, the Piceance Basin of Colorado, and the Paradox Basin of Utah, Colorado, and Arizona.
The Colorado Plateau holds major uranium deposits, and there was a uranium boom in the 1950s (see Uranium mining in Utah and Uranium mining in the United States). The Atlas Uranium Mill near Moab has left a problematic tailings pile for cleanup.
Major coal deposits are being mined in the Colorado Plateau in Utah, Arizona, Colorado, and New Mexico, though large coal mining projects, such as on the Kaiparowits Plateau, have been proposed and defeated politically. The Intermountain Power Project, eventually located in Lynndyl, Utah, near Delta, was originally suggested for Salt Wash near Capitol Reef National Park. After a firestorm of opposition, it was moved to a less beloved site. In Utah the largest deposits are in aptly named Carbon County. In Arizona the biggest operation is on Black Mesa, supplying coal to the Navajo Generating Station.
Gilsonite and uintaite
Perhaps the only one of its kind, a gilsonite plant near Bonanza, southeast of Vernal, Utah, mines this unique, lustrous, brittle form of asphalt for use in "varnishes, paints,...ink, waterproofing compounds, electrical insulation,...roofing materials."
Huge deposits of oil shale, primarily in the northeastern Colorado Plateau, lie waiting for improved technology to tap their riches.
Well before the end of the twentieth century, the scenic appeal of this unique landscape had become its greatest natural resource financially: the commercial benefit that tourism brought to the four states of the Colorado Plateau exceeded that of any other natural resource.
This relatively high semi-arid province produces many distinctive erosional features such as arches, arroyos, canyons, cliffs, fins, natural bridges, pinnacles, hoodoos, and monoliths that, in various places and extents, have been protected. Also protected are areas of historic or cultural significance, such as the pueblos of the Anasazi culture. There are nine U.S. National Parks, a National Historical Park, sixteen U.S. National Monuments and dozens of wilderness areas in the province, along with millions of acres in U.S. National Forests, many state parks, and other protected lands. In fact, this region has the highest concentration of parklands in North America. Lake Powell is not a natural lake but a reservoir impounded by Glen Canyon Dam.
National parks (listed clockwise, from south to north and back to the south):
- Petrified Forest National Park
- Grand Canyon National Park
- Zion National Park
- Bryce Canyon National Park
- Capitol Reef National Park
- Canyonlands National Park
- Arches National Park
- Black Canyon of the Gunnison National Park
- Mesa Verde National Park
- Chaco Culture National Historical Park
National Monuments (alphabetical):
- Aztec Ruins National Monument
- Canyon De Chelly National Monument
- Canyons of the Ancients National Monument
- Cedar Breaks National Monument
- Colorado National Monument
- Grand Canyon-Parashant National Monument
- Grand Staircase-Escalante National Monument
- El Malpais National Monument
- El Morro National Monument
- Hovenweep National Monument
- Navajo National Monument
- Natural Bridges National Monument
- Rainbow Bridge National Monument
- Sunset Crater National Monument
- Vermilion Cliffs National Monument
- Walnut Canyon National Monument
- Wupatki National Monument
Wilderness areas:
- Kachina Peaks Wilderness
- Strawberry Crater Wilderness
- Kendrick Mountain Wilderness
- Beaver Dam Mountains Wilderness
- Paiute Wilderness
- Grand Wash Cliffs Wilderness
- Mount Logan Wilderness
- Mount Trumbull Wilderness
- Kanab Creek Wilderness
- Cottonwood Point Wilderness
- Paria Canyon-Vermilion Cliffs Wilderness
- Saddle Mountain Wilderness
- Mount Baldy Wilderness
- Escudilla Wilderness
- Black Ridge Canyons Wilderness
- Flat Tops Wilderness
- Uncompahgre Wilderness
- Mount Sneffels Wilderness
- Lizard Head Wilderness
- Weminuche Wilderness
- South San Juan Wilderness
- Cebolla Wilderness
- Ojito Wilderness
- West Malpais Wilderness
- Bisti/De-Na-Zin Wilderness
- Pine Valley Mountain Wilderness
- Ashdown Gorge Wilderness
- Box-Death Hollow Wilderness
- Dark Canyon Wilderness
- High Uintas Wilderness
Other notable protected areas include: Glen Canyon National Recreation Area, Dead Horse Point State Park, Goosenecks State Park, the San Rafael Swell, the Grand Gulch Primitive Area, Kodachrome Basin State Park, Goblin Valley State Park and Barringer Crater.
Sedona, Arizona and Oak Creek Canyon lie on the south-central border of the Plateau. Many but not all of the Sedona area's cliff formations are protected as wilderness. The area has the visual appeal of a national park, but with a small, rapidly growing town in the center.
- Kiver, Eugene P. and David V. Harris, 1999, Geology of U.S. Parklands, Wiley, 5th ed., ISBN 0-471-33218-6
- Donald L. Baars, Red Rock Country: The Geologic History of the Colorado Plateau, Doubleday (1972), hardcover, ISBN 0-385-01341-8
- Donald L. Baars, Traveler's Guide to the Geology of the Colorado Plateau, University of Utah Press (2002), trade paperback, 250 pages, ISBN 0-87480-715-8
- W. Scott Baldridge, Geology of the American Southwest: A Journey Through Two Billion Years of Plate-Tectonic History, Cambridge University Press (2004), 280 pages, ISBN 0-521-01666-5
- Crampton, C. Gregory, Standing Up Country: The Canyon Lands of Utah and Arizona, Rio Nuevo Publishers (September 2000), ISBN 1-887896-15-5
- Fillmore, Robert. Geological Evolution of the Colorado Plateau of Eastern Utah and Western Colorado. University of Utah Press (2011). ISBN 978-1-60781-004-9
- Geology of National Parks: Fifth Edition, Ann G. Harris, Esther Tuttle, Sherwood D. Tuttle (Iowa, Kendall/Hunt Publishing; 1997), pages 2–3, 19–20, 25, ISBN 0-7872-5353-7
- Physical Geology: Eighth Edition, Plummer, McGeary, Carlson (McGraw-Hill: Boston; 1999), page 320, ISBN 0-697-37404-1
- Earth System History, Steven M. Stanley, (W.H. Freeman and Company; 1999), pages 511-513, 537 ISBN 0-7167-2882-6
- USGS - Geologic Provinces of the United States: Colorado Plateau Province (some adapted public domain text)
- Annabelle Foos, Geology of the Colorado Plateau, National Park Service PDF Accessed 12/21/2005.
- Ward Roylance, Utah: A Guide to the State, Utah: A Guide to the State Foundation; Salt Lake City; 1982; 779 pp
- Look, Al, 1947, A Thousand Million Years on the Colorado Plateau, Golden Bell Publications, Fifth printing 1971, 300 pages.
- The Bright Edge: Guide to the National Parks of the Colorado Plateau.
Birds spend roughly 10-25 percent of their time each day preening, but why? Why is this behavior so significant, and how can a bird photographer properly capture its importance?
What Is Preening?
Preening is also called “maintenance behavior” and it is how birds keep their feathers in top condition. Different postures and actions may all be part of preening, from fluffing, stretching, and shaking so feathers are splayed out to nibbling or stroking individual feathers. Scratching can be part of preening feathers a bird can’t reach with its bill, and the different crazy “yoga” poses a bird may contort itself into are all part of preening as the bird rearranges and repositions feathers into their best alignment. Birds may also preen one another, a behavior called “mutual preening” or “allopreening” that could happen between two mated partners or simply among birds in a close-knit, social flock.
Why Preening Matters
A bird’s feathers are critically important to its well-being. While we may see feathers as just a pretty feature or part of a bird’s color or markings, feathers actually insulate and protect the bird, helping it regulate its body temperature. Feathers also showcase a bird’s health and strength, which can help it find a mate or defend its territory. Particularly for waterfowl, feathers also provide waterproofing. For all birds, feathers help create a more aerodynamic form for more efficient flight, so the bird does not expend as much energy when flying.
Preening takes care of a bird’s feathers so they can perform all these functions. When a bird preens, it aligns its feathers in the right orientation, interlocking the tiny barbules along the edges of feathers to keep each feather tight together. Preening also removes any parasites, such as mites or lice, from the bird’s feathers, and distributes oil from a gland at the base of the bird’s tail to improve waterproofing. When birds preen each other, the close contact also improves their social or mating bond.
Because birds preen so frequently – as often as every hour when they’re resting – it’s easy to capture the behavior with photos that tell a story of a bird’s activities. Birds often preen while perched, but waterfowl may preen while floating on the water or birds may preen while on the ground as well. When the bird is preening it isn’t likely to take flight immediately, giving a photographer a chance to set up a photo well and capture the intriguing poses and postures this behavior presents.
To capture preening in great detail, it is essential to adjust camera settings for the light level. This will ensure you capture the subtle color differences as feathers shift position and preserve the best level of detail, especially in ruffled feathers. Clear focus is crucial, and using rapid shutter speeds or shooting in bursts will help freeze the fast bill movement as a bird nibbles or strokes its feathers.
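As a rough illustration of the trade-off between light level and shutter speed, the sketch below uses the standard exposure-value relation to find the lowest ISO that still reaches a fast shutter. The scene brightness, aperture, and 1/1000 s target are hypothetical values chosen only for the example, not settings recommended by this article.

```python
# Rough sketch (not from the article): picking an ISO so the shutter stays fast
# enough to freeze preening movement, using the standard exposure-value relation
#   EV100 = log2(N^2 / t)   for aperture N and shutter time t at ISO 100.

def shutter_time(ev100: float, aperture: float, iso: float) -> float:
    """Shutter time (seconds) for a scene of brightness ev100 at the given
    aperture and ISO, assuming the standard EV relation."""
    return aperture ** 2 / (2 ** ev100 * (iso / 100.0))

def iso_for_target_shutter(ev100: float, aperture: float, target: float,
                           iso_steps=(100, 200, 400, 800, 1600, 3200, 6400)) -> float:
    """Lowest common full-stop ISO that reaches the target shutter time."""
    for iso in iso_steps:
        if shutter_time(ev100, aperture, iso) <= target:
            return iso
    return iso_steps[-1]  # even the highest ISO is too slow; open the aperture instead

if __name__ == "__main__":
    ev100 = 12          # hypothetical overcast-daylight brightness
    aperture = 5.6      # hypothetical telephoto lens wide open
    target = 1 / 1000   # fast enough for quick bill movement (assumption)
    iso = iso_for_target_shutter(ev100, aperture, target)
    t = shutter_time(ev100, aperture, iso)
    print(f"ISO {iso}: shutter is about 1/{round(1 / t)} s")
```

With these example numbers the sketch lands on ISO 800 and roughly 1/1000 s, which matches the general advice above: raise ISO only as far as needed to keep the shutter fast in the available light.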
For the most charisma in each shot, focus on the bird’s face, particularly its eye. Even if the bird is preening a different part of its body, having that eye in focus will capture the expression of concentration on the bird’s face, adding character and depth to the photo. At the same time, don’t neglect the whole posture of the bird, framing the shot to capture the surroundings as well. With ducks and wading birds, this may also include a reflection on the water as the bird preens, which will add even more dimension to each shot.
Birds take great care to keep their feathers in peak condition, for many good reasons. We should also take great care in capturing preening behavior in amazing photos to showcase this essential behavior. The more we learn about the lives and activities of birds, the more we can appreciate their beauty and enjoy the individuality of each bird we see.
Height datum relations
The vertical datum of a height reference system is usually determined by the mean sea level, which is estimated by one or more tide gauges of an adjacent sea. The tide gauge stations of the national height systems in Europe are located at various oceans and inland seas: the Baltic Sea, North Sea, Mediterranean Sea, Black Sea, and Atlantic Ocean. The differences between these sea levels can amount to several decimeters. They are caused by the varying separation between the sea surface and the geoid.
Landlocked countries in particular determined the relation of their height systems to a sea level by leveling observations to neighbouring countries. These measurements were in some cases carried out as early as the 19th century and are accordingly less accurate.
The picture below shows the reference tide gauges for European national height reference systems and their offsets to EVRF2019 in cm.
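As a minimal, purely illustrative sketch of how such an offset is applied, the snippet below shifts a national height into EVRF2019 by adding the datum offset for its reference tide gauge. The offset values shown are made-up placeholders, not the published EVRF2019 numbers.

```python
# Minimal sketch (illustrative only): applying a national-datum offset to express
# a leveled height in EVRF2019. The offsets below are placeholders, not the
# published values; real offsets are given in cm relative to EVRF2019.

OFFSETS_CM = {
    "NAP (Amsterdam, North Sea)": 0.5,          # placeholder value
    "Kronstadt (Baltic Sea)": 15.0,             # placeholder value
    "Trieste (Mediterranean/Adriatic)": -34.0,  # placeholder value
}

def to_evrf2019(height_m: float, national_datum: str) -> float:
    """Convert a height in a national system to EVRF2019 by adding its datum offset."""
    return height_m + OFFSETS_CM[national_datum] / 100.0

print(to_evrf2019(412.30, "Kronstadt (Baltic Sea)"))  # 412.45 m in EVRF2019 (with the placeholder offset)
```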
In Europe different kinds of heights are used (normal heights, orthometric heights, normal-orthometric heights). They differ in how the leveled height differences are corrected for the effect of the Earth's gravity. In some European height systems, heights without any gravity correction are used.
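The standard geodetic definitions behind these height types can be sketched with a small worked example: both orthometric and normal heights divide the same geopotential number C by a mean gravity value, but one uses the mean actual gravity along the plumb line and the other the mean normal gravity. The numbers in the snippet are illustrative assumptions, not measured values.

```python
# Sketch of the standard geodetic definitions that distinguish the height types:
#   orthometric height  H  = C / g_bar      (mean actual gravity along the plumb line)
#   normal height       H* = C / gamma_bar  (mean normal gravity)
# where C is the geopotential number (potential difference to the geoid) in m^2/s^2.
# The values below are illustrative, not measured.

def orthometric_height(C: float, g_bar: float) -> float:
    return C / g_bar

def normal_height(C: float, gamma_bar: float) -> float:
    return C / gamma_bar

C = 9810.0           # geopotential number for a point roughly 1000 m above the geoid
g_bar = 9.8050       # hypothetical mean actual gravity (m/s^2)
gamma_bar = 9.8030   # hypothetical mean normal gravity (m/s^2)

H = orthometric_height(C, g_bar)        # about 1000.51 m
H_star = normal_height(C, gamma_bar)    # about 1000.71 m
print(f"orthometric {H:.2f} m, normal {H_star:.2f} m, difference {100 * (H_star - H):.1f} cm")
```

Even with these small differences in mean gravity, the two height kinds differ by a couple of decimeters at 1000 m elevation, which is why the choice of height type matters when comparing national systems.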
The following picture shows the kinds of heights in use in European countries.
More detailed information about European coordinate reference systems and their transformation parameters to the pan-European Vertical Reference System EVRS is available at the Information System for European Coordinate Reference Systems CRS-EU.
To convert length x width dimensions from meters to centimeters we should multiply each amount by the conversion factor. One meter equals 100 centimeters, so to convert 310 x 248 meters to cm we have to multiply each amount of meters by 100 to obtain the length and width in centimeters. In this case, to convert 310 x 248 meters into cm, we multiply the length of 310 meters by 100 and the width of 248 meters by 100. The result is the following:
310 x 248 meters = 31000 x 24800 centimeters
The meter (symbol: m) is the fundamental unit of length in the International System of Units (SI). It is defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second." In 1799, France started using the metric system; it was the first country to adopt it.
The centimeter (symbol: cm) is a unit of length in the metric system. It is also the base unit in the centimeter-gram-second system of units. The centimeter is a practical unit of length for many everyday measurements. A centimeter is equal to 0.01 (or 1E-2) meter.
To convert a value in meters to the corresponding value in centimeters, just multiply the quantity in meters by 100 (the conversion factor).
centimeters = meters * 100
The factor 100 results from the division 1 / 0.01 (the centimeter definition). Therefore, another way would be:
centimeters = meters / 0.01
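A small sketch of the same conversion, written as a reusable function; the 310 x 248 figures are just the example dimensions used above:

```python
# Sketch of the conversion described above: multiply each dimension by 100
# (since 1 m = 100 cm), which is equivalent to dividing by 0.01.

def meters_to_cm(meters: float) -> float:
    return meters * 100  # same result as meters / 0.01

def dimensions_to_cm(length_m: float, width_m: float) -> tuple[float, float]:
    """Convert a length x width given in meters to centimeters."""
    return meters_to_cm(length_m), meters_to_cm(width_m)

print(dimensions_to_cm(310, 248))  # (31000, 24800): 310 x 248 m = 31000 x 24800 cm
```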
- How many cm are in 310 by 248 meters?
- 310 x 248 meters is equal to how many cm?
- How to convert 310 meters x 248 meters to cm?
- What is 310 meters by 248 meters in cm?
- How many is 310m x 248m in cm?
Prehistory refers to the time between when man first emerged and the appearance of written records.
The oldest written texts date to sometime between the 26th and 24th centuries B.C.
There are two main theses regarding where modern man (Homo sapiens) originated. What are these differing theories?
- The Out of Africa Thesis posits that Homo sapiens first arose in Africa and began migrating to other parts of the Earth approximately 125,000 years ago.
- The Multiregional Thesis contends that Homo sapiens arose more or less simultaneously in different parts of the globe, and are descended from earlier pre-human groups that left Africa.
According to most anthropologists, how did the first humans arrive in North and South America?
Most anthropologists contend that early man arrived in the Americas via the Bering Land Bridge, which stretches between modern-day Siberia and Alaska.
Lower sea levels brought about by an Ice Age made the bridge a viable pathway to the Americas until 10,000 years ago.
The Stone Age refers to the period between roughly 2.6 million years ago and 2000 B.C. It was a time period when stone was widely used to make tools.
Anthropologists typically divide the Stone Age into three periods: Paleolithic, Mesolithic, and Neolithic.
The ___ ___ is the period ranging from roughly 2.6 million years ago to 12,000 years ago.
During the Paleolithic Era, man and man's predecessors began using stone tools and mainly lived in small roaming groups of hunter-gatherers. There is evidence that Paleolithic man believed in an afterlife, as numerous burials with some household goods have been found.
What era saw the beginnings of agriculture?
During the Mesolithic Era (10,000 B.C. to 4000 B.C.), agriculture became prevalent. Semi-permanent small villages were established.
There is also evidence of extensive animal domestication during this period.
What developments characterized the Neolithic Era?
The Neolithic Era, which stretched from 4000 B.C. to 2000 B.C., saw the development of more permanent villages and early cities, many of which contained walls and defensive fortifications.
Plants were further domesticated, public works such as canals were established to assist in agriculture, and animal herding became prevalent.
The mastery of ___ allowed early man to move into colder regions of the planet, such as Northern Europe.
Fire played a crucial role in mankind's settling of the colder regions of the planet. The controlled use of fire dates back to the Middle Paleolithic Era, some 100,000 to 400,000 years ago.
So crucial was the discovery of fire that the Greeks claimed it descended from the gods and was revealed to man by the Titan Prometheus. For giving the gods' secret away, Prometheus was chained to a rock where an eagle would eat his liver for all eternity.
The development of the ___ made possible more rapid transportation as well as inventions such as the chariot and carts.
The development of the wheel (circa 3500 B.C.) was one of the earliest technological improvements made by men. Interestingly, the wheel was not used by the ancient civilizations of North and South America and did not arrive in the Americas until after Columbus.
Nomads are wandering bands of people who move from place to place to support their livelihood. During the Stone Age, hunter-gatherer nomads continually migrated to seek out new hunting grounds.
Domestication refers to the intentional manipulation of plants and animals to make them more useful to humans. For instance, the domestication of sheep and cows provided an early resource for pastoral societies.
The Neolithic Revolution occurred in the late Stone Age, when early man began food production instead of merely gathering food.
Also known as the Agricultural Revolution, the Neolithic Revolution marked the rise of farming, public works projects such as agricultural irrigation, and the beginnings of the first villages.
The Bronze Age began around 3000-2000 B.C. in Mesopotamia, Egypt, and the Indus River Valley, and slightly later in other areas. The Bronze Age marks man's first significant use of writing and metals like bronze and copper, as well as the development of city-states.
Cultural diffusion refers to the sharing of cultures between societies. As an example, agriculture is believed to have begun in the Middle East before being diffused throughout much of Eurasia.
What are the five hallmarks of a civilization?
- Advanced cities dependent in part on trade
- Specialized workers
- Recordkeeping, usually in the form of writing
- Complex institutions, like religion and government
- Advanced technologies, such as metalworking
Where did the first civilization arise?
The first civilization was Sumeria, which arose in Mesopotamia, the region between the Tigris and the Euphrates Rivers in the south of modern-day Iraq. Sumeria was a collection of city-states and dates from around 4000 B.C.
Where was the Fertile Crescent?
The Fertile Crescent refers to the lands between the Tigris and Euphrates Rivers, stretching down into Palestine. The land between the rivers was exceptionally good for farming, and the region provided the home for many powerful ancient civilizations, including Sumeria, Assyria, and Babylon.
___ created a law code containing various punishments, including the famous "eye for an eye."
Hammurabi was a powerful Babylonian emperor, and his Code is among the earliest recorded legal systems; it could be quite harsh. The famous "eye for an eye" required the loss of an eye if one caused someone else to lose an eye, even accidentally.
During what period of history did the Ancient Egyptian civilization prosper along the Nile River?
Ancient Egypt can roughly be dated from 3200 B.C. to 330 B.C.
Which prominent Ancient Egyptian leader is believed to have been the son of the pharaoh Akhenaten (who ruled in the 14th century B.C.)?
Tutankhamun (King Tut). Although his reign was brief and unimportant, the 1922 discovery of his intact tomb is one of the greatest archaeological finds in history.
Which Ancient Egyptian pharaoh of the 13th century B.C. is often considered the most powerful pharaoh of the Egyptian Empire?
Ramesses the Great
How did Ancient Egyptian writing differ from that of Mesopotamia?
Mesopotamian writing used wedge-shaped symbols to represent sounds -- a system known as cuneiform. Egyptian writing used pictures to represent sounds -- a system known as hieroglyphics.
Egyptians used ___ to make paper, upon which elaborate texts were composed.
Papyrus is made from reeds. Until the Middle Ages, it remained the dominant means of making paper.
What were the first Chinese civilizations?
The first Chinese civilizations were the Xia and Shang Dynasties. The Xia rose to power around 2000 B.C., with the Shang succeeding about 400 years later.
What were the primary trade goods of the early Chinese?
The two primary trade goods of the early Chinese were silk, which may have made its way across India to the Middle East by 1000 B.C., and jade, a type of precious stone that was often elaborately decorated.
The Shang Dynasty also excelled in producing magnificent bronze works, which adorned Chinese temples.
What civilization is considered to be the Mesoamerican mother civilization?
The Olmecs (1200 B.C. to 400 B.C.) were the first civilization to arise in Mesoamerica.
Much like the Sumerians influenced the Babylonians and the Assyrians, the Olmecs are believed to have influenced later cultures such as the Maya and Aztecs.
The Olmecs are remembered today for their carvings of monumental stone heads, some weighing up to 40 tons. The Olmecs also had large mounds and platforms, which historians and archeologists believe had a religious function and may have involved human sacrifice.
What city-state emerged near modern-day Mexico City around 100 B.C.?
Teotihuacan was the largest city in the pre-Columbian Americas. It was ruled by an oligarchy dedicated to continuing the city-state's polytheistic religion, which included human sacrifice. Historians believe that Teotihuacan's influence waned in the 650s in the wake of internal revolts.
Which culture invented the alphabet?
The Phoenicians, a small empire on the coast of the Mediterranean in modern-day Lebanon and Israel, invented the alphabet around 1050 B.C.
The Phoenicians were a prominent maritime empire, establishing a far-flung trading network that ranged as far as Spain and North Africa.
Which empire was the first to introduce coined money?
The Lydian Empire, located in western Anatolia from the 700s B.C. to the 500s B.C., was the first to introduce coin money, sometime around 610 B.C.
Coins would prove a handy medium of exchange, both because they replaced barter and were easier to transfer from place to place. The Lydians fell to the Persians in 546 B.C.
The Central American ___ civilization developed advanced written language as well as a startlingly accurate calendar.
Maya cities emerged in the 700s B.C. and by 250 A.D., a series of rival city-states and small kingdoms had developed. The Maya kings were also priests, dedicated to appeasing gods by means of human sacrifice. The Maya began to decline around 900, and the last Maya cities fell in the 1600s as the Spanish colonized the Mesoamerican region.