What Is It?
Encephalitis is an inflammation of the brain usually caused by infection by a virus. The virus directly infects the brain tissue, causing inflammation and potentially injury to the nervous system.
Viral encephalitis can be primary, meaning it directly involves the brain from the start, or secondary, meaning that it first affects other parts of the body before traveling to the brain. Many forms of the disease are mild in nature and do not cause significant morbidity. However, some forms of encephalitis can be life-threatening and cause significant injury to the nervous system.
There are many viruses that can infect the brain. Some common types include herpes viruses, arboviruses (transmitted by mosquitos, ticks and other insects; examples include Eastern and Western equine virus, St. Louis virus and West Nile virus), rabies, varicella-zoster (a herpes virus that causes chicken pox and shingles), Epstein-Barr virus and measles virus. While some of these viruses are common and widespread (herpes, Epstein-Barr, varicella-zoster, for example), they do not always cause encephalitis. Many people carry herpes, VZ and EB viruses their entire lives and never have a significant infection of the brain. For some types, immune suppression (such as in AIDS or other diseases that cause a weakened immune system) can increase the risk of developing brain involvement by these viruses.
Transmission of these diseases varies considerably depending on the type of virus. While some, such as herpes viruses, are spread by direct human contact, others are spread only by animal vectors, meaning they are transmitted through insect bites. Risk factors for transmission also vary depending on the type. For example, for the insect-borne viruses, travel or residence in endemic areas is a risk. Warmer months of the year tend to increase the risk, as mosquitos and other insects are in abundance.
While bacteria can also infect the brain, generally a different term is used to describe these infections. For example, bacterial infection of the brain tissue is sometimes referred to as cerebritis. If the bacterial infection causes a pus-filled cavity in the brain tissue, it is referred to as a brain abscess.
What Types of Symptoms Are Typical?
Many types of viral brain infection have very mild symptoms and patients will only experience a mild, flu-like illness that resolves spontaneously. Infection can lead to symptoms such as headache, irritability, lethargy, and fever along with other general flu-like symptoms such as general body aches and pains.
More serious infections can cause more debilitating symptoms, particularly in patients with a weakened immune system. These can include neurological symptoms such as seizures, personality changes, confusion, hallucinations, stupor or coma, paralysis, and tremors. The distribution of these symptoms depends on the areas of the brain involved and the type of virus implicated.
How Is The Diagnosis Typically Made?
In a patient who presents with signs and symptoms of encephalitis, a thorough neurological examination is typically performed. Several types of tests may aid in the diagnosis. For example, a lumbar puncture (spinal tap) may be performed to assess for evidence of infection in the nervous system. This may also help to rule out other potential causes of symptoms such as meningitis. An electroencephalogram can be useful in demonstrating changes that are consistent with encephalitis. Blood testing and imaging, such as brain MRI, may also be used to help rule out other causes and narrow down the diagnosis. Rarely, brain biopsy may be performed to sample the infected brain tissue if a diagnosis cannot be made otherwise. Because it can be hard to find definitive evidence of a virus, in most mild cases the causative virus is never definitively identified. However, in severe cases an aggressive attempt to identify the virus from the cerebrospinal fluid or brain tissue may be made. Various tests are available to analyze these tissues for the presence and type of virus.
What Are Some Common Treatments?
For most viral infections, there are no specific medications that treat the disease. In mild cases patients are generally instructed to rest, eat well and drink plenty of fluids and to treat symptoms (such as using Tylenol and other medications for headache and fever). These cases are much like recovering from a flu.
In more serious cases patients may be hospitalized. If the patient experiences seizures, anti-seizure medication may be prescribed. Anti-inflammatory drugs may be used to help reduce brain swelling. In the case of herpes viruses (including herpes and varicella-zoster) some antiviral medications may be useful in limiting the severity of disease.
Each patient and each type of encephalitis is different so each patient should consult their own treating physician about the most appropriate management options for their disease.
This site is not intended to offer medical advice. Every patient is different, and only your personal physician can help to counsel you about what is best for your situation. What we offer is general reference information about various disorders and treatments for your education.
|
ATOMIC clocks, currently the size of fridges, could shrink to the microscale thanks to a new way of measuring the second. The technique could also see aluminium displace caesium as the standard of time.
The world's most accurate atomic clocks are at the National Institute of Standards and Technology (NIST) at Boulder, Colorado. Known as fountain clocks, they send clouds of caesium atoms through a vacuum chamber in a magnetic field. Large atoms like caesium and aluminium have multiple energy levels that are so close together they appear indistinguishable. The magnetic field separates these levels into two "hyperfine" states.
The chamber is also filled with microwaves, which excite the atoms. They then emit light as they drop to the lower hyperfine state. The microwave frequency that maximises this fluorescence is used to define the length of a second, currently the time it takes for 9,192,631,770 cycles of microwave radiation.
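As a back-of-the-envelope illustration (a minimal Python sketch added here, not from the article), the definition can be turned around to give the duration of a single cycle of that caesium radiation:

# Minimal sketch: the SI second as a count of caesium hyperfine cycles.
CAESIUM_HYPERFINE_HZ = 9_192_631_770   # defined number of cycles per second

cycle_period = 1.0 / CAESIUM_HYPERFINE_HZ   # duration of one microwave cycle, in seconds

print(f"One cycle lasts about {cycle_period:.4e} s")        # ~1.0878e-10 s
print(f"{CAESIUM_HYPERFINE_HZ} cycles add up to 1 second")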
|
Good Night, Gorilla is almost a "wordless" book that will charm everyone who is young at heart. It is the type of book that children go back to again and again as they grow, and they discover something new in it each time.
The zoo-themed book introduces children to seven different animals: the gorilla, a mouse, elephant, lion, giraffe, hyena and armadillo, in that order. They also get to meet the kind zookeeper and the zookeeper's wife, who live just outside the zoo grounds.
Activity 1: "Read" and discuss the book
Good Night, Gorilla by Peggy Rathmann
Depending on the ages of the children you can adjust how to
"read" this book.
- Ask children to describe what they see in the illustrations
and what is happening.
- What are the names of the animals?
- Who is the man with the flashlight?
- What is his career? Zookeeper.
- What time of day is it, daytime or nighttime?
- Where are the animals going as they get out of their cages?
- Why are two pages all black with surprised eyes?
- Who takes the animals back to the zoo?
- What happens next?
- Who do you think ate the banana, the gorilla or the mouse?
- What items are inside the animals' cages?
- Take a look at the houses in the neighborhood as the animals
approach, what do you see? Do the houses seem surprised?
Alternative activities: This book has endless possibilities for discussion and learning. Time can be devoted to learning a little about each animal over a period of days.
Activity 2: Sequence and Vocabulary with Good Night, Gorilla
Materials needed: Finger Puppets, Small Stand-up Figures or Stickers
LARGE GROUP ACTIVITY: seven or more children
1. Print copies in color or black and white for as many children
as you need (one finger puppet per child) and cut images and
finger/stand holder template. Ideally print on construction paper or
card stock. Only one template has been provided for the finger/stand
holder - this one needs to be traced to make as many as you need from
discarded paper or scraps of construction paper. Print the name
of the animal in the image in your language of choice.
2. Place the puppets in a basket or bag and have the children pick one out; this way it is a surprise who they get to be in the story. Now they can color the puppet if applicable. They may need help having the puppet placed on a finger and taped, or the images can be used as small stand-up figures. Alternative: Use the images as stickers and make a small mobile or a small banner.
3. Read the story a second time. Make an extra copy of the images to place the pictures on a board or felt board in no particular order.
4. Game Sequence and Recognizing the Name of the Animal:
After reading the book and discussing ask the children if they
remember the sequence in which the animals got out of the cages.
Tell them you are going to play a game. *Educator will ask: Who got out of the cage first? The children holding that animal stand up and the rest of the children say, "Good Night, (name of the animal)." Obviously the gorilla was first, so all the
children with the gorilla finger puppet need to stand up at that point
and they can gather together in groups, until all
children stand up in seven groups (one for each animal - include the
mouse) and they all say Good Night. As each animal is called the educator
can place the pictures in order in the board or felt board.
5. Dramatic Play: Now that the children are in groups
for each animal, encourage each group to try and make sounds and
gestures the animal might make. You can
discuss to make sure that all children have an idea of what
sounds the animals would make: lion (roar), hyena (laugh), etc. Tell them to act out how the animals would go to sleep.
SMALL GROUP (under seven children):
1. Each child will get a complete set of animals. Print and cut
as many copies as you need.
2. The children can color and assemble as stand up
figures. They will need help placing the tape or glue in the last step
of assembly. Idea: Put a little piece of tape on
the back of the images and tape against small Lego blocks (the 1"
size blocks are perfect).
3. Once all the children have their seven figures ready have them
choose the first animal that got out of the cage and so forth and to
place them in front of them in that order. Another method is to color the images only and use them as stickers on a piece of paper, encouraging the children to place them in sequence.
Alternative: Make a small vertical banner or a small mobile.
4. Do step 5 above - Dramatic Play.
SNACK TIME FUN: Recipe: Ants
on a Banana Bus
Banana Snack: One of the funniest things about the book is how the little mouse drags a banana tied to a string throughout the story. It is a great excuse to make a fun snack, and the author, Peggy Rathmann, has a delightful and easy recipe that children can help prepare.
Additional Materials: Alternative Activity or Take-Home Good Night, Gorilla Activity Card, PDF format only (Webbing into Literacy).
This activity card reinforces sequence and also addresses how the
printed word is read from left to right. The instructions suggested here are a bit different from the ones printed on the card.
1. Cut the images and review the story by reading and running
your finger under the text. When you reach the highlighted word,
have the child search for the right image and they can glue it over the word. After all images have been glued
read the text again and have the child "read" the
images. This is an activity that educators and child care
providers can send home for parents to do at home.
Activity 3: Alphabet Lesson Plan Printable Activities > Alphabet letter Z is for Zoo
The book is a zoo-themed book. This is also a good opportunity to address letter Z, what a zoo is, and the community helper, the zookeeper.
Alphabet letter G is for Gorilla
and learn about gorillas
and more activities and crafts
Activity 4: Animals of Goodnight Gorilla Coloring Book:
Put together a coloring book of the animals in Goodnight, Gorilla > review links in the materials column.
Good Night, Gorilla
Finger puppets |
stand-up figures or stickers
black and white
Make a coloring book with animals and zookeeper in the story:
gorilla 1 or 2
giraffe 1 or 2
zookeeper coloring page
Alphabet Printable Activities:
Letter G Gorilla
Letter Z Zoo
Night, Gorilla Activity Card PDF format
*more gorilla crafts *paper, construction paper or card stock
*something to color with
|
There has been a good deal of attention in the media to the importance of physical activity in achieving good health. This is largely due to the recent rise in overweight and obesity in the population and the vast body of evidence on the benefits of physical activity in weight management and other health effects. Physical activity confers numerous benefits to health through a number of physiological mechanisms. Commissioned in response to the rising levels of obesity in the U.S., the 1996 U.S. Department of Health and Human Services Surgeon General’s report on physical activity and obesity was the first to bring the health consequences of physical activity to the forefront (U.S. Department of Health and Human Services, 1996). Based on this and a number of other comprehensive reviews of the literature, physical activity affects a variety of health outcomes:
- All cause mortality
- Cardiovascular disease
- Diabetes mellitus
- Cancer (colon and breast)
- Bone and joint diseases (osteoporosis and osteoarthritis)
- Mental health
Reviews of physical activity interventions suggest that people may be more willing and able to adopt moderate physical activities and, once such activities are set in motion, are more inclined to maintain them over time, as compared with more vigorous physical activity (Frank and Engelke, 2001). Physical activities that are incorporated into daily life or have an inherent meaning (lifestyle activities), rather than structured exercise regimens, are a good strategy for increasing physical activity (Frank et al., 2003). Even relatively small changes in physical activity can translate into potentially large changes in weight trends at the population level (Morabia & Costanza, 2004). It is estimated that 60 minutes of slow walking and 30 minutes of moderate or brisk walking each expend roughly 100 calories for average adults (Morabia & Costanza, 2004). The general consensus is that a total of 30 minutes of moderate to vigorous physical activity, which can be achieved via brisk walking or cycling, on most days of the week reduces the risk of cardiovascular disease, diabetes and hypertension, and helps to control blood lipids and body weight (Pate, Pratt, Blair, et al., 1995). These benefits are conferred even if the activities are done in short ten- to fifteen-minute episodes. Thus, physical activity recommendations for adults call for at least 30 minutes of moderate to vigorous activity per day for health benefits. While the benefits of physical activity increase with the intensity and frequency of activity, the greatest gains come when people who have been sedentary engage in some form of physical activity.
According to the Centers for Disease Control, levels of physical activity can be determined in a number of ways. These include:
- Talk Test- One who is engaged in light activity should be able to sing at this intensity; at moderate activity levels, one should be able to carry on a conversation comfortably; if the person becomes winded or no longer able to carry on the conversation comfortably, the activity can be considered vigorous.
- Target Heart Rate and Estimated Maximum Heart Rate- These two values provide a target zone for a person’s heart rate during physical activity and are based on age (220 - age). For example, a person’s target heart rate at moderate activity levels should be 50%-70% of his or her maximum heart rate. During vigorous activity levels a person’s target heart rate should be 70%-85% (see the sketch after this list).
- Perceived Exertion- The Borg Rating of Perceived Exertion is a subjective measure that helps to evaluate how hard an individual feels like his/her body is working. It can provide fairly good estimates of actual heart rates during physical activity.
- Metabolic Equivalent (MET) Level- A unit used to estimate the amount of oxygen used by the body during physical activity.
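As an illustration of the Target Heart Rate method above, here is a minimal Python sketch (added here, not part of the CDC material) using the simple 220-minus-age estimate:

# Minimal sketch: target heart rate zones from the "220 - age" estimate.
def target_heart_rate_zones(age):
    """Return the estimated maximum heart rate and moderate/vigorous zones (beats per minute)."""
    max_hr = 220 - age                          # estimated maximum heart rate
    moderate = (0.50 * max_hr, 0.70 * max_hr)   # moderate intensity: 50%-70% of maximum
    vigorous = (0.70 * max_hr, 0.85 * max_hr)   # vigorous intensity: 70%-85% of maximum
    return max_hr, moderate, vigorous

# Example: a 40-year-old has an estimated maximum of 180 bpm,
# a moderate zone of 90-126 bpm and a vigorous zone of 126-153 bpm.
print(target_heart_rate_zones(40))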
Downstream Health Effects
Inadequate levels of physical activity have been linked with a number of health outcomes that extend beyond obesity and overweight. These include: hypertension, cancer, diabetes, cardiovascular disease, impaired mental health, and bone and joint disease. As such, physical activity is a critical determinant of health that has the potential to positively or negatively influence a host of health outcomes.
Policies and Other Determinants
Policies can affect physical activity through changes in the built environment, including modifications in transportation, land use, and workplace environment, or through economic incentives for engaging in activity. Physical activity is associated with numerous physiologic and mental health benefits in both the short- and long-term.
- Walkable communities, not parks or fitness facilities, are where most people get most of their daily physical activity. Elements conducive to walkability include: wide, well-maintained sidewalks, signalized street crossings, slower traffic and narrower streets at crossings, street trees, destinations (shops, restaurants, etc.), and security. For more information see Active Living by Design and Project for Public Spaces
- Sprawl is associated with poorer mental health (Sturm and Cohen, 2004). Sprawl has a negative effect on physical activity as it increases travel time and forces people to travel out of their neighborhood for many everyday tasks, such as shopping, eating, working and going to school. To be effective, efforts to control sprawl need to be combined with efforts to promote mixed use development and mass transit
- School policies (and the adherence to such policies) have the ability to increase physical activity in general, and specifically moderate to vigorous activity levels among students. An increase in trained physical activity professionals, smaller class sizes and the elimination of physical education exemptions are just a few examples of how physical activity levels can be positively influenced
|
The discovery of a new bird-like dinosaur from the Jurassic period has challenged the widely accepted theories on the origin of flight.
Co-authored by Dr Gareth Dyke, Senior Lecturer in Vertebrate Palaeontology at the University of Southampton, the paper describes a new feathered dinosaur about 30 cm in length which pre-dates bird-like dinosaurs that birds were long thought to have evolved from.
Over many years, it has become accepted among palaeontologists that birds evolved from a group of dinosaurs called theropods from the Early Cretaceous period of Earth's history, around 120-130 million years ago.
Recent discoveries of feathered dinosaurs from the older Middle-Late Jurassic period have reinforced this theory.
"This discovery sheds further doubt on the theory that the famous fossil Archaeopteryx - or "first bird" as it is sometimes referred to - was pivotal in the evolution of modern birds," Dr Dyke said.
"Our findings suggest that the origin of flight was much more complex than previously thought," Dyke said.
The fossilised remains found in north-eastern China indicate that, while feathered, this was a flightless dinosaur, because of its small wingspan and a bone structure that would have restricted its ability to flap its wings.
The dinosaur also had toes suited to walking along the ground and fewer feathers on its tail and lower legs, which would have made it easier to run.
Dr Gareth Dyke is also Programme Leader for a new one-year MRes in Vertebrate Palaeontology, which offers potential students the chance to study the evolution and anatomy of vertebrates, in order to inform and increase our understanding of the workings of modern day creatures.
The study has been published in Nature Communications. (ANI)
|
Each day, I begin my ELA class with Reading Time. This is a time for students to access a range of texts. I use this time to conference with students, collect data on class patterns and trends with independent reading and to provide individualized support.
Students have previously been exposed to defining credible sources in this lesson. The next step is to give students time to determine credible sources on their own as they continue working towards writing a research project. Students need to work on these concepts on their own so they can begin to internalize them. Since one of the Common Core objectives is determining credible sources, I want students to master this skill on their own. It's not enough for me to tell them what a credible source is. They need time to practice the skill on their own. Individual practice is the first step towards mastery. Today's lesson allows students to see credible sources in the context of sources they will research.
To start the lesson I refer back to the handouts on determining credible sources from the previous lesson. The first handout, Credible v. Non-Credible Sources, lists six different questions. This resource can be used for lower-level students. The other handout is from the web-site Criteria To Evaluate The Credibility of WWW Resources. This handout can be used for higher-level students, as it breaks down the questions into further concepts and has the students think about internet sources in a deeper manner. I think it's important to try and differentiate instruction when possible. It can definitely be challenging, but finding little ways to do it can be extremely rewarding. Most of the differentiation I do is similar to this lesson. It may not be on a large scale, but subtle differentiation works well for me: passing out different articles, asking different questions, or looking at different assessments. For today's class I briefly review the questions so students can refresh their memory.
The rest of the lesson will allow students to answer these questions from the handouts based on four different web-sites they will find on their own. I model for students the Credibility Chart on the Smartboard. This chart breaks down each question from the handout that was previously discussed. For each question, students answer in the appropriate box for each web-site. The idea is that students will be able to collect data for four different web-sites based on credibility concepts. The chart is a nice visual and forces students to look for the information that will help them to define whether or not the web-sites can be defined as credible. Since students are very visual, students are a bit more inclined to work in this manner instead of writing out each answer separately. Without realizing it, they are on their way towards mastery of the skill of determining credible sources.
Students have class time to continue filling out this chart. This can be a lengthy process, so students will be able to finish it in the next lesson. As students are working on filling out the chart based on the questions from the handout, I circulate to make sure students are on task and to answer any questions that may arise. Many times students ask specific questions about their web-sites, so I help them see how they can locate the necessary information.
|
Summarization and Close Reading of Text
Here is a Summarization Strategy for Reading and Writing called “The Incredible Shrinking Notes”
Students read a section of text, either assigned by the teacher or self-selected.
Students write a summary of the reading selection on the large index card. Depending on the age and ability of the students, teachers may need to give guidance as to number of sentences, sentence structure, whether misspellings will be noted, etc…
Students are then given the medium sized card and have to take the information from the large card and condense it onto the medium-sized card.
Finally, students are given the small card and must take the information from the medium sized card and condense it further onto the small card.
This is a great way for students to get to the main point/idea and engage in the CCCS practices of “close reading of text” and summarization, a skill needed across the curriculum.
|
Before Albert Einstein, the German physicist Max Planck had notably prepared the way for the concept by explaining that objects that emit and absorb light do so only in amounts of energy that are quantized; that is, every change of energy can occur only in certain discrete amounts, and the object cannot change its energy in an arbitrary way. The modern concept of the photon came into general use after the physicist Arthur H. Compton demonstrated (1923) the corpuscular nature of X-rays. This validated Einstein’s hypothesis that light itself is quantized.
The term photon comes from the Greek phōtos, “light”, and a photon is usually denoted by the symbol γ (gamma). Photons are also symbolized by hν (in chemistry and optical engineering), where h is Planck’s constant and the Greek letter ν (nu) is the photon’s frequency. The radiation frequency is a key parameter of every photon, because it determines the photon’s energy. Photons are categorized according to their energies, from low-energy radio waves and infrared radiation, through visible light, to high-energy X-rays and gamma rays.
Photons are gauge bosons for electromagnetism, having no electric charge or rest mass and one unit of spin. Common to all photons is the speed of light, the universal constant of physics. In empty space, the photon moves at c (the speed of light – 299 792 458 metres per second).
Momentum of Photon
A photon, the quantum of electromagnetic radiation, is an elementary particle, which is the force carrier of the electromagnetic force. The modern photon concept was developed (1905) by Albert Einstein to explain the photoelectric effect; in this work he proposed the existence of discrete energy packets during the transmission of light.
In 1916, Einstein extended his concept of light quanta (photons) by proposing that a quantum of light has linear momentum. Although a photon is massless, it has momentum, which is related to its energy E, frequency f, and wavelength by:
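In standard notation this relation reads p = E/c = hf/c = h/λ, where h is Planck’s constant, f the frequency, λ the wavelength and c the speed of light.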
Thus, when a photon interacts with another object, energy and momentum are transferred, as if there were a collision between the photon and matter in the classical sense.
Momentum of a Photon – Compton Scattering
The Compton formula was published in 1923 in the Physical Review. Compton explained that the X-ray wavelength shift is caused by the particle-like momentum of photons. The Compton scattering formula is the mathematical relationship between the shift in wavelength and the scattering angle of the X-rays. In Compton scattering, a photon of frequency f collides with an electron at rest. Upon collision, the photon bounces off the electron, giving up some of its initial energy (given by Planck’s formula E=hf). While the electron gains momentum (mass × velocity), the photon cannot lower its velocity. As a result of the momentum conservation law, the photon must instead lower its momentum, given by:
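p = h/λ (so a lower momentum means a longer wavelength).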
So the decrease in the photon’s momentum must be translated into a decrease in frequency (an increase in wavelength, Δλ = λ’ – λ). The shift of the wavelength increases with the scattering angle according to the Compton formula:
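Δλ = λ’ – λ = (h / (mec)) × (1 – cos(Θ))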
where λ is the initial wavelength of the photon, λ’ is the wavelength after scattering, h is the Planck constant (6.626×10−34 J.s), me is the electron rest mass (0.511 MeV/c²), c is the speed of light and Θ is the scattering angle. The minimum change in wavelength (λ′ − λ) for the photon occurs when Θ = 0° (cos(Θ)=1) and is zero. The maximum change in wavelength (λ′ − λ) for the photon occurs when Θ = 180° (cos(Θ)=-1). In this case the photon transfers as much momentum as possible to the electron. The maximum change in wavelength can be derived from the Compton formula:
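Δλmax = 2h / (mec) ≈ 4.86×10−12 m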
The quantity h/mec is known as the Compton wavelength of the electron and is equal to 2.43×10−12 m.
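As a quick check of these numbers, here is a minimal Python sketch (an illustration added here, not part of the original article) that evaluates the Compton shift for a few scattering angles; the electron mass is entered in kilograms (9.109×10−31 kg) rather than in MeV:

import math

# Physical constants in SI units.
H = 6.626e-34      # Planck constant, J.s
M_E = 9.109e-31    # electron rest mass, kg
C = 2.998e8        # speed of light, m/s

COMPTON_WAVELENGTH = H / (M_E * C)   # h/(me*c) ≈ 2.43e-12 m, as quoted above

def compton_shift(theta_deg):
    """Wavelength shift (metres) for a photon scattered through theta_deg degrees."""
    return COMPTON_WAVELENGTH * (1.0 - math.cos(math.radians(theta_deg)))

print(compton_shift(0.0))      # 0.0: no shift for forward scattering
print(compton_shift(90.0))     # ≈ 2.43e-12 m: one Compton wavelength
print(compton_shift(180.0))    # ≈ 4.86e-12 m: the maximum shift (backscattering)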
|
Cancer is one of the world’s biggest killers. But recent research suggests that simple lifestyle changes, like following a healthy diet, could avoid 30–50% of all cancers. Increasing evidence points to some dietary practices increasing or decreasing the risk of cancer.
Nutrition is therefore thought to play a significant role in treating and coping with cancer.
Eating too much of certain foods may increase cancer risk
It is difficult to prove that specific foods cause cancer. However, it has been observed that a high intake of certain foods may increase the probability of developing cancer.
Sugar and refined carbs
Processed foods which are high in sugar and low in fiber and nutrients have been associated with a higher cancer risk. In particular, a diet that causes blood glucose levels to spike has been linked to an increased risk of several cancers, such as breast, stomach, and colorectal cancers.
Also, higher levels of blood glucose and insulin can cause inflammation which can ultimately lead to cancer. That is why people with diabetes — a condition with high blood glucose and insulin — have an increased risk of developing cancer. For instance, the risk of colorectal cancer is 122% higher if you are diabetic. Therefore, to protect against cancer, avoid foods which boost insulin levels, like foods high in refined carbs and sugar.
The International Agency for Research on Cancer (IARC) has classified processed meat as a carcinogen, something which causes cancer.
Processed meat is meat that has been treated to preserve flavor by salting, curing or smoking. It includes ham, hot dogs, salami, bacon, and some deli meats. Strong evidence suggests that eating a large amount of processed and red meat can increase the risk of bowel, pancreatic and stomach cancer.
A review of over 800 studies observed that consuming only 50 grams of processed meat each day raised the risk of colorectal cancer by 18%. Red meat consumption may also increase cancer risk, but fresh white meat (like chicken) and fish are not associated with an increased risk of cancer. Some reviews which combined results from many studies found that the evidence linking unprocessed red meat to cancer is inconsistent and weak.
Cooking certain foods at high temperatures, like frying, grilling, broiling and barbequing, can yield harmful compounds such as heterocyclic amines (HA) and advanced glycation end-products (AGEs). An excess of these compounds can cause inflammation and contribute to the development of cancer and other diseases.
Certain foods, like animal foods high in protein and fat and highly processed foods, mostly produce these harmful compounds at high temperatures. These include meat — mostly red meat — butter, certain cheeses, fried eggs, cream cheese, mayonnaise, nuts, and oils.
To decrease cancer risk, avoid burning food. Choose gentler cooking methods, especially while cooking meat, like steaming, stewing or boiling.
Many observational studies have shown that high dairy ingestion may increase the risk of prostate cancer. One study surveyed nearly 4,000 men with prostate cancer. Results showed that high intakes of whole milk increased the risk of disease development and death.
Theories suggest that this may be due to an increased calcium intake, insulin-like growth factor 1 (IGF-1) or estrogen hormones from pregnant cows, but none of these factors has been strongly linked to prostate cancer.
Being overweight or obese is linked to increased cancer risk
Obesity is the biggest risk factor for cancer worldwide. It also increases your risk of 13 different types of cancer, like colon, esophagus, pancreas, and kidney, as well as breast cancer after menopause.
Obesity can increase cancer risk in three key ways:
• Excess body fat contributes to insulin resistance, so your cells can’t take up glucose properly, which encourages them to multiply faster.
• Obese people have higher levels of inflammatory cytokines in their blood, which cause inflammation and encourage cell division.
• Fat cells increase the estrogen levels, which raises the risk of ovarian and breast cancer in postmenopausal women.
But fortunately, several studies have shown that weight loss among obese people is likely to reduce cancer risk.
Certain foods contain cancer-fighting properties
Scientists estimate that eating the optimum diet for cancer may reduce risk by up to 70%. They consider that certain foods can combat cancer by blocking the blood vessels which nourish tumors, in a process called anti-angiogenesis.
However, nutrition is complex, and how certain nutrients fight cancer depends on how they are cultivated, processed, and cooked.
Some of the important anti-cancer food groups include;
Higher consumption of vegetables is associated with a lower risk of cancer. Many vegetables contain cancer-fighting phytochemicals and antioxidants.
For example, cruciferous vegetables, including broccoli and cabbage, contain sulforaphane, which reduces tumor size in mice by more than 50%. Other vegetables, like tomatoes and carrots, can decrease the risk of lung, prostate, and stomach cancer.
Fruits also contain antioxidants and phytochemicals, which may help prevent cancer. One review showed that three servings of citrus fruits per week lowered stomach cancer risk by 28%.
Flaxseeds have protective effects against certain cancers and may even reduce the spread of cancer cells.
Some studies have found that cinnamon may have anti-cancer properties and may prevent cancer cells from spreading. Additionally, curcumin, present in turmeric, may help combat cancer.
Beans and legumes
They are high in fiber, and a higher intake of this nutrient may protect against colorectal cancer. One study of over 3,500 people found that eating the most legumes can lower the risk of certain cancers by 50%.
Frequently eating nuts may be associated with a lower risk of certain types of cancer. For instance, one study in over 19,000 people found that people who ate more nuts had a reduced risk of dying from cancer.
Several studies show a connection between olive oil and reduced cancer risk. One large review found that people who consumed the highest amount of olive oil had a 42% lower risk of cancer.
Garlic contains allicin, which has shown cancer-fighting properties in test-tube studies. Other studies have found a link between garlic intake and a lower risk of cancer, including prostate and stomach cancer.
There is evidence that eating fresh fish can help protect against cancer, possibly because it can reduce inflammation. A review of 41 studies found that regularly eating fish reduced colorectal cancer risk by 12%.
Most of the evidence suggests that eating certain dairy products may decrease the risk of colorectal cancer. The form and amount of dairy consumed are also significant.
For instance, moderate intake of high-quality dairy products, like raw milk, milk from grass-fed cows, and fermented milk products may have a protective effect. But high consumption of processed dairy products is associated with an increased cancer risk.
Plant-based diets may help protect against cancer
Plant-based foods have been linked with a reduced cancer risk. Studies have found that persons who follow a vegetarian diet have a reduced risk of developing cancer. In fact, a review of 96 studies found that vegetarians and vegans may have an 8% and 15% lower cancer risk, respectively.
The right diet can have positive effects on people with cancer
Muscle loss and malnutrition are common in people with cancer and have a negative effect on their health and survival. Optimal nutrition can help prevent malnutrition and enhance the quality of life in people with cancer. A balanced diet with adequate protein and calories is best.
Ketogenic diet shows some potential for treating cancer, but the evidence is weak
Early research suggests that a ketogenic diet may slow tumor growth and improve quality of life without adverse side effects. A ketogenic diet lowers insulin and blood sugar levels, possibly causing cancer cells to starve. In fact, research has shown that it can reduce tumor growth and improve survival rates in animal and test-tube studies. However, further research is still needed.
The bottom line
Although there are no miracle foods which can prevent cancer, some evidence suggests that dietary habits can provide protection. Generally, people with cancer are encouraged to follow a balanced, healthy diet to improve their quality of life and support optimal health outcomes.
|
Linda Crampton is a writer and teacher with a first-class honors degree in biology. She often writes about the scientific basis of disease.
Streptococcus is a common genus of bacteria in and on our bodies. Some members of the genus are harmless, but others are responsible for problems such as tooth decay, pneumonia, strep throat, and necrotizing fasciitis (flesh-eating disease). At least one type is beneficial, however. It's used to create fermented foods like certain yogurts and cheeses.
Genus names are normally capitalized. Streptococcus is such a common genus that the uncapitalized terms "streptococcus" (singular) and "streptococci" (plural) are often used to refer to the members of the genus. The genus is divided into a number of groups. Two groups that are important with respect to human health are Group A and Group B. I discuss the following topics related to these groups in this article.
- The Lancefield classification system for bacteria
- Features of streptococcus cells
- Group A streptococcus (or GAS) facts
- Five diseases caused by GAS
- Group B streptococcus (or GBS) facts
- Adult and infant problems caused by GBS
Streptococcus pyogenes and Streptococcus mutans are scientific names. The scientific name of an organism consists of two words—the genus and the species. The first word in the name is the genus and the second word is the species. Like S. pyogenes, S. mutans cells often join to form chains.
The Lancefield Classification System
Bacteria are amazing organisms with many different features. Scientists have tried to bring order to their classification. The Lancefield system for classifying a particular category of bacteria was created by a microbiologist named Rebecca Lancefield (1891 to 1985). The bacteria that she studied belonged to the family Streptococcaceae and were Gram-positive and catalase-negative.
The results of the Gram staining test depend on the structure of the outer covering of a bacterial cell. Gram-positive cells and Gram-negative ones display a different color when they are tested with the stain. The term "Gram" is derived from the name of Hans Christian Gram, who created the staining procedure. Catalase is an enzyme that converts hydrogen peroxide to water and oxygen. It's found in many organisms that use oxygen. Streptococci lack the enzyme, however.
Lancefield divided the bacteria into alphabetical groups going from Group A to Group S. Her system is not often used in its original form today, but the Group A and B categories are still used. As in the case of the name of the bacterium’s genus, the word "group" in the terms “Group A” and “Group B” is sometimes capitalized but often isn't.
Groups and Features of Streptococcus Cells
The various strains of group A streptococcus (GAS) all belong to one species—Streptococcus pyogenes. Strains are slightly different members of a species. As in the case of group A streptococci, the different strains of group B streptococcus (GBS) all belong to one species—in this case, Streptococcus agalactiae.
Streptococci have round cells, which are often attached to each other to form pairs or chains. They are known as lactic acid bacteria because they feed on carbohydrates and obtain energy by converting the carbohydrates into lactic acid. They don’t need oxygen to survive. Some can use oxygen if it's available but can also live without it; some don't use oxygen but can tolerate its presence; and some are inhibited by oxygen.
Group A Streptococcus (GAS) Facts
Streptococci in group A live on our skin and in our throats and usually cause no problems. Occasionally they make us ill, however. The illnesses are generally relatively mild, such as strep throat, impetigo, or scarlet fever, but they may be more serious, like rheumatic fever.
Rarely, the bacteria become invasive and penetrate further into the body, as occurs in necrotizing fasciitis. An invasive GAS infection can be very dangerous. In general, the people who develop invasive infections have a chronic illness or are elderly, but this isn't always the case.
Strep throat is also known as streptococcal pharyngitis. The disorder generally occurs in children and young teenagers. It's spread by drops of saliva or nose fluid transferred from an infected person. This transfer is most likely to happen in a crowded environment.
Symptoms of a strep throat may include a red, swollen, and painful throat and white patches on the tonsils. Lymph nodes may also be swollen. In addition, the sufferer may experience a fever, a headache, nausea, vomiting, or stomach pain. Strep throat is often treated by antibiotics in order to prevent the bacteria from travelling deeper into the body and causing a more serious illness.
Not every sore throat is caused by streptococcus. A sore throat caused by a virus won't respond to antibiotic treatment. A test called a throat swab is often performed to confirm the presence of a streptococcus bacterium.
Scarlet Fever Disease
An untreated case of strep throat may lead to scarlet fever. Scarlet fever is generally not the serious disease that it once was, but it's still an unpleasant illness. The streptococcus bacteria responsible for strep throat produce a toxin. In some people, the toxin causes a bright red rash on the skin. The rash generally appears on the face and neck first and then spreads to other parts of the body. Red streaks may form in skin creases.
In addition to a rash and a sore throat, someone with scarlet fever may have swollen neck glands, a fever, body aches, nausea, and vomiting. Antibiotics are often used to treat the disorder.
A Disease Comeback
In recent years, researchers have noticed that scarlet fever appears to be making a comeback. At the moment, this observation hasn't been explained, but at least one theory has appeared. Researchers at the University of Queensland in Australia have been studying the situation in cooperation with scientists from other countries. They believe that the increased occurrence of scarlet fever has been caused by a viral infection of the bacteria. They are afraid that the bacteria are becoming “stronger” as well as more common.
Viruses contain genetic material, but they don't consist of cells and can't reproduce on their own. They must enter a cell in order to use its equipment for generating new virus particles. The researchers believe that a virus has left genes behind in scarlet fever bacteria that enable the bacteria to make novel toxins. As the microbes with the new genes reproduce and pass copies of their genes to their offspring, the disease that they cause becomes more serious.
Though the researchers’ idea is still a theory, the fact that scientists from multiple institutions are contributing to the research suggests that it should be taken seriously. At the moment, though the disease is becoming more common, the researchers haven’t noticed fatalities due to the infection. Antibiotics are still helping. The potential development of antibiotic resistance in the bacteria is a concern, however, as it is in all bacterial infections.
Rheumatic fever is a potentially serious disorder that may be a complication of a strep throat or scarlet fever infection. The illness involves widespread inflammation that may occur in several parts of the body, including the joints, heart, and nervous system. Rheumatic fever generally occurs in children and teenagers, but it sometimes develops in adults. The disorder generally appears two to three weeks after the initial streptococcus infection.
Symptoms of rheumatic fever may include joint pain and swelling, a fever, a rash, nodules under the skin, stomach pain, nosebleeds, chest pain, shortness of breath due to an inflamed heart, and jerky movements. There may be permanent damage to the heart valves. Treatment often involves antibiotics and anti-inflammatory medications. Adults who develop rheumatic fever may find that they experience recurring episodes of the illness.
Impetigo is a common and easily spread skin infection in children. It also occurs in adults. The disorder is caused by group A streptococci as well as some other types of streptococcus. It's characterized by the appearance of blisters or red patches on the skin, especially around the nose and mouth. The blisters may also appear on the neck, hands, forearms, and diaper area.
Impetigo is spread by body contact with an infected area on someone's skin or by touching items that have rubbed against the blisters, such as toys or towels. Doctors often treat the disease with a topical antibiotic, which is placed on the blisters, or with an oral antibiotic.
Necrosis is the death of body tissue. A fascia is a sheath of connective tissue that surrounds muscles. In necrotizing fasciitis (pronounced "fasheitis"), fasciae are inflamed and destroyed due to a streptococcus infection. Skin and the fat under the skin may be destroyed as well as the fasciae and muscles.
Necrotizing fasciitis is rare but potentially very serious. It's sometimes known as the flesh-eating bacteria disease. The disease may involve other types of bacteria as well as or instead of streptococcus. The chance of developing necrotizing fasciitis increases if a person has a skin wound when they are exposed to bacteria that can cause the disease. A weakened immune system or a chronic disease such as diabetes, kidney disease, liver disease, or cancer may also allow necrotizing fasciitis to develop.
Symptoms of necrotizing fasciitis include a wound that becomes very painful, red, hot, and swollen. The tissue eventually turns purple or black if the infection isn't treated. The patient may also experience a fever, chills, nausea, vomiting, and diarrhea. He or she may go into shock and have organ failure.
Possible Treatment for Necrotizing Fasciitis
Necrotizing fasciitis progresses rapidly and requires early and aggressive treatment. Antibiotics are generally given to kill bacteria. Surgery is sometimes needed to remove infected and dead tissue. Sometimes limbs need to be amputated. Extra treatments will be required if a person is in shock or has organ damage. Hyperbaric oxygen therapy is helpful in some cases of necrotizing fasciitis. In this therapy, oxygen is forced into the patient's tissues under high pressure.
Although necrotizing fasciitis can be life threatening, it can be treated successfully. One of my acquaintances (who was in his twenties and healthy at the time) developed the disease after a skin wound on his arm. He required antibiotics and hyperbaric oxygen therapy to treat the infection as well as multiple surgeries to remove infected and dead tissue. Once he recovered from the infection, he received plastic surgery on his arm. Although complete recovery took a long time, he is now able to play the guitar again, which is one of his favourite activities.
Most cases of necrotizing fasciitis are caused by streptococcus. In Aimee Copeland's case, however, the causative agent was a bacterium named Aeromonas hydrophila. Aimee survived the infection, but she required amputations in order to do so. She lost both hands, one leg, and one foot. She also experienced multiple organ failure during the infection. Her story is told below.
Group B Streptococcus (GBS) Facts
In many people, group B streptococci are a normal component of the bacterial population in the large intestine. The bacteria may also live in the reproductive tract and the urinary tract. They generally produce no symptoms in healthy people. Unfortunately, they may cause disease in elderly people or in ones who have health problems such as diabetes, cancer, liver disease, or kidney disease. They may also cause a problem in newborn babies. Like group A bacteria, group B ones sometimes become invasive.
Group B Strep Infection in Adults
People aged 65 or older or people with certain chronic diseases are most likely to develop symptoms of a GBS infection. Infected people may develop skin problems or a urinary tract infection. More seriously, they may develop pneumonia, a blood infection (sepsis), a bone infection, inflamed heart valves, or meningitis. Meningitis is a disorder in which the membranes around the brain become inflamed.
It's important that people with any of the following symptoms visit a doctor for a diagnosis and treatment. A relatively mild infection that is untreated may become more serious. Symptoms of GBS disease may include:
- inflamed bumps on the skin
- symptoms of a urinary tract infection, such as a burning sensation when urinating and excessive urination
- difficulty breathing
- rapid breathing
- chest pain
- stiff joints
Group B Streptococcus and Newborn Babies
If a woman with group B streptococci in her reproductive tract becomes pregnant, her baby may become colonized with the bacteria during birth. This colonization may cause no ill effects. In some cases, however (if no treatment is provided), the baby may develop a serious disease, such as pneumonia, meningitis, or blood infections, all of which may be life-threatening. Premature babies are more susceptible to infection than full-term babies.
Modern prevention and treatment programs have greatly reduced the problem of a GBS infection in newborns. Women are often tested for the presence of a group B streptococcus before their baby is born. If the bacteria are present, intravenous antibiotics may be given during labour. Doctors generally don't give the mother antibiotics any earlier since the bacteria may regrow before the baby is born. The baby is tested for the presence of the bacteria after birth and treated if necessary.
A GBS infection transmitted to a baby during birth and producing symptoms during the first week of its life is known as early-onset group B strep disease. Some babies develop an infection between one week and three months after birth, however. This infection is known as late-onset disease and is not well understood. Unfortunately, it can't yet be prevented, but it can be treated.
Interesting and Troublesome Bacteria
Streptococci are interesting but sometimes troublesome bacteria that may have major effects on our lives. Although the ability of groups A and B streptococci to cause multiple health problems is fascinating biologically, the problems can sometimes be serious or even life threatening. Hopefully, we will soon find more effective ways to prevent and treat streptococcus infections of any type.
- “Group A Streptococcal Infections” from HealthLinkBC (a government of British Columbia organization)
- Strep throat information from the Mayo Clinic
- Facts about scarlet fever from the CDC (Centers for Disease Control and Prevention)
- Scarlet fever is making a comeback from ABC (Australian Broadcasting Corporation) news
- Information about rheumatic fever from the Mayo Clinic
- Impetigo facts from the NHS (National Health Service)
- Information about necrotizing fasciitis from WebMD
- “Group B Strep“ description from the CDC
- Group B Strep infections in babies from WebMD
This content is accurate and true to the best of the author’s knowledge and does not substitute for diagnosis, prognosis, treatment, prescription, and/or dietary advice from a licensed health professional. Drugs, supplements, and natural remedies may have dangerous side effects. If pregnant or nursing, consult with a qualified provider on an individual basis. Seek immediate help if you are experiencing a medical emergency.
© 2011 Linda Crampton
Linda Crampton (author) from British Columbia, Canada on September 20, 2011:
Thanks for a very interesting comment, Seeker7, and for the vote as well! Antibiotics are an interesting topic to explore.
Helen Murphy Howell from Fife, Scotland on September 20, 2011:
A fascinating hub about the Streps! I know they can be dangerous but they are so interesting as well. It was also interesting to hear about scarlet fever and rheumatic fever. I remember my Mum, many years ago, telling me about people - especially children - that she had looked after. In those days when 'the fevers' were very dangerous nurses could train to be a Fever Nurse and then go on to do general training, which is what Mum did. Some of her stories were scary but fascinating as well.
I liked how you mention about anti-biotics!! I don't know how many times I've had to tell people not to throw their anti-biotics out but to finish the course. This is not even patients, but my own family! As soon as folks feel better they think it's okay just to dump the rest of their medication down the loo! Then they complain either because they think that the anti-biotics haven't worked or because they need to make another trip to the doctor??!!
I really enjoyed this hub - very interesting indeed! Voted up.
Linda Crampton (author) from British Columbia, Canada on September 11, 2011:
Thank you very much, Prasetio! I appreciate the comment and the vote.
prasetio30 from malang-indonesia on September 11, 2011:
Nice hub and I thought we should know about this information. I really enjoy your explanation about Streptococcus bacterial and all the videos above. You have done a great job. Vote up!
Linda Crampton (author) from British Columbia, Canada on September 07, 2011:
Thanks a lot for the comment and the votes, Tina. Yes, bacteria can be our friends or our enemies!
Christina Lornemark from Sweden on September 07, 2011:
A very interesting hub and you have done a great job writing this in a way that is easy to read. Great videos to. Bacteria are important but can also cause trouble! Voted up, interesting
Linda Crampton (author) from British Columbia, Canada on August 31, 2011:
Oh my goodness, Susan! That must have been a horrible experience for you. I’m so glad that you recovered. A former student of mine developed necrotizing fasciitis in his hand last year from a Staphylococcus infection. At one point the doctors thought that they would have to perform an amputation, which is traumatic for anyone, but was also very depressing for my student because he loves to play the guitar. Luckily he recovered and is still able to play the guitar.
Susan Zutautas from Ontario, Canada on August 31, 2011:
Very informative hub. I have a hub which is a short story written on Necrotizing Fasciitis as I had this in my arm.
Linda Crampton (author) from British Columbia, Canada on August 30, 2011:
Thanks a lot for the comment, Danette. Streptococcus is certainly a versatile bacterium! Yes, impetigo is usually caused by the same bacterium that causes a strep throat, but sometimes it's caused by a different bacterium called Staphylococcus. Either way, it’s not a very nice condition!
Danette Watt from Illinois on August 30, 2011:
Hey Alicia, I always enjoy your science-y hubs - lots of good info and interesting topics. I didn't realize impetigo was from the same strep bacteria as strep throat. I remember my younger son having that as an infant. Voted up and interesting
|
Warm front facts for kids
A warm front is a leading edge of a warmer air mass that is advancing into a cooler air mass.
Warm fronts usually have stratus and cirrus clouds, but sometimes they also have cumulus and cumulonimbus clouds. Before the warm front passes, there can be rain or snow. While it is passing, there is often light rain or drizzle.
Warm fronts move more slowly than cold fronts.
Warm front Facts for Kids. Kiddle Encyclopedia.
|
The Role of Metacognition in Learning and Achievement
Excerpted from “Four-Dimensional Education: The Competencies Learners Need to Succeed,” by Charles Fadel, Bernie Trilling and Maya Bialik. The following is from the section, “Metacognition—Reflecting on Learning Goals, Strategies, and Results.”
Metacognition, simply put, is the process of thinking about thinking. It is important in every aspect of school and life, since it involves self-reflection on one’s current position, future goals, potential actions and strategies, and results. At its core, it is a basic survival strategy, and has been shown to be present even in rats.
Perhaps the most important reason for developing metacognition is that it can improve the application of knowledge, skills, and character qualities in realms beyond the immediate context in which they were learned. This can result in the transfer of competencies across disciplines—important for students preparing for real-life situations where clear-cut divisions of disciplines fall away and one must select competencies from the entire gamut of their experience to effectively apply them to the challenges at hand. Even within academic settings, it is valuable—and often necessary—to apply principles and methods across disciplinary lines.
|
Indian Constitution Questions for IBPS PO, Clerk, Competitive Exams
The Constitution of India is a top topic in the IBPS General Knowledge syllabus. Various constitution-based questions are asked every year in all types of exams, such as banking, UPSC, and SSC. If you have good knowledge of this topic, you can easily answer any question asked about it.
Here, IBPS Recruitment Guide helps you better understand the constitution questions topic, covering its history, features, kinds, important topics, notes, example questions, and more.
Stay focused while reading, because every paragraph of this article can give rise to multiple-choice constitution questions. Every topic covers up to 3 – 5 questions. Here we only cover the important topics of the constitution for the IBPS exam.
It is best practice to focus on the whole topic rather than on a single question. The examiner can ask a question from anywhere, so you should prepare the topic as a whole. Here you can also check more about IBPS General Knowledge preparation tips.
Constitution Questions for IBPS Exam
Every paragraph here contains different Indian constitution GK questions, so read carefully.
What is the Constitution?
A constitution is a body of fundamental principles or established precedents according to which a state or other organization is acknowledged to be governed. It is the framework that helps a state or organization run. It is a body of entrenched rules that govern the conduct of an organization, nation, or state and establish its concept, structure, and character.
It usually takes the form of a short document that is general in nature and embodies the aspirations and values of its writers and subjects. The oldest written constitution is that of the US, adopted in 1787.
Many rules and laws pertain to a society. When the founders wrote the constitution, they created a new and remarkable experiment in government that spelled out the equality of all men and allowed flexibility for change over time.
Why is the constitution important?
A constitution is useful for an organization of any size, such as a community group that may choose to organize its meetings through Robert's Rules of Order.
When these principles are written into a single document or a set of legal documents, those documents can be said to embody a written constitution. If they are written down in a single comprehensive document, it is said to embody a codified constitution.
Constitutions concern different levels of organization, from sovereign states to companies and unincorporated associations. Some codified constitutions act as limits on state power by establishing lines that a ruler cannot cross, such as fundamental rights.
What is India’s constitution?
India's constitution is the longest written constitution of any sovereign country. It has 444 articles, 22 parts, 12 schedules, and 118 amendments, with 117,369 words in its English-language version, whereas the US constitution is the shortest written constitution, with 7 articles and 27 amendments totalling about 4,400 words.
The constitution of India sets out how the state is organized and how power is distributed among the authorities of government, different political units, and citizens. This paragraph gives rise to various constitution questions for bank exams, such as:
– How many articles are there in the Indian constitution?
– Which is the longest written constitution in the world?
Birth of the constitution in India
India's constitution was adopted by the Constituent Assembly on 26 Nov 1949 and came into effect on 26 Jan 1950. The Constituent Assembly was elected for undivided India and held its first sitting on 9 Dec 1946.
Its members were chosen by indirect election by the members of the provincial legislative assemblies; at the time of signing, 284 of its 299 members signed the document. The constitution of India drew on Western legal tradition and follows the British parliamentary pattern. It embodies fundamental rights modelled on those of the US, and it also borrows the concept of a Supreme Court.
India has a federal system in which residual legislative power remains with the central government, similar to Canada. The constitution provides detailed lists dividing power between the central and state governments, as in Australia. The Indian constitution is one of the most frequently amended constitutions in the world.
IBPS Constitution Questions based on topics
8 facts about the constitution of India:
Question – What are facts of Indian constitution?
- The Indian constitution is the largest in the world.
- It took 3 years to draft the Indian constitution.
- The Indian constitution was handwritten and calligraphed both in English and Hindi.
- The Indian constitution is called the bag of borrowings.
- The Indian constitution came into effect on Jan 26, 1950.
- Our republic day is celebrated for 3 days.
- The national emblem of India was adopted on Jan 26, 1950.
- B.R. Ambedkar had a major role to play in the formulation of the Indian constitution.
Question – Who wrote the Indian constitution?
22 parts of Indian constitution –
- Part 1 the union and its territory, art 1 to 4
- Part 2 citizenship, art 5 to 11
- Part 3 fundamental rights, art 12 to 35
- Part 4 directive principles, art 36 to 51
- Part 4A fundamental duties, art 51A
- Part 5 the union, art 52 to 151
- Part 6 the states, art 152 to 237
- Part 7 repealed by the Constitution (Seventh Amendment) Act, 1956
- Part 8 union territories, art 239 to 242
- Part 9 the panchayats, art 243 to 243O
- Part 9A the municipalities, art 243P to 243ZG
- Part 9B co-operative societies, art 243ZH to 243ZT
- Part 10 the scheduled and tribal areas, art 244 to 244A
- Part 11 relations between the union and the states, art 245 to 263
- Part 12 finance, property, contracts, art 264 to 300A
- Part 13 trade, commerce and intercourse within the territory of India, art 301 to 307
- Part 14 services under the union and the states, art 308 to 323
- Part 14A tribunals, art 323A to 323B
- Part 15 elections, art 324 to 329
- Part 16 special provisions, art 330 to 342
- Part 17 official language, art 343 to 351
- Part 18 emergency provisions, art 352 to 360
- Part 19 miscellaneous, art 361 to 367
- Part 20 amendment of the constitution, art 368
- Part 21 temporary, transitional and special provisions, art 369 to 392
- Part 22 short title, commencement, authoritative text in Hindi, and repeals, art 393 to 395
Important articles of the Indian constitution:
Article 12-35 specify the fundamental rights
Article 36-51 specify directive principle of states policy
Article 51 A specify fundamental duties of every citizen
Article 80 number of seats in the Rajya Sabha
Article 81 number of seats in the Lok Sabha
Article 343 Hindi as the official language
Article 356 imposition of President's Rule in a state
Article 368 amendment of the constitution
Article 370 special status to Kashmir
Article 395 repeals the Indian Independence Act and the Government of India Act, 1935.
12 schedules of Indian constitution:
Question – How many Schedules in the constitution?
- Sch 1: contains the list of states and union territories and their territories.
- Sch 2: contains provisions as to the President, the Governors of states, the Speaker and Deputy Speaker of the House of the People, the Chairman and Deputy Chairman of the Council of States, the Speaker and Deputy Speaker of the Legislative Assembly and the Chairman and Deputy Chairman of the Legislative Council of a state, the judges of the Supreme Court and of the High Courts, and the Comptroller and Auditor-General of India.
- Sch 3: contains forms of oaths and affirmations.
- Sch 4: contains provisions as to the allocation of seats in the Council of States.
- Sch 5: contains provisions as to the administration and control of scheduled areas and scheduled tribes.
- Sch 6: contains provisions as to the administration of tribal areas in the states of Assam, Meghalaya, Tripura, and Mizoram.
- Sch 7: contains the Union List, the State List, and the Concurrent List.
- Sch 8: contains the list of recognized languages.
- Sch 9: contains provisions as to the validation of certain acts and regulations.
- Sch 10: contains provisions as to disqualification on the ground of defection.
- Sch 11: contains the powers, authority, and responsibilities of panchayats.
- Sch 12: contains the powers, authority, and responsibilities of municipalities.
We hope this article on Constitution Questions for IBPS Recruitment helps you understand the topic. The article still requires more topics, and we will update it soon. You can also buy some Indian constitution books for more in-depth detail. If you want to share something with us on basic constitution questions, or share your experience of the IBPS exam, you can use the comment section given below.
|
Bad moon rising: Astronomers explain "full moon curse"
The full moon has long been associated with any number of superstitions. While links with lunacy, violence, fertility, disasters, and the stock market have been thoroughly debunked, a causative role in some arenas remains possible. A lunar ranging study carried out using reflectors has long contended with the "Full-Moon Curse," a near-total fading of reflected signals during the full Moon. This Curse is real, and has now been explained.
Lunar ranges have been collected for decades, most recently by astronomer Tom Murphy's group at UC San Diego, which is using these data to carry out a stringent test of general relativity. Lunar ranging works by sending laser pulses from the Earth to the Moon and back, and timing the duration of the round trip.
The observations are now carried out using a 3.5 m (140 in) telescope at New Mexico's Apache Point Observatory. Twenty 532 nm laser pulses, each having an energy of 115 mJ and a duration of 100 ps, are directed each second through the telescope onto the lunar surface. The pulses then strike retroreflectors left on the Moon by Apollo astronauts and on a Soviet lunar rover. These optical prisms precisely direct reflected light back along the path on which it arrived; that is, back to the original telescope where their arrival is timed.
The time required for the round trip can be measured to within a few picoseconds, allowing the distance to the Moon to be measured with a precision of about a millimeter (0.04 in). The level of precision is remarkable, considering that only one photon, on average, is detected in each return pulse – roughly 1 in 100 quadrillion photons sent from the telescope to the Moon.
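A small illustrative sketch of the round-trip arithmetic (distance = speed of light x time / 2), showing why timing to a few picoseconds corresponds to roughly millimeter-level precision; the round-trip time used here is a made-up, plausible value, not measured data:

```java
public class LunarRangeSketch {
    public static void main(String[] args) {
        final double c = 299_792_458.0; // speed of light, m/s

        // Illustrative round-trip time of about 2.56 s (not from the experiment)
        double roundTripSeconds = 2.56;
        double distanceMeters = c * roundTripSeconds / 2.0;
        System.out.printf("One-way distance: %.0f km%n", distanceMeters / 1000.0);

        // A timing uncertainty of a few picoseconds maps to roughly a millimeter
        double timingErrorSeconds = 5e-12; // 5 ps
        double distanceErrorMeters = c * timingErrorSeconds / 2.0;
        System.out.printf("Distance uncertainty: %.2f mm%n", distanceErrorMeters * 1000.0);
    }
}
```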
In short, a very difficult experiment is being carried out brilliantly. Then appeared the Full-Moon Curse. On the night of the full Moon, the strength of the returned signal drops roughly tenfold. Not just once; the drop follows the lunar phases month after month.
Photons don't just disappear. To avoid detection, they must either be absorbed or misdirected. Moon dust is an obvious source for absorbing material, and indeed the prism surface appears to be about 50 percent blocked by dust. However, its effect on the photon return signal is essentially constant, as there appears to be no reason why dust should accumulate every full Moon, just to disappear the next day.
The actual reason turns out to be a bit more subtle, and depends on a tiny design detail that no one considered. As shown in the image above of a lunar retroreflector on the Moon's surface, the prisms are inset rather deeply into the retroreflector's surface. This means that the Sun's rays only strike the surface of the prisms when the Sun is high in the lunar sky – when the Moon appears fully illuminated from Earth.
Lunar dust, on average, is rather dark, absorbing about 93 percent of the light that hits it. When the Moon is full, the Sun's light hits the dust on the surface of the prisms, which does not occur on other days because of the insetting of the prisms. As a result, the dust, and the very front layers of the retroreflector prisms, heat up on the day of the full Moon.
In response to the temperature gradient across the prisms, both the shape and the homogeneity of the glass from which the prisms are made are compromised. This thermal response has rather the same effect as placing a very weak lens on the front of the retroreflectors.
This lensing effect defocuses the otherwise parallel light that emerges from the retroreflectors, making the return spot size larger on the Earth's surface. The telescope, which is tiny compared to the area over which the laser photons are returned to the Earth, therefore sees a smaller proportion of those photons. This decrease is expected only during full Moon.
It's a nice idea, but rather difficult to prove without going to the Moon's surface and making measurements. Then Prof. Murphy had an idea. If the Sun's heat was distorting the retroreflectors at full Moon, all he had to arrange to test that was to turn off the Sun, and see if the signal comes back.
Although turning off the Sun is a bit tricky, it happens naturally on the Moon's surface when we see a lunar eclipse. At that point, the Earth blocks the Sun's light from striking the Moon. Naturally, all lunar eclipses occur at full Moon.
Murphy's team eventually had good conditions for lunar ranging during a lunar eclipse. During the five and a half hour duration of the eclipse (a lunar eclipse is far longer than a solar eclipse, essentially because the Earth is larger than the Moon), the team reflected pulses from all the lunar retroreflectors. Before the eclipse, they saw the usual tenfold reduction in signal. When the progress of the eclipse put each retroreflector into the dark in turn, the full return signal returned. Finally, when the Sun again shone on the lunar surface, the signals fell to their usual full Moon level.
While these observations remain something short of complete proof of the mechanism, they do appear to confirm that the effect is driven by the Sun's heating of the lunar apparatus. While the lunar ranging Full-Moon Curse is now put to bed, it is an important reminder that just because something happens on an apparently supernatural schedule doesn't mean it can't be real.
Source: UC-San Diego
|
From Fort Wilkins, near Copper Harbor, MI, to Fort Howard in Green Bay, WI, runs the Old Military Road. The road was built during the Civil War to connect the two forts for rapid reinforcement. Before the road existed, a trail used by Indians and fur traders ran along the same route. The road also had many other uses, such as connecting the northern timber resources with the rest of the world. It played a large role in linking the Upper Peninsula with the rest of the country, which later proved important in industrializing the mining operations in the Upper Peninsula.
In 1863 the United States made a land grant to the states of Michigan and Wisconsin to build a military wagon road from fort to fort so that supplies, ammunition and mail could be transported from Green Bay to Lake Superior in case the passage around was cut off by an enemy in a future war. Around this time the United States was expanding and placing forts in key areas such as Fort Wilkins and Fort Howard. Some may question how this road was paid for at that time. The road was actually paid for in timber lands from the government, three sections for every mile of road.
The road was completed in 1872 but wasn't used as a highway for very long, because two railroads, the Chicago and Northwestern and the Wisconsin Central, were built in 1878 and reached Ashland, WI. The main concern at the time of construction was that the road was necessary for the military in case of a war with another great power, such as Great Britain, and also to protect Portage Lake, which was the center of the copper district at that time. The United States was largely concerned about the copper mining in the area and how easily other countries could invade from Lake Superior.
Although the main mission was to build a road from Fort Howard to Fort Wilkins, the road was also important for resources such as iron and copper in the area. As reported in the Chicago Tribune in 1862, "Marquette is the centre of one of the most extensive iron regions in the world." The resources in the area were obviously vital to the growth of the United States and of industries in the upper Midwest. The military road connecting these two forts wasn't the first in the state; in fact, military roads in Michigan began in 1813, when General Lewis Cass sought to enlist the support of Congress to begin building roads for military defenses in Detroit. The importance of the military and its resources was quite evident then, as the country was growing into a large military power. In order to become one of the top military powers in the world, the country would have to expand, which can be seen in the construction of these military roads. The military road from Fort Howard to Fort Wilkins was significant to the area for travel and also proved that there were valuable resources in Northern Michigan that were important for the expansion of the country and its military.
The Civil War
The Civil War broke out between the North and the South in April 1861 and ended in May 1865. The war erupted for many reasons, but the most agreed upon is that the southern states wanted to keep slavery. Abraham Lincoln was commander-in-chief of the Union Army during the Civil War, and he was also the one who approved the Military Road to be built between Fort Howard near Green Bay, Wisconsin, and Fort Wilkins in Copper Harbor, Michigan. As noted on the Old Military Road historical marker, Lincoln signed an Act of Congress in March 1863, the same year he gave the famous Gettysburg Address. The Union was clearly beating the Confederates midway through the war and put an exclamation point on it by defeating them at Gettysburg. The Union had a clear advantage at this point in the war, with better access to resources and industry than the Confederates, and this was the main reason for Lincoln to approve the military road from Fort Howard to Fort Wilkins. The State Historical Society of Wisconsin discusses the Old Military Road approved by Lincoln in 1863, stating that "In March 1863 [Lincoln] affixed his signature to an act of Congress which enables the state of Wisconsin-Michigan to begin construction of the road," which was to run from Fort Howard near Green Bay to Fort Wilkins near Marquette. Although Fort Wilkins is not thought of as near Marquette today, the road was meant to secure the copper and iron that were in abundance in the Upper Peninsula of Michigan for Union troop supplies.
The Military Road wasn't actually finished until 1872, long after the Civil War had ended, which made it largely obsolete for Union supplies for the war effort. Since the Union Army had a larger number of soldiers, it needed more supplies to win the war and crush slavery. These supplies came from all over the Northern states and also from mineral-rich hot spots, including the Midwest. Part of this movement was about getting supplies moved quickly, which the Military Road was meant to help with. The road was built from 1864 to 1871 by James A. Winslow, Squire Taylor, and Jackson Hadley. In the early days the road was used for transporting troops and supplies. It also served as a means of travel for settlers, trappers, hunters, and explorers, and later on for loggers. As can be seen, the road was important not only in the Civil War era but also for years to come. It may even be argued that it was more useful for the settlers, trappers, hunters, explorers, and loggers, because it provided a means of transportation. Without this road, the Upper Peninsula and Northern Wisconsin might not have been as useful for their timber resources and valuable minerals.
The Resources of the Midwest
Without the placement of the resources in the Midwest, there may have been no reason to venture into the Midwest in the first place during the Civil War. Several decades prior to the war, the North was forced to delay and compromise several of its national economic and political objectives. This was due to Southern opposition and strong disagreement in the Senate. When the Southern states seceded, Congress began enacting this delayed agenda, which included reaching into the Midwest for resources. "The Morrill Tariff of 1861, on average, raised rates to 20 percent which ended more than 30 years of declining tariffs." This allowed the North to be dominant in many areas of production for Union troops and basic supplies. This is why the Military Road was built in the first place: so that supplies could be run from fort to fort and so that the North could gather more resources from places like the Upper Peninsula, which was virtually untouched. The Upper Peninsula was full of raw materials such as timber and valuable metals, just waiting to be harvested.
The effect mining has had on the entire area of the Upper Peninsula is tremendous. When driving through the Peninsula, the effects can still be seen: the land is dotted with old mining facilities. There are even names everywhere that indicate an old mining theme, such as Iron Mountain and Ironwood, and street names such as Hematite. When the Civil War broke out in 1861, Marquette County and the rest of the U.P. (Upper Peninsula) sent troops, but they also sent something else important for the war effort: iron ore. Iron ore from the region was used for ammunition and to help build cannons. Without this vital source of iron, who knows where we would be now? Iron ore was originally discovered in the region in 1844 by a team of surveyors led by William A. Burt, a United States deputy surveyor. He actually discovered the ore because the team's compasses were being thrown off by the magnetic properties of the metals in the area. When the Civil War broke out, it caused a huge demand for iron ore, and this demand is what created financially successful mining operations in the area. "In 1861, the war began, with a curious result. The total of iron ore shipments from the region dropped to 49,909 tons for all the mines. This was down from a previous 114,401 tons of iron ore shipped in 1860." This is interesting given the increase in demand; many think the drop in iron shipments was a direct result of men leaving the area to aid the Union in the war effort. "By 1865, which was the end of the Civil War, there were eight mines that remained in operation. The total shipments of iron ore totaled to 193,758 tons." There is no doubt that the success of the mining operations in the area was positively affected by the Civil War.
Transportation and Roles
Once the iron from the Upper Peninsula was discovered to be pure and usable, it had to be transported out of the U.P. in some manner. But how would all of this iron be transported out? At first, the only method of bringing iron ore from the mines to the lake was by sleigh in the winter, and it soon became apparent that if any considerable business was to be done, the means of transportation would have to be improved. This is another reason why the Old Military Road was established around the time of the Civil War: so that iron could be delivered from the mines that were starting to form in the region. Obviously this wasn't the only reason for the road to be built because, as mentioned above, it was used for many things. Transportation was why the road was built, and it also provided for other needs in the future. The Sault Canal was also a major feat of the century; the canal at Sault Ste. Marie was opened on June 18, 1855. "It was not until November 1, 1855 that the plank railroad was completed to the local mines." The Military Road was important because advancements in transportation such as shipping and railroads were not available until later on. This shows that the roadway played a large role in industrializing the U.P. and helped establish Fort Wilkins near Copper Harbor. Although the fort wasn't active for many years, it showed how prepared the Union, and then the United States, was to defend its territory at any moment and from any direction. It also showed how willing the U.S. was to protect what had already been established.
1. Chicago Tribune Journalists. “Military Road to Lake Superior.” Chicago Tribune; Proquest Historical Newspapers, (1862) Pg. 2
2. The State Historical Society of Wisconsin Writers. “Proceedings of the Annual Meeting of the State Historical Society of Wisconsin.” The State Historical Society of Wisconsin, (1921) Pg. 103-104
3. Senate of Wisconsin. “Journal of the Senate of Wisconsin.” Senate of Wisconsin, (1867) Pg. 582
4. Jones, George O., McVean, Norman S., and Others. “History of Lincoln, Onieda, and Vilas Counties Wisconsin.” Wisconsin Historical Society, (1924) Chapter 6
5. Dahlquist, Elmer. “History of the Roads of Wisconsin.” The Lakeland Times, (2008)
6. Pohl, Dorothy G., Brown, Norman E. “The History of Roads in Michigan.” Association of the Southern Michigan Road Commission,(1997)
7. Gale Encyclopedia of U.S. Economic History. “Civil War and Industrial Expansion, 1860- 1897.” Encyclopedia.com, (14 Nov. 2016)
8. Boyle, Johanna. “Marquette Mining Journal.” Ore for the War-The Upper Peninsula in the Civil War, (2011)
9. Cleveland State University. “History of the Iron Ore Trade.” History of the Iron Ore Trade : The Cleveland Memory Project, (2016)
1. Dahlquist, Elmer. “History of the Roads of Wisconsin.” The Lakeland Times, (2008)
|
Early Years Foundation Stage (EYFS)
It may seem strange to think about your 3 or 4 year old child as a geographer. However, the years from birth to age five provide a first opportunity to see how your child interacts with their environment — and how the environment influences them. The early learning goals at EYFS aim to guide your child to make sense of their physical world and their community by exploring, observing, and finding out about people, places, technology and the environment.
Key Stage 1
In Years 1 and 2, your child will be asked to begin to develop a geographical vocabulary by learning about where they live, as well as one other small area of the United Kingdom and a small area in a contrasting non-European country. They will learn about weather patterns in the United Kingdom and hot and cold areas of the world. They will use ICT, world maps, atlases and globes, simple compass directions, aerial photographs and plans, as well as simple fieldwork and observational skills. Schools have flexibility to choose the areas they teach and there is considerable variation between schools in their approaches.
Key Stage 2
In Years 3 to 6, the geography curriculum retains some flexibility, and builds and expands on previous knowledge. There are three focus areas:
- Locational knowledge
- Place knowledge
- Human and physical geography
Locational knowledge examines latitude, longitude and time zones. Your child will use maps to focus on Europe, North and South America, concentrating on regions, key physical / human characteristics, countries, and major cities. They will also work on locating the counties and cities of the United Kingdom, and start to explore their human and physical characteristics.
Children also examine geographical similarities and differences by comparing the geography of a region of the United Kingdom with a region in a European country, and with a region in either North or South America. This is part of the place knowledge aspect of the curriculum.
For human and physical geography, your child will be taught to describe and understand key aspects of geography, for example: climate zones, rivers, mountains, volcanoes, earthquakes, the water cycle, types of settlement, economic activity and the distribution of natural resources.
|
A Storied Evening
Lamp shades placed in corners of a newly bought flat,
Green pot plants in the finest porcelain,
Soothing instrumental music infused with imported incense,
Flowery candles floating in wood inlaid bowls,
Sky-blue curtains swaying in the wind,
Cozy cushions waiting here and there,
Fusion food ready on an elegant embroidered runner,
A storied evening ahead.
Learners should learn these words first. (They come from the second 1000 high-frequency words of the General Service Word List – GSL.) These are commonly used in everyday English and have a higher priority than less common words.
The words below are “Offlist” words, meaning that they are not so common in English and are not found on the high frequency word lists. Learners should only learn these if they know all the words above.
1. Write a short story in response to the poem. What kind of story will it be? As you prepare, make some brief notes about each of the following…
- the setting (place and objects you can see)
- the characters (appearance, personality, manner, beliefs and attitudes)
- the key events (and the order of events)
- the time in which the events take place (past, present, future)
- anything else that is important
2. Now go back and add some words from the poem to your notes.
3. Begin writing your story. As you write, make sure to use some of the vocabulary from the poem.
4. Share your story with a partner, in English. Read your partner’s story.
5. Think about what might make each person’s story better. Use the table below to give your friend’s story some feedback. Give your feedback sheets to each other.
|Spelling and punctuation|
|
So far in our posts about ferrous sulfate, we have learned it is a chemical compound mainly used as a colorant, in the purification of water, as a soil amendment for plant growth, as well as in medicine. Knowing about ferrous sulfate is a great resource for manufacturers and others who deal with these applications. Although they may understand the products themselves, often users lack a background in how chemicals such as ferrous sulfate are manufactured. This post is about how ferrous sulfate goes from raw materials to a usable compound.
There are several methods through which ferrous sulfate can be produced:
- Being prepared commercially by the action of sulfuric acid on iron (1). This is the chemical reaction:
Fe + H2SO4 -> FeSO4 + H2
To clarify in plain English: combining iron with sulfuric acid creates a reaction to form ferrous sulfate and hydrogen gas.
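As a worked-arithmetic illustration, the 1:1 stoichiometry of this reaction fixes how much ferrous sulfate a given mass of iron can yield; the molar masses and the 100 kg input in the sketch below are approximate, illustrative values:

```java
public class FerrousSulfateYieldSketch {
    public static void main(String[] args) {
        // Approximate molar masses in g/mol
        final double FE = 55.85;
        final double FESO4 = 151.9; // Fe + S (32.07) + 4 x O (16.00)

        // One mole of Fe yields one mole of FeSO4 (Fe + H2SO4 -> FeSO4 + H2)
        double ironKg = 100.0; // illustrative input mass
        double ferrousSulfateKg = ironKg * FESO4 / FE;

        System.out.printf("%.0f kg of iron -> about %.0f kg of ferrous sulfate%n",
                ironKg, ferrousSulfateKg);
    }
}
```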
- It can be generated as a by-product from pickling of steel.
Steel pickling refers to a treatment used to remove impurities, rust, and scale from the surface of a material. During hot working processes, an oxide layer (referred to as “scale” due to the scaly nature of its appearance) develops on the surface of the metal. Before most cold rolling processes, previously hot rolled steel goes through a pickling line to remove the scale from the surface and make it easier to work. To restore the best corrosion resistant performance, the damaged metal layer must be removed, exposing a fully alloyed stainless steel surface.
Pickle liquor removes oxide layer
To remove this oxide layer, the material is dipped into a vat of what is called “pickle liquor.” Pickle liquor can come in many forms. Carbon steels with an alloy content of less than 6% are often pickled in hydrochloric or sulfuric acid. For steels that have a higher carbon content, a two-step pickling process is required, with additional acids used (for instance, phosphoric, hydrofluoric, and nitric acid). (2)
For the purposes of manufacturing ferrous sulfate, we are going to focus on the pickling of steel using sulfuric acid. The pickling process combines sulfuric acid with the steel, which is made from iron, and ferrous sulfate is created as a byproduct (see the reaction equation from method 1).
- It is available as a by-product of the manufacture of titanium dioxide, a chemical often used in paint, sunscreen, and food coloring; coincidentally similar uses to ferrous sulfate.
The sulfate process
Specifically, the process used to create titanium dioxide is called the sulfate process. This involves dissolving ilmenite which is a black iron-titanium oxide, then forming hydrated titanium dioxide, and finally forming anhydrous titanium dioxide. This is the chemical reaction:
FeTiO3 (s) + 2H2SO4 (aq) -> TiOSO4 (aq) + FeSO4 (aq) + 2H2O (l)
In plain English, combining ilmenite with sulfuric acid creates a reaction to form titanyl sulfate, ferrous sulfate, and water. (3)
In short, ferrous sulfate is available in several forms:
- Official USP tablets which contain 300 mg ferrous sulfate.
- Dispensed as pills or tablets which are coated to protect them from moisture. The salt is mixed with glucose or lactose to protect the pill against oxidation.
- Available in heptahydrate (20% Fe) and monohydrate (30% Fe) grades.
- For larger commercial or industrial applications, it is available in bulk aqueous truckloads.
The Affinity Process for Manufacturing Ferrous Sulfate
Affinity Chemical produces aqueous ferrous sulfate in our Fort Smith, Arkansas, location. As with all of Affinity’s production processes, our goal is to provide the compound with both cost and time efficiency in mind. We are committed to manufacturing ferrous sulfate near our customer locations so that their freight costs are lower.
Generally, we produce strengths at 5.5% ferrous (the most common) or 7%, depending on our customers' needs. We test each batch after production for ferrous content, ensuring the customer is obtaining exactly what they have requested.
Aligned with our views on environmental protection, our process produces no emissions and no waste products.
|
Houston, we have ants: Mimicking how ants adjust to microgravity in space could lead to better robots, Stanford scientist says
Professor Deborah Gordon recently sent hundreds of ants to the orbiting International Space Station. By studying how the ants adjust their behavior to cope with near-zero gravity conditions, scientists could improve the algorithms autonomous robots follow to search disaster scenes for survivors.
Several hundred ants have boldly gone where no ants have gone before: the International Space Station, high above Earth.
This past Sunday, an unmanned supply rocket delivered 600 small black common pavement ants to the ISS. Their arrival marked the beginning of an experiment designed by Deborah Gordon, a professor of biology at Stanford, to determine how the ants, in these exotic surroundings, adapt the innate algorithms that modulate their group behavior.
The information that Gordon and her colleagues glean from the ants' behavior has the potential to help us understand how other groups, like searching robots, respond to difficult situations.
An ant colony monitors its environment – whether to identify a threat, find food or map new terrain – by sending out worker ants to search the area. Because most ants have poor vision, and all ants rely on smell, an ant has to be close to something to detect it. Further complicating matters, no single ant is in charge of coordinating the search. So how do they know how best to search?
Ants communicate primarily by contacting each other by smell and touching antennae. Over millions of years, ants have developed algorithms that use the frequency with which these interactions occur to determine how many ants are in their area and, from that, how thoroughly they should conduct their search.
When antennae-to-antennae interactions occur frequently, the ants sense that the area is densely populated, and they circle around in small, random paths to gather robust information about their immediate area.
If the frequency of ant-to-ant interactions is low, however, the ants search in an entirely different manner. Instead of searching in small circles, they walk in straighter lines, giving up thoroughness in favor of covering more ground.
This technique is known as an expandable search network.
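A toy sketch of the density-dependent switching described above; this is not the ants' actual algorithm or the research group's model, and the threshold, turn sizes, and units are invented for illustration:

```java
import java.util.Random;

// Toy agent: frequent encounters -> tight, loopy search; rare encounters -> straighter paths
public class ExpandableSearchSketch {
    private static final Random RNG = new Random(42);

    // encounterRate stands in for antennae contacts per unit time (made-up units)
    static double nextHeading(double heading, double encounterRate) {
        if (encounterRate > 5.0) {
            // High density: turn sharply and often, giving thorough local coverage
            return heading + (RNG.nextDouble() - 0.5) * Math.PI;
        }
        // Low density: turn only slightly, trading thoroughness for ground covered
        return heading + (RNG.nextDouble() - 0.5) * 0.2;
    }

    public static void main(String[] args) {
        double heading = 0.0, x = 0.0, y = 0.0;
        double encounterRate = 2.0; // pretend the arena just expanded, so contacts are rare

        for (int step = 0; step < 1000; step++) {
            heading = nextHeading(heading, encounterRate);
            x += Math.cos(heading);
            y += Math.sin(heading);
        }
        System.out.printf("Net displacement after 1000 steps: %.1f units%n", Math.hypot(x, y));
    }
}
```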
Ants aren't the only animals to work out such algorithms. Humans have developed the same sort of protocols to govern how cellphone networks relay signals, or how a fleet of autonomous robots can search a building without the guidance of a central controller.
Like all networks, human-created networks have to deal with disruption. For example, if robots enter a burning building to assess damage or search for survivors, flames, smoke and other elements could interfere with communications between the 'bots and impede the search.
Scientists are developing workarounds for these situations, but Gordon said that ants have already found solutions for conditions where information is not perfect.
Biology Professor Deborah Gordon recently sent hundreds of ants to the International Space Station to study how they adapt the innate algorithms that modulate their group behavior.
In the space experiment, 70 ants were released into each of several small arenas roughly the size and shape of a tablet computer. The arena was divided into three sections, and video cameras tracked the ants' searching patterns as the barriers were lowered, increasing the search area and thus decreasing the density of ants in the arena.
On Earth, Gordon said, ants adjust their search behavior as the arena expands by shifting from the small, circular search routine to straighter, broader paths, thus expanding the search network.
Performing the same experiment in microgravity is a way to introduce interference that is analogous to the radio disruption that robots might experience in a blazing building. In microgravity the ants struggle to walk, which in turn disrupts the ants' ability to bump into each other and share information.
Observing how the space ants modified their search behavior when the loss of gravity interfered with their interactions, and their ability to assess density, could inform researchers how to design similar flexible protocols for robots and other devices that rely on expandable search networks.
"We have devised ways to organize the robots in a burning building, or how a cellphone network can respond to interference, but the ants have been evolving algorithms for doing this for 150 million years," Gordon said. "Learning about the ants' solutions might help us design network systems to solve similar problems."
Now, Gordon and her colleagues will carefully analyze video of the ants in space and compare it to a control experiment conducted on Earth to see how the struggle with microgravity forced the ants to change their searching behavior.
Additionally, the researchers will invite K-12 students to replicate the experiment in their earthbound classrooms. Starting this spring, when the weather is warmer and ants are easy to collect outside, students will be able to repeat the experiment and enter their results in a database, which Gordon said could provide valuable insights.
"There are 12,000 species of ants, and some species will perform better than others in this experiment," Gordon said. "For example, invasive ants find their way into our kitchens because they're very good at searching. Comparing results from student data will allow us to look at different search strategies of the ants in different places on Earth."
The ants will live out the remainder of their days on the space station. In the meantime, astronauts should not fear an infestation: Only sterile worker ants were sent on this mission.
Deborah Gordon, Biology: (650) 725-6364, [email protected]
Bjorn Carey, Stanford News Service: (650) 725-1944, [email protected]
|
It's 11 O'Clock. Do You Know Why Your Teenager's Still Asleep?
Our 10-week course is geared towards parents and caregivers.
*topic sequence may change
Executive Functions: Our Brain's Conductor
The first introduction to executive functions and how they impact learning. We'll address setting up a study area for better learning and organization, how to promote independence in your student, and how to identify and manage school-related stress and anxiety.
How to Make it Stick: The Science of Learning and Why What We're Doing Now Doesn't Work
Week 2 focuses on research-based practices for improving learning and study skills, specifically how to prepare for tests and quizzes using active study strategies such as self-testing and practice. How to make learning "stick" and why what most people do doesn't really work.
Sleep, Nutrition, Exercise, and Calming the Mind: The Ultimate Happy Pill
Research shows nutrition, sleep routines, and stress all affect the developing brain. We give practical strategies to overcome challenges such as: refusing to cooperate in the morning, problems from not listening or not paying attention, 'losing it' over seemingly small situations, and resisting sleep or having difficulty sleeping.
Dropping the F-bomb: Why Failure is My Favorite Word
A look at why embracing failure and allowing children to make mistakes is a critical part of the learning process. Why being too highly responsive to a child's needs (helicopter parenting) can create fear of failure, where a child won't challenge him/herself academically.
How to Build a Child's Capacity for Literacy & the Language of Math
We explore the science behind reading and how the neural basis for reading changes as we grow. Kids can make deeper meaning and basic inferences from what they read through concrete strategies. We will explore the language of math and the importance of reading comprehension for strong math skills.
The Teenage Brain: Yes! Your Teen is (Driving You) Crazy!
Between hormones, neurotransmitters, and the brain's rewiring, the teenage years often create a period of stress and emotional upheaval. However, these years also offer a wonderful opportunity to harness the brain's restructuring and strengthening connections that lead to empowerment. Learn how this rewiring affects mood changes, strange responses, irrational reactions, and other odd behaviors that are completely normal and necessary to grow into adulthood.
Never Good Enough: Cultivating a Growth Mindset
We will examine the impact of negative thoughts in kids and the importance of developing a growth mindset. Learn practical ways children can change the internal dialogue about not being good enough and language that encourages a growth mindset.
How the Brain Learns: A Deeper Dive Into Executive Functions
Continued discussion of executive functions and how the brain learns. Week 8 explores children with special needs, high intelligence, and learning disabilities. Learn tools to promote self-control through verbalization and the neural basis for these behavior patterns. Further discussion and tools for avoiding 'helicopter' parenting.
How to Calm Your Brain: Is Stress Our Friend, Foe or Frenemy?
Are you stressed? Do you get stressed thinking about stress? Does the need to relax actually stress you out more? In our hyperbusy society, we all know a calm brain is more productive and supports a healthy lifestyle, but how can we achieve it? And what does that even mean? We will investigate healthy stress, behaviors that support a healthy sympathetic and parasympathetic nervous system, and how to get out of our own way as we work towards our goals.
Defrag Your Brain: How to Manage Your Mind in a Distracted World
Further discussion of executive functions including the impact of anxiety and stress on children. How to identify stress and how children can calm their brain. Adults will learn to model a calm brain, even when faced with difficult situations. Topical discussion of distraction and its impact on productivity and effectiveness. Why we struggle with distraction, the impact of technology, and how to take cognitive control by simplifying and incorporating 'old school' activities.
|
War creates veterans and societies are reminded by their existence that violent conflicts had been waged in the past. Even when the wars have been long forgotten by many, veterans are the ones whose fate has been tied to war and destruction.
Societies often struggle with their veterans, especially when they have to address the former soldiers' traumatic experiences and acknowledge the wounds that hurt beyond the body. While veterans are a steady reminder of past violent conflicts, they are often ignored by their societies once peace is achieved. Nevertheless, veterans play an important role in post-war contexts as well, and this role, along with their influence and impact in the supposedly non-violent world, needs to be addressed. This volume discusses the role of veterans in the aftermath of war and shows how they were treated by their societies and how those societies tried to reintegrate them into their own narratives of the past.
War Memorials were an important element of nation building, for the invention of traditions, and the establishment of historical traditions. Especially nationalist remembrance in the late 19th century and the memory of the First World War stimulated a memorial boom in the period which the present book is focusing on.
The remembrance of war is nothing particularly new in history, since victories in decisive battles have been commemorated since ancient times. However, the age of nationalism and the First World War triggered a new level of war remembrance, expressed in countless memorials all over the world. The present volume presents the research of international specialists from different disciplines within the Humanities whose work deals with the role of war memorials in the remembrance of conflicts like the First World War and their perception within the societies analyzed. It will be shown how memorials – in several different chronological and geographical contexts – were used to remember the dead, remind the survivors, and warn the descendants.
|
Off the coast of Washington, pillars of bubbles rise from the bottom of the sea, as if a dragon were sleeping there. But these bubbles are methane, which is squeezed out of the sediment and rises up through the water. The places where they appear provide important clues to what will happen during a major marine earthquake.
The first large-scale analysis of these gas emissions along the Washington coast reveals more than 1,700 bubble plumes, mainly grouped in a north-south strip about 30 miles (50 kilometers) offshore.
Scientists discovered the first methane emissions on the outskirts of Washington in 2009 and thought they were lucky to find them at that time. But since then this number has only grown exponentially.
The results show that gas and liquid rise through faults generated by the movement of geological plates that produce large marine earthquakes in the Pacific Northwest.
These vents are fickle, like geysers in Yellowstone.
Sometimes they turn off and on with the tides, and they can move a little along the seabed.
They are usually found in clusters within a radius of about three football fields.
The authors analyzed data from numerous research expeditions over the past decade that used modern sonar technology to map the seabed. Their new results show that the 1,778 methane bubble plumes emerging from the waters off Washington state are grouped into 491 clusters.
The vast majority of the newly observed methane plume sections are located on the sea side of the continental shelf, at a depth of about 160 meters.
A previous UW study suggested that warming sea water could release frozen methane in this region, but further analysis showed that methane bubbles off the Pacific Northwest coast originate from sites that have been present for hundreds of years and are not associated with global warming.
To understand why methane bubbles arise here, the authors used archival geological surveys conducted by oil and gas companies in the 1970s and 1980s. These surveys show fault zones in bottom sediments where gas and fluid migrate upward to reach the seabed.
These seismic studies in areas with methane emissions indicate that the edge of the continental shelf will be pushed to the West during a large earthquake of magnitude 9. Faults at this tectonic boundary provide permeable paths for methane gas and warm liquid to escape from deep sediments.
If this new hypothesis proves true, then it has serious implications for understanding how this subduction zone works.
The study, by the University of Washington and Oregon State University, was published in the Journal of Geophysical Research: Solid Earth.
|
How Many Ways to Create an Object in Java
There are five different ways to create an object in Java:
1) Java new Operator
This is the most popular way to create an object in Java. The new operator is followed by a call to a constructor, which initializes the new object. When we create an object with new, it occupies space in the heap.
Example of Java new Operator
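A minimal sketch of the new operator, using a hypothetical Employee class (the class name and field are illustrative, not from the original article):

```java
// Employee.java - an illustrative class created with the new operator
public class Employee {
    private final String name;

    // The constructor that new invokes to initialize the object
    public Employee(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public static void main(String[] args) {
        // new allocates space on the heap and calls the constructor
        Employee e = new Employee("Asha");
        System.out.println(e.getName()); // prints: Asha
    }
}
```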
2) Java Class.newInstance() method
Java Class.newInstance() is a method of the Class class, which belongs to the java.lang package. It creates a new instance of the class represented by the Class object and returns that newly created instance.
It throws IllegalAccessException if the class or its nullary (no-argument) constructor is not accessible. It throws InstantiationException if the Class represents an abstract class, an interface, an array class, or a primitive type, or if the class has no nullary constructor.
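A minimal sketch of Class.newInstance(), using java.util.ArrayList because it has a public no-argument constructor. Note that Class.newInstance() has been deprecated since Java 9 in favor of calling getDeclaredConstructor().newInstance():

```java
public class ClassNewInstanceDemo {
    public static void main(String[] args) throws Exception {
        // Obtain the Class object by its fully qualified name
        Class<?> clazz = Class.forName("java.util.ArrayList");

        // newInstance() invokes the public nullary constructor
        Object list = clazz.newInstance(); // deprecated since Java 9

        System.out.println(list.getClass().getName()); // java.util.ArrayList
    }
}
```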
3) Java newInstance() method of Constructor class
The Java Constructor class also has a newInstance() method, similar to the newInstance() method of the Class class. The newInstance() method belongs to the java.lang.reflect.Constructor class. Both newInstance() methods are known as reflective ways to create objects; in fact, the newInstance() method of the Class class internally uses the newInstance() method of the Constructor class. The method returns a new object created by calling the constructor.
The newInstance() method can throw InstantiationException, IllegalAccessException, IllegalArgumentException, and InvocationTargetException.
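A minimal sketch of Constructor.newInstance(), using StringBuilder's public String constructor so the example is self-contained:

```java
import java.lang.reflect.Constructor;

public class ConstructorNewInstanceDemo {
    public static void main(String[] args) throws Exception {
        // Look up the constructor that takes a single String argument
        Constructor<StringBuilder> ctor =
                StringBuilder.class.getConstructor(String.class);

        // newInstance() invokes that constructor with the supplied argument
        StringBuilder sb = ctor.newInstance("created via reflection");

        System.out.println(sb); // prints: created via reflection
    }
}
```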
4) Java Object.clone() method
The Java clone() method creates a copy of an existing object. It is defined in the Object class and returns a clone of the instance. The two most important points about the clone() method are:
When we use the clone() method in a class, the class must call super.clone() to obtain the cloned object reference.
The method throws CloneNotSupportedException if the object's class does not implement the Cloneable interface. This exception is also thrown when a subclass that overrides the clone() method indicates that an instance cannot be cloned.
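A minimal sketch of cloning; the Point class here is illustrative and not from the original article:

```java
// A class must implement Cloneable and call super.clone()
public class Point implements Cloneable {
    int x;
    int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        // super.clone() returns a field-by-field copy of this instance
        return super.clone();
    }

    public static void main(String[] args) throws CloneNotSupportedException {
        Point original = new Point(3, 4);
        Point copy = (Point) original.clone();
        System.out.println(copy.x + ", " + copy.y); // prints: 3, 4
    }
}
```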
5) Java Object Serialization and Deserialization
A class must implement the Serializable interface, which belongs to the java.io package. The Serializable interface does not have any methods or fields; it is a marker interface that adds special behavior to the class. (In modern Java, annotations often serve the role that marker interfaces once did, although Serializable itself is still in use.)
During deserialization, the JVM creates the object in a separate way: it does not use any constructor of the class to create the object.
The ObjectOutputStream class is used to serialize an object. Serialization is the process of converting an object into a sequence of bytes.
The writeObject() method of the ObjectOutputStream class serializes an object and writes it to the output stream. The signature of the method is: public final void writeObject(Object obj) throws IOException
The method accepts an object as a parameter.
The process of creating an object from a sequence of bytes is called object deserialization. The readObject() method of the ObjectInputStream class reads an object from the input stream and deserializes it. The signature of the method is: public final Object readObject() throws IOException, ClassNotFoundException
The method does not accept any parameter. It returns the object read from the stream. The method throws ClassNotFoundException and IOException (including subclasses such as InvalidClassException, StreamCorruptedException, and OptionalDataException).
In the following example we have first serialized the object and then deserialized the object.
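A minimal, self-contained sketch of serialization and deserialization; the Person class, its field, and the temp-file name are illustrative:

```java
import java.io.*;

public class SerializationDemo {
    // The class to be serialized must implement Serializable
    static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        Person(String name) { this.name = name; }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        File file = File.createTempFile("person", ".ser");

        // Serialization: convert the object into a sequence of bytes
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new Person("Linda"));
        }

        // Deserialization: rebuild the object from the bytes
        // (Person's constructor is not called here)
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            Person p = (Person) in.readObject();
            System.out.println(p.name); // prints: Linda
        }
    }
}
```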
Concept of cloning in Java
In OOP, copying an object means creating a clone of an existing object. There are many ways to copy an object; two of them are the copy constructor and cloning. There are two types of cloning in Java: shallow copy and deep copy.
Both deep and shallow copies are types of object cloning. When we talk about an object, we consider it as a single unit which cannot be broken down further.
Suppose we have a Student object. The Student object contains other objects: it contains Name and Address objects. The Name contains FirstName and LastName objects, and the Address object is composed of a Street and a City object. When we talk about the Student, we are talking about the entire network of objects.
A clone of an object is created when we want to modify or move an object while still preserving the original object.
For example, if we want to create a shallow copy of the Student, we create a second Student object, but both objects share the same Name and Address. Consider the following example:
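A minimal sketch of a shallow copy, using the Student, Address, mba, mca, and moveOut() names from the text; the class bodies and field values are illustrative, and Name is simplified to a plain String field to keep the sketch short:

```java
// Illustrative sketch of a shallow copy: the Address object is shared
class Address {
    String street;
    String city;
    Address(String street, String city) { this.street = street; this.city = city; }
}

class Student {
    String name;
    Address address;

    Student(String name, Address address) { this.name = name; this.address = address; }

    // Shallow copy constructor: copies the reference, not the Address itself
    Student(Student other) {
        this.name = other.name;
        this.address = other.address;
    }

    void moveOut(String newStreet) {
        this.address.street = newStreet; // mutates the shared Address
    }
}

public class ShallowCopyDemo {
    public static void main(String[] args) {
        Student mba = new Student("Ravi", new Address("1 Main St", "Pune"));
        Student mca = new Student(mba); // shallow copy

        mca.moveOut("9 Lake Rd");
        // Both print "9 Lake Rd" because mba and mca share one Address
        System.out.println(mba.address.street);
        System.out.println(mca.address.street);
    }
}
```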
A disadvantage of the shallow copy is that the two objects are not independent. When we modify the Name object of one Student, it modifies the other Student object too.
In the example above, we have a Student object with a reference variable mba; then we make a copy of mba, creating a second Student object, mca. If mca tries to moveOut() by modifying its Address object, mba moves with it.
This is because the mba and mca objects share the same Address object. If we change the Address in one object, it modifies both.
With a deep copy, when we modify the Address object of one Student object, it does not modify the other Student object. In the following code we can see that we are not only using a copy constructor on the Student object, but we are also using copy constructors on the inner objects as well.
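A minimal sketch of a deep copy under the same illustrative assumptions as the shallow-copy sketch above; the classes are renamed DeepStudent and DeepAddress only so the two sketches do not clash:

```java
// Illustrative sketch of a deep copy: each nested object gets its own copy
class DeepAddress {
    String street;
    String city;
    DeepAddress(String street, String city) { this.street = street; this.city = city; }

    // Copy constructor for the inner object
    DeepAddress(DeepAddress other) {
        this.street = other.street; // Strings are immutable, safe to share
        this.city = other.city;
    }
}

class DeepStudent {
    String name;
    DeepAddress address;

    DeepStudent(String name, DeepAddress address) { this.name = name; this.address = address; }

    // Deep copy constructor: also copy-constructs the nested Address
    DeepStudent(DeepStudent other) {
        this.name = other.name;
        this.address = new DeepAddress(other.address);
    }
}

public class DeepCopyDemo {
    public static void main(String[] args) {
        DeepStudent mba = new DeepStudent("Ravi", new DeepAddress("1 Main St", "Pune"));
        DeepStudent mca = new DeepStudent(mba); // deep copy

        mca.address.street = "9 Lake Rd";
        System.out.println(mba.address.street); // still "1 Main St"
        System.out.println(mca.address.street); // "9 Lake Rd"
    }
}
```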
To create a deep clone, we need to keep copying all of the Student object's nested elements, until only primitive types and immutable objects are left.
The Street object has two instance variables, name and number. The number is a primitive integer value, not an object, so it cannot be shared; when we create a second instance, we automatically get an independent copy. In the above code, String is an immutable object, i.e., once created it can never be changed again. Hence, we can share it without creating a deep copy of it.
|
New evidence gathered from Antarctic seashells confirms that Earth's climate was already unstable before the asteroid impact that wiped out the dinosaurs.
The research, led by researchers at Northwestern University, is the first to measure the calcium isotope composition of fossilized clam and snail shells, which date back to the Cretaceous-Paleogene mass extinction event. The researchers found that — in the run-up to the extinction event — the shells' chemistry shifted in response to a surge of carbon in the oceans.
This carbon influx was likely the result of long-term eruptions from the Deccan Traps, a 200,000-square-mile volcanic province located in modern India. In the years leading up to the asteroid impact, the Deccan Traps spewed massive amounts of carbon dioxide (CO2) into the atmosphere. The concentration of CO2 acidified the oceans, directly affecting the organisms living there.
The study will be published in the January 2020 issue of the journal Geology, which comes out later this month.
The study's lead author is now a postdoctoral scholar in the Department of Geoscience at the University of Wisconsin-Madison.
Earlier research has explored the potential effects of the Deccan Traps eruptions on the mass extinction event; however, many studies have examined bulk sediments and used different chemical tracers. By focusing on a particular organism, the researchers gained a more precise, higher-resolution record of the ocean's chemistry.
Seashells are largely composed of calcium carbonate, the same mineral found in chalk, limestone, and some antacid tablets. Carbon dioxide in water dissolves calcium carbonate. During shell formation, CO2 likely affects shell composition even without dissolving it.
For this study, the researchers examined shells collected from the Lopez de Bertodano Formation, a well-preserved, fossil-rich area on the west side of Seymour Island in Antarctica. They analyzed the shells' calcium isotope compositions using a state-of-the-art technique developed in Jacobson's laboratory at Northwestern. The method involves dissolving shell samples to separate calcium from various other elements, followed by analysis with a mass spectrometer.
|
It's not quite human cloning, but it's close. Researchers reported using a variation of somatic cell nuclear transfer (SCNT), the same technique that created Dolly the sheep (the first mammal to be cloned) from a skin cell of a ewe, on human cells. SCNT involves replacing the genetic material of an egg cell with the DNA from a mature cell (a skin cell, for example). The egg is then stimulated to divide, and if it develops fully, produces a genetically identical clone of the animal from which the mature cell was taken.
In the latest study, reported in October, scientists at the New York Stem Cell Foundation modified the technique, combining the DNA of an adult human cell with the genetic material of an egg rather than replacing the egg's DNA. Previous attempts to clone human cells using SCNT had failed, but something about keeping the egg's DNA appeared to facilitate cell division and allow the generation of stem cells. The stem cells weren't quite normal, however, since they contained an extra set of chromosomes, from the egg. Next, the researchers hope to find a way to silence or eliminate the extra set of DNA.
The process is promising because it can potentially yield stem cells that not only match their donor but also obviate the need for an embryo, and that may one day be used to treat diseases such as spinal cord injury and Parkinson's.
|
When we think of ferrites, discovery and technological progress are probably the last words that come to mind. Yet, the discovery of hard ferrites or magnet material gave ancient navigators the “lodestones” that they needed to locate magnetic north. The properties associated with hard ferrites triggered the curiosity that led to the early research into electromagnetism by Oersted, Faraday, Maxwell, and Hertz. During the 1930s and 1940s, more research led to the commercial production of soft ferrite passive components that enabled progress with inductors and antennas.
What’s Right About Ferrites?
Today, power applications and our continual quest to suppress electromagnetic interference depend on the use of highly permeable soft ferrites that consist of ceramics blended from a wide range of metal oxides that form a magnetic core. The operating characteristics of ferrites depend on the kinds and ratios of the metal oxides. Most ferrites consist of either a manganese-zinc blend (Mn-Zn) or a nickel-zinc (Ni-Zn) mix. Because Mn-Zn ferrites have high permeability and a low specific resistance, they remain limited to frequencies of 1 MHz or less. Ni-Zn ferrites have low permeability and a high specific resistance and work well for noise suppression.
With soft ferrites, an increase in the applied magnetic field produces a flow of flux density and magnetization occurs. If a magnetic field is applied in the opposite direction, the ferrite goes back to its original condition. If a soft ferrite has a strong magnetic property, a small change in the magnetic field causes a large change in flux density.
One of the key properties for ferrites used for noise suppression is permeability. Ferrite materials that have higher permeability allow magnetic flux to pass more easily than if the flux traveled through the air. Permeability in ferrites increases as temperature increases. However, after the permeability reaches a maximum level at a certain temperature, the ferrite loses permeability. While permeability changes with temperature, it remains constant up to a given frequency. In most instances, ferrite materials with lower permeability (such as Ni-Zn) work best for high-frequency circuits, because their permeability remains constant up to higher frequencies.
Working with ferrites requires an understanding of the types of circuits they work well on.
Because the impedance of a ferrite changes as the load current and voltage drop change, ferrite clamps and chokes function as non-linear components. The impedance of the devices becomes highly resistive for a thin band of frequencies. High frequencies cause the impedance to decrease as the ferrite becomes more capacitive than resistive. Increasing the frequencies beyond a particular threshold causes the capacitive impedance to decrease and the impedance becomes resistive.
We can use soft ferrite chokes and clamps to reduce radio frequency interference (RFI) in an electrical conductor. For that reason, ferrite chokes can attenuate interference for switched mode power supplies. Ferrite chokes--or beads--attenuate high-frequency EMI in a circuit by working as a low-pass filter. Only low frequency signals pass through a circuit. Unlike a traditional low-pass filter that works for a wide band of frequencies, ferrite chokes and clamps only attenuate frequencies that occur within the ferrite’s resistive band. Wirewound ferrite chokes provide more design flexibility with a high magnitude of attenuation over a wide frequency range, lower dc resistance, and higher current ratings. While chip ferrite beads offer value, the devices have a limited attenuation and frequency range.
When we place two halves of ferrite around a conducting wire--such as a power cable--we have a ferrite clamp that provides an inductive impedance for signals traveling through the cable. Ferrite clamps and chokes follow Faraday’s Law in that a magnetic core placed around a conductor induces a back electromotive force (EMF) in the presence of a high-frequency signal. Given the high permeability of ferrite, the material offers less resistance to the flow of magnetic flux than the surrounding air does, and--as a result--the ferrite absorbs noise.
Match the Best Ferrite Choke or Clamp to the Application
Selecting a ferrite choke or clamp depends on the source of the EMI and the range of unwanted frequencies because the undesired frequencies must match with a resistive band of the choke or clamp. Along with matching the choke or clamp to frequency requirements, the rated dc current of the device must match with the currents seen in the circuit. If the circuit current increases beyond the rated current, saturation occurs and the choke or clamp will become 50% less inductive and have impedance reduced by 90%. With those changes, the choke or clamp cannot suppress EMI.
Because ferrite chokes and clamps are resistive, the devices can cause voltage drops in a circuit. The resistive properties of ferrite devices also may cause unwanted heating as high frequency energy dissipates. Always check the manufacturer’s specifications for the maximum DC current, and the DC resistance rating. Any ferrite choke or clamp must have a DC current rating more than twice the value of the required current for the rail. Design teams can use PCB design software and design rules to determine the correct placement for chokes to avoid voltage drop issues.
Ferrite materials covering components can help avoid interference related issues.
The manufacturer’s specifications for ferrite chokes or clamps include impedance versus load current curves that show the characteristics of the devices at specific currents. The manufacturer’s specifications for ferrite chokes and clamps also show impedance versus frequency response. In addition, most manufacturers include equivalent circuit models for ferrite chokes and clamps that work for system simulations.
Because ferrite chokes are inductive and capacitive, circuit designs must also account for “Q” or the reactance of the inductor divided by the ac or rf resistance plus any dc resistance found in the choke windings. Chokes that have a high Q can create unwanted resonance in power isolation circuits. PCB design software provides the analytic tools to find the approximate value of ferrite choke inductance and to determine the resonant frequency cutoff.
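As a rough illustration of how such an equivalent-circuit model can be used, the sketch below computes the self-resonant frequency and impedance magnitude of a simple parallel R-L-C stand-in for a ferrite bead. The parallel-RLC form and the component values are assumptions chosen only for demonstration; a real design would use the equivalent-circuit parameters published in the manufacturer's datasheet.

```java
// Rough sketch: impedance magnitude of a simple parallel R-L-C equivalent circuit,
// a common first-order stand-in for a ferrite bead. Component values are illustrative only.
public class FerriteBeadModel {
    public static void main(String[] args) {
        double r = 1000.0;   // equivalent resistance in ohms (assumed)
        double l = 1.0e-6;   // equivalent inductance in henries (assumed)
        double c = 1.0e-12;  // equivalent parallel capacitance in farads (assumed)

        // Self-resonant frequency: f0 = 1 / (2*pi*sqrt(L*C))
        double f0 = 1.0 / (2 * Math.PI * Math.sqrt(l * c));
        System.out.printf("Approximate self-resonant frequency: %.1f MHz%n", f0 / 1e6);

        // Sweep frequency and print |Z| of the parallel combination 1/Z = 1/R + 1/(jwL) + jwC
        for (double f = 1e6; f <= 1e9; f *= 10) {
            double w = 2 * Math.PI * f;
            double g = 1.0 / r;                 // conductance
            double b = w * c - 1.0 / (w * l);   // net susceptance
            double zMag = 1.0 / Math.sqrt(g * g + b * b);
            System.out.printf("f = %8.0f kHz  |Z| = %8.1f ohms%n", f / 1e3, zMag);
        }
    }
}
```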
The suite of design and analysis tools available from Cadence is here to help minimize the difficulty of your manufacturing process. Utilizing OrCAD PCB Designer is a great way to give your designers the layout capacity they need to finalize designs in a seamless manner.
If you’re looking to learn more about how Cadence has the solution for you, talk to us and our team of experts.
|
The Science of Sleep and Learning
Head Primary School Teacher and Grade 4A Homeroom Teacher
All parents know that good sleep is important for their child’s health. This knowledge can easily be forgotten, though, in our busy modern lifestyles. Good sleep, both our own and our children’s, is often one of the first things to be neglected when we have too much to do. Recent scientific research is discovering that quality sleep is crucial to our overall physical, mental, and emotional health. For children, quality sleep may mean the difference between normal and abnormal development and learning. Parents need to take sleep very seriously. Many people know that good sleep is necessary for our bodies and brains to rest and reset. Scientists are now discovering that sleep is needed for much more than just rest. It is during sleep that our brains sort through the day’s experiences and record the important ones as new memories. Sleep is thus essential to learning. Sleep is also when our brains rid themselves of waste chemicals. Human growth hormone, which is crucial to a child’s physical development, is primarily released during sleep.
Inadequate sleep has been linked to numerous physical and mental health issues. People who don’t sleep enough have a higher risk of obesity, diabetes, and depression. Inadequate sleep can lead to attention and hyperactivity issues in children. Children who don’t sleep enough have difficulty regulating their emotions properly, which can lead to problems in many aspects of their lives. And, as any teacher can tell you, sleep-deprived children simply don’t learn as well.
The National Sleep Foundation in the United States offers the following science-based recommendations for how much children should sleep. Preschoolers should get between 10 and 13 hours of sleep per day, children between the ages of 6 to 13 should get 9 to 11 hours of sleep per day, and teenagers should get 8 to 10 hours a day.
Besides getting enough sleep, children and parents should also practice good “sleep hygiene” to make getting to sleep easier. Electronics and televisions should be removed from the bedroom. Because the blue light emitted from electronic devices interferes with the brain’s ability to fall asleep, children should not use any screens (tablets, phones, computers, TVs) for at least 30 minutes before bed. Practices like meditation and mindfulness can help prepare children for bedtime. Snacks before bedtime should be high protein (nuts, unsweetened yogurt, peanut butter, cheese and crackers) instead of high sugar. Finally, form a regular bedtime routine and stick to it.
Good sleep and good learning go hand in hand. Make getting good sleep a priority for both your children and yourself. More information about the importance of sleep can be found at https://sleepfoundation.org.
Geography Bowl 2017
Grade 1A Homeroom Teacher
It's a Jungle Out There!!
Mid Term Show – Oct. 18
October 18 (Wednesday) at 1:15pm @ the Auditorium
Grade 4 Plant Dissection
Grade 4B Homeroom Teacher
Grade 5 Mold Terrarium
Grade 5B Homeroom Teacher
These were a few of the inquiries that the fifth graders investigated over the past few weeks. These inquiries took us on a tour of the web of life as we explored a handful of different ecosystems and how energy travels through each of them. In our mold terrariums, students observed how fungi decompose matter. Decomposers, they learned, play an important part in the web of life. They break down matter so that it can be recycled, and the energy cycle can start all over again.
Grade 3 Puppet Show
Grade 3B Homeroom Teacher
Grade 5 Field Trip to Metro Forest
Grade 5A Homeroom Teacher
October 13th – Rama IX Memorial Day (No school)
October 18 – It's a Jungle Out There Mid-Term Show
October 23 – 27 – Midterm Break
October 30 – Classes Resume
October 31 – Halloween Trick or Treat at School
|
The particle films are applied to crops to manage environmental challenges like high heat or sunburn, or manage pests, such as aphids or psyllids. Particle films can reduce infestations because the particle films hide the natural plant colors that help some insects find their plant host.
2-year old ‘Hamlin’ sweet orange in Florida with red or white particle films
How are particle films being used to help fight against huanglongbing (HLB)? Visit the Research Snapshot page to learn more: https://ucanr.edu/sites/scienceforcitrushealth/Research_Snapshots/Vincent
About Research Snapshots
We have developed short descriptions of research projects that aim to help in the fight against HLB. These projects include traditional breeding and genetic engineering to create resistant citrus varieties, psyllid modification, using other organisms to deliver HLB-resistance genes, and early detection of the bacterium in trees.
|
First grade is a time for students to grow their vocabulary. Students begin to move beyond sight words, spelling more complex words for the first time. By the end of first grade, your child will be able to write about and describe school, home, and the environment. You will be amazed at how much your child can spell!
At LoonyLearn, we want students to get excited about spelling through gaming. Students engage in learning through our fun spelling games made for young learners. Your child will fight pirates and use magic to spell three-letter to five-letter words. As your child plays their confidence grows and so does their spelling ability.
Our first-grade spelling word lists are meant to help your student as they begin to read more fluently. We created our lists to include similarly structured words, for example, mop, hop, and top. In first grade, each of our lists also includes common words. These common words appear frequently in Level 1 books and reading proficiency tests.
Here are some example first-grade lists:
- -ell Words and Common Words
- -ack Words and Common Words
- -i-e Words and Common Words
Each of the games below can be played with any of our LoonyLearn spelling lists and lists you create yourself. Just select the list you want to learn then choose the game you want to play.
Oh no! The spell-animals are out of their cages! Help the zoo keeper drop the good and bad spell-animals back into their correct cages.
In this game, the player is given a word and must decide if the word is correctly or incorrectly spelled. The player then places the word into the “good” or “bad” cages.
There are two kinds of water spelloons, good spelloons and mis-spelloons. Pop all the good water spelloons and move on to the next level.
In this game, the player is given six words and must select which of the words are correctly spelled. The player places the slingshot in front of the correctly spelled words to select them.
Willy the Wizard loves casting spells but he can’t spell words. Help Willy put the letters back in order to spell the spelling word. Tap on the letters to put them back in order.
In this game, the player must use the wand to capture the letters in the correct order. Spell the entire word correctly to move to the next level.
Oh no! The mean monsters have mixed up all the letters and hid them around the maze. Help Molly the Monster find all the letters in the correct order to spell the spelling word. Make sure to watch out for the mean monsters! Use your arrows to move around the cave.
In this game, the player navigates the maze to collect letters to spell the word. The player must avoid the purple monsters and the walls to grab the letters.
Help Freddy the Frog move around the busy road and river to collect letters in the correct order to spell the spelling word. Watch out for the cars and water!
In this game, the player uses the arrows to guide the frogs across the street to collect letters. The player must avoid the cars to spell the words.
Oh no! The aliens are attacking! Shoot down the correct letter and all the alien pods will explode.
In this game, the player moves the defender ship with the control arrows to shoot the correct letter. The player must spell the word letter-by-letter. Make sure to avoid the lasers from the alien invaders!
Help Speedy the dinosaur move around the race track to collect letters in the correct order to spell the spelling word. Watch out for the bad letters!
In this game, the player moves the car around the track with the arrows to collect the letters. The player must collect the letters in the correct order while avoiding the incorrect letters.
The Spell-Adactyl is hungry, but he can’t see the difference between good letters and bad letters. Help the Spell-Adactyl eat all the good letters to spell the spelling word.
In this game, the player moves the Spell-Adactyl using the arrows to collect the letters in the correct order. The player must collect the letters in the correct order while avoiding the incorrect letters.
|
Percents and ratios. Introduction to percents. What is a percent? Well, fundamentally, a percent is a fraction. The word percent comes from the Latin per centum, which means per 100. Similarly, even the percent sign can be thought of as a stylized version of divided by 100.
So that looks vaguely like that. Thus, percent means divided by 100, and 37% means the fraction 37 over 100 or the decimal 0.37. Similarly, 0.03% means the fraction 0.03 over 100, which of course is 3 over 10,000, or the decimal 0.0003. As you see, many of the rules covered in the decimal videos, especially multiples of ten, are relevant here.
And if what we're doing here, moving the decimal point back and forth, if this is something that is not familiar to you, I highly recommend watch the Multiples of Ten video before you watch the rest of this video. Because the rest of this video is not going to make much sense if you don't understand how to multiply and divide by 100, and move the decimal place around.
So, talking about that, changing from percents to decimals. This is simply dividing by 100, so we move the decimal point two places to the left. Here we have some percents, we want to change them to decimals, we move two places to the left. In some cases, we have to insert placeholding 0s. Changing from decimals to percents.
Here, we're, we're doing the opposite, un-dividing by 100, which is essentially multiplying by 100. Thus, we move the decimal point two places to the right. We have several decimals here. We're gonna move two places to the right. Notice that the final one, if we have a decimal greater than 1, it becomes a percent greater than 100%.
Changing from percents to fractions. This is easy. We just have to put the percent over 100. After that, we may have to simplify a little bit. So, for example, 20%. That's 20 over 100, which is one fifth.
92%, 92 over 100, which is 23 over 25. 0.02%, which is 0.02 over 100, or 2 over 10,000, and that simplifies to 1 over 5,000. So all three of them very easily become fractions. Changing from fractions to percents. This is trickier unless you know the fraction-to-decimal conversion discussed in Conversions: Fractions and Decimals.
So again, if you're not familiar with that particular video and those concepts are not familiar, please watch that and then come back and watch the rest of this video cuz this video is not gonna make a whole lot of sense if you don't know those conversions. Here we have some fractions. We want to change these to percents.
In order to change them to percents, first we're going to change them to decimals, and we know that we can approximate three-eighths as 0.375. We can approximate two-thirds as 0.666 repeating. We'll write it here as 0.6667. Once we have them in decimal form, we just slide the decimal place two places over to get a percent.
Of course, for fractions with 100 or 1,000 in the denominator, it's very easy to change to a decimal, which would give us a percent. So, for example, 59 over 100. Well, that obviously just becomes 59%. 17 over 1,000. That becomes 0.017, and we can write that as 1.7%.
Those recommendations are for exact conversions from fractions to decimals. Often on the test, we need to approximate percents, from fractions or from division. So, for example, 8 over 33. Suppose we multiply the numerator and the denominator by 3, then we'd get 24 over 99.
Well, 24 over 99 is going to be slightly larger than 24 over 100. Of course, when we make the denominator larger, we make the fraction a little bit smaller. 24 over 100, of course, is 24%, so 8 over 33 is gonna be slightly larger than 24%. That's a very good approximation. 11 over 14.
Here we can multiply the numerator and denominator by 7, and we'll get 77 over 98. And, of course, that's gonna be slightly larger than 77 over 100, which is 77%. So 11 over 14 is gonna be something slightly larger than 77%. That's also an excellent approximation. So in summary, we talked about what a percent is, we talked about changing between percents and decimals, changing back and forth.
We talked about changing back and forth between percents and fractions. And we talked about the very important topic of approximating fractions as percents.
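For readers who want to check this arithmetic with a quick program, here is a small, self-contained Java sketch (not part of the original lesson) that reproduces the conversions and approximations above:

```java
// Quick numeric check of the percent conversions and approximations discussed above.
public class PercentExamples {
    public static void main(String[] args) {
        // Percent to decimal: divide by 100 (move the decimal point two places left).
        System.out.println(37.0 / 100);             // 0.37

        // Decimal to percent: multiply by 100 (move the decimal point two places right).
        System.out.println(0.375 * 100 + "%");      // 37.5%

        // Fraction to percent via the decimal form: 3/8 = 0.375 = 37.5%.
        System.out.println(3.0 / 8 * 100 + "%");

        // Approximation trick: 8/33 = 24/99, slightly larger than 24/100 = 24%.
        System.out.println(8.0 / 33 * 100 + "%");   // about 24.24%

        // Approximation trick: 11/14 = 77/98, slightly larger than 77/100 = 77%.
        System.out.println(11.0 / 14 * 100 + "%");  // about 78.57%
    }
}
```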
|
Library of galaxy histories reconstructed from motions of stars
The CALIFA survey makes it possible to map the orbits of the stars in a sample of 300 galaxies, information that is fundamental to understanding how they formed and evolved.
Just as the Sun is moving in our Galaxy, the Milky Way, all the stars in galaxies are moving, but with very different orbits: some of the stars have strong rotations, while others may be moving randomly with no clear rotation. By comparing the fractions of stars on different orbits, we can find out how galaxies form and evolve. An international team of astronomers has derived directly, for the first time, the orbital distribution of a galaxy sample containing more than 300 galaxies of the local universe. The results, published in Nature Astronomy, are based on the CALIFA survey, a project developed at Calar Alto Observatory.
Galaxies are among the largest structures in the universe, and scientists study how they evolve to understand the history of the universe. Galaxy formation entails the hierarchical assembly of halos of dark matter (a type of matter that has not been directly observed and whose existence and properties are inferred from its gravitational effects), along with the condensation of normal matter at the halos' centers, where star formation takes place. Stars that formed from a settled, thin gas disk and then lived through dynamically quiescent times will present near-circular orbits, while stars with random motions are the result of turbulent environments, either at birth or later, with galactic mergers.
Thus, the motions of stars in a galaxy are like a history book, they record the information about their birth and growth environment, and it may tell us how the galaxy was formed. "However, the motion of each single star is not directly observable in external galaxies. External galaxies are projected on the observational plane as an image and we cannot resolve the discrete stars in it" -says Ling Zhu, researcher from the Max Planck Institute for Astronomy who leads the study-. "The CALIFA survey uses a recently developed technique, integral field spectroscopy, which can observe the external galaxies in such a way that it provides the overall motion of stars integrated along the line of sight at each position across the image. Thus, we can get kinematic maps of each galaxy."
The researchers then build models for each galaxy by superposing stars on different types of orbits. By constraining the model with the observed image and kinematic maps, they can find out the amount of stars moving on different types of orbits in each galaxy. They call it the stellar orbit distribution, which is described as the probability of orbits with different circularity (a parameter used to characterize the different types of orbits -for instance, our sun in the Milky Way disk is moving on a near circular orbit with near maximum circularity, while more random-motion dominated orbits have smaller circularity).
For this study, the team has built models for all 300 galaxies and found the stellar orbit distribution of those galaxies, representative of the general properties of galaxies in the local universe.
The maps show changes in galactic orbit distribution depending on the total stellar mass of the galaxies. The ordered-rotating orbits are most prominent in galaxies with total stellar masses of 10 billion solar masses, and least important for the most massive ones. Random-motion orbits unsurprisingly dominate the most massive galaxies (more than 100 billion solar masses). "This is the first orbit-based mass sequence across all morphological types. It includes rich information about a galaxy's past, basically whether it had been a quiet succession of only smaller mergers or was shaped by a violent major merger. Further studies are needed to understand the details", says Glenn van de Ven (ESO).
The researchers had found a new and accurate method of reading off a galaxy's history – and their survey with its data sets for 300 galaxies turned out to be the largest existing library of galaxy history books.
"This work highlights the importance of integral field spectroscopy and, in particular, of large-scale surveys such as the CALIFA project. The significant contribution of what we call "hot" orbits, a mixture of rotation and random movements of the stellar component, poses important challenges to cosmological models of galaxy formation and evolution," says Rubén García Benito, a researcher at the Institute of Astrophysics of Andalusia (IAA-CSIC) participating in the project.
Given that CALIFA’s selection function allows the correction of the sample to volume averages, their results represent an observationally determined orbit distribution of galaxies in the present-day universe. They thus lend themselves to direct comparison with samples of galaxies drawn from cosmological simulations. In this sense, these results open a quantitative way – and at the same time a qualitatively new window – for comparing galaxy simulations to the observed galaxy population in the present-day universe.
L. Zhu et al. "The stellar orbit distribution in present-day galaxies inferred from the CALIFA survey". Nature Astronomy.
The German-Spanish Calar Alto Observatory is located at Sierra de los Filabres, north of Almería (Andalucía, Spain). It is jointly operated by the Instituto Max Planck de Astronomía in Heidelberg, Germany, and the Instituto de Astrofísica de Andalucía (CSIC) in Granada, Spain. Calar Alto has three telescopes with apertures of 1.23m, 2.2m and 3.5m. A 1.5m aperture telescope, also located at the mountain, is operated under control of the Observatorio de Madrid.
|
Astronomy Picture of the Day
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2011 August 8
Explanation: What is causing these dark streaks on Mars? A leading hypothesis is flowing — but quickly evaporating — water. The streaks, visible in dark brown near the image center, appear in the Martian spring and summer but fade in the winter months, only to reappear again the next summer. These are not the first markings on Mars that have been interpreted as showing the effects of running water, but they are the first to add the clue of a seasonal dependence. The above picture, taken in May, digitally combines several images from the HiRISE instrument on the Mars Reconnaissance Orbiter (MRO). The image is color-enhanced and depicts a slope inside Newton crater in a mid-southern region of Mars. The streaks bolster evidence that water exists just below the Martian surface in several locations, and therefore fuel speculation that Mars might harbor some sort of water-dependent life. Future observations with robotic spacecraft orbiting Mars, such as MRO, Mars Express, and Mars Odyssey will continue to monitor the situation and possibly confirm — or refute — the exciting flowing water hypothesis.
|
Based on the Jewish Sages.
1. Shavuot (Pentecost) was, originally, an agricultural holiday, celebrating the first harvest/fruit by bringing offerings (Bikkurim-ביכורים) to the Temple in Jerusalem. Following the destruction of the second Temple and the resulting exile in 70 AD – which raised the need to entrench Torah awareness in order to avoid spiritual and physical oblivion – Shavuot became a historical/religious holiday of the Torah. The Torah played a key role in shaping the U.S. Constitution and the American culture, as well as the foundations of Western democracies.
Shavuot is celebrated by decorating homes and houses of worship with Land of Israel-related crops and flowers, demonstrating the 3,500-year-old connection between the Land of Israel (pursued by Abraham), the Torah of Israel (transmitted by Moses) and the People of Israel (united by David). Shavuot is the holiday of humility, as befits the Torah values, Moses (“the humblest of all human beings”), the humble Sinai desert and Mt. Sinai, a modest, non-towering mountain. Abraham, David and Moses are role models of humility, and their Hebrew acronym (Adam – אדמ) means “human-being.” Humility constitutes a prerequisite for studying the Torah, for constructive human relationships and for effective leadership.
Shavuot – a spiritual holiday – follows Passover – a national liberation holiday: from physical liberation (the Exodus) to spiritual liberation/enhancement (the Torah), in preparation for the return to the Homeland.
2. The holiday has 7 names: The fiftieth (חמישים), Harvest (קציר), Giving of the Torah (מתן תורה), Shavuot (שבועות), Offerings (ביכורים), Rally (עצרת) and Assembly (הקהל). The Hebrew acronym of the seven names is “The Constitution of the Seven” – חקת שבעה.
Shavuot reflects the centrality of “seven” in Shavuot and Judaism. The Hebrew root of Shavuot (שבועות) is the word/number Seven (שבע – Sheva), which is also the root of “vow” (שבועה – Shvuah), “satiation” (שובע – Sova) and “week” (שבוע – Shavuah). Shavuot is celebrated 7 weeks following Passover. God employed 7 earthly attributes to create the universe (in addition to the 3 divine attributes). The Sabbath is the 7th day of the Creation in a 7-day week. The first Hebrew verse in Genesis consists of 7 words. The 7 beneficiaries of the Sabbath are: you, your son and daughter, your male and female servants, your livestock and the stranger. God created 7 universes – the 7th hosts the pure souls, hence “Seventh Heaven.” There are 7 compartments of hell. There are 7 basic human traits, which individuals are supposed to resurrect/adopt in preparation for Shavuot. 7 key Jewish/universal leaders – Abraham, Isaac, Jacob, Moses, Aharon, Joseph and David – are commemorated as distinguished guests (Ushpizin in Hebrew) during the Tabernacle holiday, representing the 7 qualities of the Torah. 7 generations passed from Abraham to Moses. There are 7 species of the Land of Israel (barley, wheat, grape, fig, pomegranate, olive and date/honey). In Hebrew, the number 7 represents multiplication (שבעתיים – Shivatayim). Grooms and brides are blessed 7 times. There are 7 major Jewish holidays (Rosh Hashanah, Yom Kippur, Tabernacles, Chanukah, Purim, Passover and Shavuot); 7 directions (north, south, west, east, up, down, one’s inside); 7 continents and 7 oceans and major seas in the globe; 7 world wonders; 7 notes in a musical scale; 7 days of mourning over the deceased; 7 congregants read the Torah on each Sabbath; 7 Jewish Prophetesses (Sarah, Miriam, Devorah, Chana, Abigail, Choulda and Esther); 7 gates to the Temple in Jerusalem; 7 branches in the Temple’s Menorah; and 7 Noah Commandments. Moses’ birth and death day was on the 7th day of Adar. Jethro had 7 names and 7 daughters. Joshua encircled Jericho 7 times before the wall tumbled down. Passover and Sukkot (Tabernacles) last for 7 days each. The Yom Kippur prayers are concluded by reciting “God is the King” 7 times. Each Plague lasted for 7 days. Jubilee follows seven 7-year cycles. According to Judaism, slaves are liberated, and the soil is not cultivated, in the 7th year. Pentecost is celebrated on the 7th Sunday after Easter.
3. Shavuot is celebrated 50 days following Passover, the holiday of liberty. The Jubilee– the cornerstone of liberty and the source of the inscription on the Liberty Bell (Leviticus 25:10) – is celebrated every 50 years. Judaism highlights the constant challenge facing human beings: the choice between the 50 gates of wisdom (the Torah) and the corresponding 50 gates of impurity (Biblical Egypt). The 50th gate of wisdom is the gate of deliverance. The USA is composed of 50 states.
4. Shavuot sheds light on the unique covenant between the Jewish State and the USA: Judeo-Christian Values. These values impacted the world view of the Pilgrims, the Founding Fathers and the US Constitution, Bill of Rights, Separation of Powers, Checks & Balances, the abolitionist movement, etc. John Locke wanted the 613 Laws of Moses to become the legal foundation of the new society established in America. Lincoln’s famous 1863 quote paraphrased a statement made by the 14th century British philosopher and translator of the Bible, John Wycliffe: “The Bible is a book of the people, by the people, for the people.”
5. Shavuot is the second of the 3 Jewish Pilgrimages (Sukkot -Tabernacles, Passover and Shavuot – Pentecost), celebrated on the 6th day of the 3rd Jewish month, Sivan. It highlights Jewish Unity, compared by King Solomon to “a three folds cord, which is not quickly broken” (Ecclesiastes 4:12). The Torah – the first of the 3 parts of the Jewish Bible – was granted to the Jewish People (which consists of 3 components: Priests, Levites and Israel), by Moses (the youngest of 3 children, brother of Aharon and Miriam), a successor to the 3 Patriarchs (Abraham, Isaac and Jacob) and to Seth, the 3rd son of Adam and Eve. The Torah was forged in 3 manners: Fire (commitment to principles), Water (lucidity and purity) and Desert (humility and faith-driven defiance of odds). The Torah is one of the 3 pillars of healthy human relationships, along with labor and gratitude/charity. The Torah is one of the 3 pillars of Judaism, along with the Jewish People and the Land of Israel.
|
Ways and development of Holocaust literature
The Diary of Anne Frank
Charlotte Delbo: ‘Auschwitz and After’
The second generation represented by Art Spiegelman’s “Maus I & II”
Half a century after the last liberation of the death camps in 1945, which were located in a vast part of Europe, it is not just scientists and historians who are still interested in the Holocaust, one of the most traumatic events of modern European history. For the rest of us, Holocaust literature is seemingly a helpful method to reveal testimonies and survivor experiences. Thus, this topic has reached a certain status in literature. Today, a huge variety of texts deal with the Holocaust in multi-faceted ways, which cover nearly all literary genres. This essay will primarily concentrate on the works of Anne Frank (‘A Diary of a Young Girl’), Charlotte Delbo (‘Auschwitz and After’) and Art Spiegelman (’The Complete Maus’). The second focus, then, will be on Primo Levi’s ‘The Drowned and the Saved’, who was also studied on the module. These texts are outstanding and inimitable in how they treat the Holocaust, how they have reached people’s hearts and minds, and how other people began to deal with the happenings of these dreadful times after their publication. All texts represent examples of different literary genres like Anne Frank’s diary, or Art Spiegelman’s comic book. Charlotte Delbo’s work combines three types of literature in one masterpiece, namely prose, poetry and drama; whereas Levi’s account is a more or less philosophical analysis of the question why all this could happen. However, reading such literature does not automatically imply that the Holocaust in itself can fully be understood. On the contrary, it can only provide a way of approaching the circumstances, which millions of prisoners endured. Hence, many Holocaust survivors tried to use the art of writing to overcome the terrifying things they had seen and - most of all - the things they had to endure physically and psychologically in the concentration and death camps, or in the Jewish ghettos, and from which they had and still continued to suffer. They had to struggle between the desire to forget, but yet face the memory every day, and the impulse to remember, uncover, and record every detail of its reality. To speak about the unspeakable seemed impossible. “Bearing witness, therefore, was not likely to be the first thing on the inmate’s mind”.
How was it that not just those who suffered under Hitler’s regime, but the second generation, their children, were able to find the will to write down their testimonies? Considering the huge variety of Holocaust texts, available nowadays in nearly every common library, and the psychological obstacles of giving testimony, what was it that gave them their inspiration and from where did it come? Yet, when we talk about inspiration in literature, the contextual meaning of the term is “the process of having one’s mind or creative abilities stimulated, especially in […] literature”. To think about the Holocaust with all its destruction and the mass murder of millions of Jews, war and political prisoners, as well as deemed social inferiors, and yet to think about creative stimuli seems to be contradictory and, moreover, to evoke protest. In fact, people nowadays who still try to understand the Holocaust by studying Holocaust literature together with the survivors may find the term ‘inspiration’ inappropriate or degrading for what had actually happened behind the curtains of injustice. As Elie Wiesel pointed out correctly, ‘Literary Inspiration’ in combination with Holocaust literature might thus arouse false inferences. According to the actual definition of the term ‘inspiration’, it seems ridiculous that a mind like that of Charlotte Delbo’s or Anne Frank’s should have been driven by experiences like thirst, fear and, on the whole, the struggle to survive, which influenced them presently or posthumously whilst writing. Accordingly, such traumatising effects may neither evoke fantastic pictures of the past, nor recall any beautiful events in or around the camps. Thus, authors of Holocaust literature have probably not been inspired in a positive or enhanced manner, but in a very destructive, negative sense. They obviously developed a certain need to inform mankind of the horrors and destruction of the Nazi regime. An experience like genocide, which is “etched on [a survivor’s memory so] that [he or she] cannot forget one moment of it”, only drives an author like Charlotte Delbo to make people aware that this monstrous event should never be repeated.
This term paper, consequently, argues that Holocaust literature is full of literary inspirations and a gift to everyone who wants to learn about this dreadful event. Consequently, since Holocaust literature has already become an independent genre, this type cannot be seen as a simple matter of literature. Moreover, the association with the idea that extermination camps like Auschwitz, Treblinka, or Belzec ended up in fantasy or beauty, as Elie Wiesel questions in his statement, is wrong. On the contrary, Holocaust literature, and thus the art of writing in itself, requires fantasy and lives from inspirations, even though these effects stem from rather negative and destructive memories, as this term paper is going to show.
Ways and development of Holocaust literature
Nevertheless, we have to inquire into the detail, why it seems wrong to associate Holocaust literature with such terms like fantasy, beauty, or inspiration. As discussed before, there must be a certain purpose for the victims of the Nazi hell, which led them to writing. But how were they able to write at all? How were the survivors able to bear witness, after all that had happened to them in the camps? Moreover, all of them had to struggle with the overwhelming memory, a memory of dreadful experiences which they could never drive out of their minds, and thus the suffering went on even after Auschwitz or other camp experiences. This memory also affected their use of language, even the most ordinary of words. The omnipresent anxieties like “fate and suffering disappeared from the vocabulary of Auschwitz, as did death itself, to be replaced by the single fear, shared by all, of how one would die.” Since this language barrier hindered the Holocaust survivors from telling, because nobody could understand them, bearing witness first of all became a desire of secondary importance. Yet, according to Primo Levi’s account in ‘The Drowned and the Saved’, the survivors cannot be the real witnesses. On the contrary, those who have suffered until their deaths, that is those who ‘drowned’, are the ones who are really able to bear witness. In his vision, “[t]hey are the rule, [writers of Holocaust literature] are the exception.” Yet, why should those, who are dead now, who are not able anymore to tell what happened to them, be the real witnesses? Their testimony cannot be heard anymore. Consequently, those who have survived are required to tell. However, people did not believe the stories these ghostly figures returning from the camps had to tell. Very few survivors felt the desire to talk and even less to write about their experiences. A crucial turning point was made, when after the end of the war many Hitler supporters were still convinced that nothing would happen to them, and nobody would believe the stories of those skinny and grey looking survivors of the concentration camps. Suddenly, many who remained mute decided to break their silence. To bear witness and to tell the truth, to make people believe those horrific events, which happened behind the walls and wires of Auschwitz and other extermination or concentration camps, should become a kind of rebellion against the Nazi atrocities after the event. In order that the annihilation of the Jews should not be considered as a victory for Hitler and his henchmen, many Holocaust survivors were convinced that being silent is the wrong way to overcome this tragedy. Notwithstanding that this set the ball rolling, it was anything but easy to speak about the horrors the victims of genocide had to go through. Thus, to testify on paper and to publish their stories was seen as a new method of defence. The art of writing, which was at first rejected by those who were able to escape the horror, was suddenly rediscovered. Many later authors of Holocaust literature, like Adorno, for instance, began to “recogniz[e] that the alternative to art is silence and silence would have given Hitler his ultimate triumph.”
The Diary of Anne Frank
But there are also other motives, which might release the tension established by the things they experienced. Moreover, writing is a well-known therapy in helping people deal with psychological problems, from which all survivors suffered. The pain could never be eased. In fact, the “Auschwitz memory remained” for Charlotte Delbo. As for many other survivors, the experiences could not be left behind, in Auschwitz or Ravensbrueck, Dachau, or Sobibor. On the contrary, they were always present and traumatising, and, thus, the witnesses had to find a way to live with the ever-present memory.
Taking ‘The Diary of a Young Girl’ into consideration, we have to bear in mind Anne’s age at first. She was about 13 years old, when she started keeping her diary. In the days of Anne Frank, most Holocaust testimonies were written by men. Those written by women were ignored by scholars to a large extent. Many women wrote diaries and letters, though, which still exist today. Since Anne was a teenager at the time, she had to undergo the ups and downs of puberty. Moreover, as it was time to go into hiding, she had to deal with these things on her own. She found it difficult to confide in her friends from school, but friends are crucial at that age. So she writes: “Yes, paper does have more patience, and since I’m not planning to let anyone else read this stiff-backed notebook grandly referred to as a ‘diary’, unless I should ever find a real friend, it probably won’t make a bit of difference.” Anne did not get along very well with her mother, but their relationship was not necessarily hostile. Interestingly, she distances herself from the traditional role of woman- and motherhood, while she is observing her mother in the Annexe. Nevertheless, Anne does not reject motherhood, but searches for a possibility to combine it with a career, for women should play a wider role in society. Her mother was just a mother and a housewife, but her father, on the other hand, managed to have a career as well as being a good father. Though, one of the most important things for Anne is to lead an independent life. In so far, Anne’s situation and attitude can be compared to those of Virginia Woolf in ‘A Room of One’s Own’. Woolf demands more rights for women writers, so that they were able to develop themselves freely, and shake off the boundaries of archaic female duties. It cannot be denied, though, that Anne seems to follow the demands of Virginia Woolf to a great extent, but because of Anne’s age, we cannot hope for any influence from Woolf. Interestingly, however, the time gap between both works is extremely small, for ‘A Room of One’s Own’ was first published in 1929. Thus, Anne shows how sociocritical she had already become at that time. Consequently, ‘The Diary of a Young Girl’ is not so much part of Holocaust literature, for it is a story of growing up dealing with common teenage problems. However, it covers the Holocaust inasmuch as this text in part describes the circumstances, in which Anne and her family had to live. It shows what it was like to be a fugitive and to live in hiding, which Anne’s mother calls “the art of living”. She explains her innermost feelings to someone, who cannot help her out of this misery. She calls this ‘person’ in her diary ‘Kitty’, whom she considers as a new friend. Kitty is a substitute friend as well as an ideal mother, and most of all Kitty never scorns her. Her diary is her confidant, in whom she can always trust. Those people who once kept a diary may know how helpful and trustworthy such a mute companion can be. Moreover, due to the cathartic style of her writing, the reader feels directly addressed and forced to help, because Anne’s innermost desires and fantasies are revealed. The diary is also a method for her to begin a dialogue. She is searching this dialogue throughout her time in the Annexe; it is, on the one hand, a way of satisfying her need to communicate, but, on the other hand, also a search for being understood and accepted. 
Apparently, it is a refuge for her to withdraw into herself from those inner doubts concerning her family and her problems with her own body because of the onset of puberty. Anne uses her diary to relieve pressure and to prevent her from exploding, as young people are full of emotions, which surface in various ways. Hence, her inspirations for writing this diary derive from positive as well as negative experiences in the Annexe. The decision to publish her diary, however, was as a result of a Dutch radio broadcast announcement, in which people were asked to write down their war experiences in diaries and letters in order to collect and preserve them after the war for later generations. Hence, it follows that Anne has written with a certain purpose, namely to inform the people of the outside world, outside the Annexe, how she and her family, as well as the van Daans and Alfred Dussel, tried to cope with the situation. These, however, were not the original names of the people sharing the secret hiding place with the Frank family. Since Anne “protected the[ir] identity […], she was able to write truthfully and without fear of reproach.” She even revised her regular accounts in order to publish her diary after the war; she desired to become a writer, although she doubted her talent. Consequently, ‘The Diary of a Young Girl’ is far from being simply a matter of literature, as Elie Wiesel questions. On the contrary, Anne’s life and her emotional world are tied to her diary. If the diary were lost, she, too, would be lost.
Reference Guide, p. 339
Oxford Dictionary, p. 591
Delbo, p. 142
ibid, p. xi
Langer, p. 604
Bartov, p. 229
Levi, pp. 63/ 64
Staging the Holocaust, p. 1
Levi, p. ix
Delbo, p. 228
Internet resource: Cohen
Delbo, p. xi
Frank, p. 6
Woolf, ch. V, particularly p. 126
Frank, p. 129
ibid, p. 7
ibid, p. 84/ 85
ibid, p. 242
Reference Guide, p. 90
Frank, p. 255
|
Social liberalism is different from classical liberalism: it thinks the state should address economic and social issues. Examples of problems the state might work on include unemployment, health care, and education. For example, there was no state support for general education in Britain before about 1870. Support for poor people came from private charities, and the church.
A commitment to a fair distribution of wealth and power led gradually (over about a century) to support for public services as ways of fairly distributing wealth. Democracy improved by extending the franchise (the right to vote) to all adults. Some countries which did not have democracy now do have it.
According to social liberalism, the government should also expand civil rights. Under social liberalism, the good of the community is viewed as harmonious with the freedom of the individual. Many parts of the capitalist world have used social liberal policies, especially after World War II.
John Rawls published a book called "A Theory of Justice" in 1971, in which he suggested that ‘new liberalism’ is focused on developing a theory of social justice. This idea of liberalism leads to issues of sharing, equality, and fairness in social and political circumstances. It is controversial because it attacks neoliberalism.
- Howarth, David 2007. What is social liberalism? In Reinventing the state: social liberalism for the 21st century. Duncan Brack, Richard S. Grayson, David Howarth (eds). Politico Publishing. ISBN 978-1-84275-218-0.
- Ruggiero, Guido De 1959. The history of European liberalism, 155–157.
- Faulks, Keith 1999. Political sociology: a critical introduction. Edinburgh University Press, 7p3
- Rawls J. 1999. A theory of justice. Cambridge, MA: Harvard University Press.
- Slomp, Hans (2000). European politics into the twenty-first century: integration and division. Westport: Greenwood. ISBN 0275968146.
- Hombach, Bodo (2000). The politics of the new centre. Wiley-Blackwell. ISBN 9780745624600.
|
Every summer, millions of people head to the coast to soak up the sun and play in the waves. But they aren’t alone. Just beyond the crashing surf, hundreds of millions of tiny sea urchin larvae are also floating around, preparing for one of the most dramatic transformations in the animal kingdom.
Scientists along the Pacific coast are investigating how these microscopic ocean drifters, which look like tiny spaceships, find their way back home to the shoreline, where they attach themselves, grow into spiny creatures and live out a slow-moving life that often exceeds 100 years.
“These sorts of studies are absolutely crucial if we want to not only maintain healthy fisheries but indeed a healthy ocean,” says Jason Hodin, a research scientist at the University of Washington’s Friday Harbor Laboratories.
Sea urchins reproduce by sending clouds of eggs and sperm into the water. Millions of larvae are formed, but only a handful make it back to the shoreline to grow into adults.
It may sound like a risky life strategy. But in the ocean, it works. Nearly every animal that lives along the shore — from mussels to sea stars to some species of fish — sends its young on an open ocean journey before they return home to grow into adults along the shoreline.
“One of the big challenges in understanding marine life cycles is understanding how larvae do this,” says Hodin, who is working with a research team that is trying to learn how purple sea urchins find their way home.
“How do they go from this vast open ocean and make their way back to very specific shoreline habitat?”
Hodin said it is similar to a housing search. When you are looking for a place to live, the first step is to decide on a neighborhood. Hodin, along with Brian Gaylord of the UC Davis Bodega Marine Laboratory in Bodega Bay and Matthew Ferner of the San Francisco State University Romberg Tiburon Center for Environmental Studies in Tiburon, recently found that waves — specifically the strong, thrashing turbulence found in the urchin’s intertidal habitat — play a role in the larval urchin’s journey home.
“We think that turbulence is basically an indicator that they’re in a good neighborhood,” he says.
In fact, the team discovered that the crashing waves actually make the larval urchin — called a pluteus — develop faster.
“The turbulence acts as a primer,” Gaylord says. “It sort of pushes them into this settlement process earlier than we knew they would do this.”
Once the pluteus has found a neighborhood to settle in, it finds a home by using more local cues, like the presence of other adults or certain types of algae. It will then undergo a complete transformation.
“If you take a look at these marine larvae, they look literally nothing like the adults,” says Hodin. “In a sea urchin or a sea star, they even have a totally different body symmetry.”
The larval urchin drifts in the ocean currents as a member of the plankton for a month or longer. How does it change from a tiny drifter the size of a grain of sand to a bottom-dwelling ball of spines?
Halfway through its voyage out to sea, “something very interesting happens,” Hodin says. “They do a little trick to try to make that transformation from being a larva to being a juvenile happen faster.” They begin to grow the juvenile urchin form — a miniature adult — inside of the larva’s body.
When it reaches the rocky shore, the juvenile urchin bursts out.
“It sticks its little tube feet out of the side of the little pluteus larva swimming around, and it grabs hold of the rocks or the bottom of the seafloor,” says Nat Clarke, a graduate student in Chris Lowe’s laboratory at Stanford’s Hopkins Marine Station in Pacific Grove.
Within hours, it begins to resemble the purple, spiky sea urchin that beachgoers regularly see in tidepools and along the ocean bottom.
Urchins commonly live for decades. Some can live for more than a century. Scientists know this because nuclear testing in the 1950s left trace amounts of radioactive material in red sea urchins’ shells, enabling researchers to calculate the sea urchins’ age.
Adult urchins spawn throughout their lives, sending their young out to sea just as their own parents did. Somewhat like salmon, urchins may come back to the place they were born, although scientists aren’t sure yet how or why.
“Research lately has been very, very strongly suggesting that most larvae come back to somewhere near the same shoreline that their parents came from,” Hodin says. “It’s something that people didn’t realize 15 to 20 years ago. There’s a lot more connectivity between the shoreline and the waters offshore where the babies are.”
This article is reproduced with permission from KQED Science. It was first published on August 23, 2016. Find the original story here.
|
Photosynthesis is the process by which plants use sunlight to produce energy. The process can be a challenging topic, difficult to teach, unless visual activities are used. Visual activities show children the way photosynthesis works. These projects can vary from the simplest drawing activity to a full science experiment in which growing plants are used. These activities can be used in the classroom environment, but are simple enough to do at home too.
Start by getting the students to draw a flower on a piece of paper. Ask them to continue their drawing by adding the sun, water, soil and rain. Next, get them to write carbon dioxide and draw an arrow towards the flower. On the opposite side, write the word oxygen and draw another arrow, but away from the flower this time. At the bottom of the plant, draw a sugar cube. Make sure to explain the process of photosynthesis as they go along.
Give each student two paper cups with a quick-growing plant potted inside. Ask them to place one cup in a dark room and the other in the sunlight on a windowsill. Each child needs to water both flowers throughout the week. After a week has passed, get the children to bring over both their plants and ask them to evaluate the two. Explain that the plant in the dark room had a sunlight deficiency, so photosynthesis wasn’t possible, and as a result the plant looks limp and is dying.
Have the students place a healthy, growing, leafy plant by the window for several days. Have them tape construction paper over some of the leaves. After several more days, have them remove the paper. The leaves that were covered will be paler than the others. Chlorophyll is what gives leaves their color, and without sunlight the leaves begin to lose that color.
Photosynthesis Chemical Experiment
Purchase some small plants and have your students put them in test tubes filled with water. Plug the openings of the test tubes and place them in sunlight. Before long, bubbles will appear on the sides of the test tubes. These bubbles are oxygen released during photosynthesis, showing the plants changing carbon dioxide and water into food.
|
06 March 2017
Forests, especially tropical forests, are home to thousands of species of trees—sometimes tens to hundreds of tree species in the same forest—a level of biodiversity ecologists have struggled to explain. In a new study published in the journal Proceedings of the National Academy of Sciences (PNAS), researchers at the International Institute for Applied Systems Analysis (IIASA) and their colleagues in Australia are now providing a first model that elucidates the ecological and evolutionary mechanisms underlying these natural patterns.
“Forests in particular and vegetation in general are central for understanding terrestrial biodiversity, ecosystem services, and carbon dynamics,” says IIASA Evolution and Ecology Program Director Ulf Dieckmann. Forest plants grow to different heights and at different speeds, with the tallest trees absorbing the greatest amounts of sunlight, and shorter trees and shrubs making do with the lower levels of sunlight that filter through the canopy. These slow-growing shade-tolerant species come in an unexpectedly large number of varieties—in fact, far more than ecological models have been able to explain until now.
Traditional ecological theory holds that each species on this planet occupies its own niche, or environment, where it can uniquely thrive. However, identifying separate niches for each and every species has been difficult, and may well be impossible, especially for the observed plethora of shade-tolerant tropical trees. This raises the fundamental question: are separate niches really always needed for species coexistence?
In the new study, the researchers combined tree physiology, ecology, and evolution to construct a new model in which tree species and their niches coevolve in mutual dependence. While previous models had not been able to predict a high biodiversity of shade-tolerant species to coexist over long periods of time, the new model demonstrates how physiological differences and competition for light naturally lead to a large number of species, just as in nature. At the same time, the new model shows that fast-growing shade-intolerant tree species evolve to occupy narrow and well-separated niches, whereas slow-growing shade-tolerant tree species have evolved to occupy a very broad niche that offers enough room for a whole continuum of different species to coexist—again, just as observed in nature.
Providing a more comprehensive understanding of forest ecosystems, the resulting model may prove useful for researchers working on climate change and forest management. Dieckmann says, “We hope this work will result in a better understanding of human impacts on forests, including timber extraction, fire control, habitat fragmentation, and climate change.”
The study was led by Daniel Falster at Macquarie University in Australia, who was a participant in the 2006 IIASA Young Scientists Summer Program.
Falster DS, Brännström Å, Westoby M, Dieckmann U (2017). Multi-trait successional forest dynamics enable diverse competitive coexistence. Proceedings of the National Academy of Sciences (PNAS). pure.iiasa.ac.at/14354
Last edited: 06 June 2017
International Institute for Applied Systems Analysis (IIASA)
Schlossplatz 1, A-2361 Laxenburg, Austria
Phone: (+43 2236) 807 0 Fax:(+43 2236) 71 313
|
What It Is
A pelvic ultrasound is a safe and painless test that uses sound waves to make images of the pelvis.
During the examination, an ultrasound machine sends sound waves into the pelvic area and images are recorded on a computer. The black-and-white images show the internal structures of the pelvis, such as the bladder, and in girls, the ovaries, uterus, cervix, and fallopian tubes.
Why It's Done
Doctors order a pelvic ultrasound when they're concerned about a problem in the pelvis.
A pelvic ultrasound can be used to determine the shape, size, and position of organs in the pelvis, and can detect tumors, cysts, or extra fluid in the pelvis, and help find the cause of symptoms such as pelvic pain, some urinary problems, or abnormal menstrual bleeding in girls.
Pelvic ultrasounds are used to monitor the growth and development of a baby during pregnancy and can help in diagnosing some problems with pregnancy.
Usually, you don't have to do anything special to prepare for a pelvic ultrasound, although the doctor may ask that your child drink lots of fluids before the exam so that he or she arrives with a full bladder.
If the ultrasound is done in an emergency situation, your child may be given fluids through an intravenous catheter (IV) or through a urinary catheter to help fill the bladder.
You should tell the technician about any medications your child is taking before the test begins.
The pelvic ultrasound will be done in the radiology department of a hospital or in a radiology center. Parents are usually able to accompany their child to provide reassurance.
Your child will be asked to change into a cloth gown and lie on a table. The room is usually dark so the images can be seen clearly on the computer screen. A technician (sonographer) trained in ultrasound imaging will spread a clear, warm gel on the lower abdomen over the pelvic area, which helps with the transmission of the sound waves.
The technician will then move a small wand (transducer) over the gel. The transducer emits high-frequency sound waves and a computer measures how they bounce back from the body. The computer changes those sound waves into images to be analyzed.
Sometimes the doctor will come in at the end of the test to meet your child and take a few more pictures. The procedure usually takes less than 30 minutes.
What to Expect
The pelvic ultrasound is painless. Your child may feel a slight pressure on the lower belly as the transducer is moved. Ask your child to lie still during the procedure so the sound waves can produce the proper images. The technician may ask your child to lie in different positions or hold his or her breath briefly.
Babies might cry in the ultrasound room, especially if they're restrained, but this won't interfere with the procedure.
Getting the Results
A radiologist (a doctor who's specially trained in reading and interpreting X-ray, ultrasound, and other imaging studies) will interpret the ultrasound results and then give the information to the doctor, who will review them with you. If the test results appear abnormal, the doctor may order further tests.
In an emergency, the results of an ultrasound can be available quickly. Otherwise, they're usually ready in 1-2 days. In most cases, results can't be given directly to the patient or family at the time of the test.
Risks
No risks are associated with a pelvic ultrasound. Unlike X-rays, radiation isn't involved with this test.
Helping Your Child
Some younger kids may be afraid of the machinery used for the ultrasound. Explaining in simple terms how the pelvic ultrasound will be conducted and why it's being done can help ease any fear. You can tell your child that the equipment takes pictures of his or her belly. Encourage your child to ask the technician questions and to try to relax during the procedure, as tense muscles can make it more difficult to get accurate results.
If You Have Questions
If you have questions about the pelvic ultrasound, speak with your doctor. You can also talk to the technician before the exam.
Reviewed by: Yamini Durani, MD
|
Degrees, radians, retinal size and sampling
Bruno A. Olshausen
Psych 129 - Sensory processes
When we open our eyes, an image of the world is projected onto the retinae. The intensity at each point in the image is converted to voltage by photoreceptors arrayed across each retina, and thus begins our perception of the world. In order to get a better understanding of this process, it is helpful to know how many photoreceptors or ganglion cells are sampling a given object or region in the image. This tells us the resolution at which an object is represented, which in turn limits how much of its structure can be perceived (analogous to having a high-resolution or low-resolution computer display). Here, we show how to compute the resolution at which an object is represented on the retina, and we discuss its implications for perception.
Degrees and radians
A convenient way to measure the distance between two points along the retina, or the distance between two points in visual space, is in terms of the angle between them. This angle is constructed by drawing a line from one point to the center of the lens/cornea, and then back to the other point, as shown in Figure 1.
Figure 1: Measuring distance along the retina or in visual space in terms of angle.
The reason this is a convenient measure of distance is that the size of an object on the retina and its size in the visual world are equivalent in terms of angle, so it saves us a lot of conversion back and forth. If an object subtends 10° of visual space, it also subtends 10° on the retina - simple as that.
How do you compute the angle subtended by an object in the visual world? One way to do it exactly is with trigonometry, as shown in Figure 2. Take one-half the object size divided by the distance from the eye, and the inverse tangent of this ratio will give you half the angle. But there is a simpler, approximate way to do it using simple geometry - i.e., in terms of radians. Measuring an angle in radians basically tells you what fraction (or multiple) of the radius the object subtends in terms of arc length. Shown in Figure 3 is an arc length of one radian, equal to the length of the radius. If the object is small relative to the radius, then dividing its size by the radius will give an approximately correct angle in radians, even though the object doesn't curve like the arc depicted in the figure. Thus, writing S for the object's size and D for its distance from the eye, we have:

angle in radians ≈ S/D.

Figure 2: Trigonometric method for computing the angle subtended by an object.

Figure 3: Geometric method for computing the angle subtended by an object.
Once you have the angle in radians, you simply convert to degrees by multiplying by 180/π:

angle in degrees = angle in radians × (180/π) deg/rad.
To get a feel for the sizes of various objects in degrees: the moon subtends about 0.5 degrees, your thumb subtends about two degrees (when held at arm's length), and your average computer monitor subtends about 30 degrees (assuming you are viewing it from two feet away).
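To make these formulas concrete, here is a minimal Python sketch (not part of the original notes) that computes the visual angle both ways: exactly with the arctangent, and approximately with the small-angle radian shortcut. The function name and the specific sizes and distances in the example calls (a 2 cm thumb at roughly arm's length, a 33 cm-wide monitor at two feet) are illustrative assumptions chosen to reproduce the figures quoted above.

import math

def visual_angle_deg(size, distance, exact=True):
    """Visual angle (degrees) subtended by an object of a given size seen
    from a given distance (both in the same units).
    exact=True  uses trigonometry: angle = 2 * atan((size/2) / distance)
    exact=False uses the small-angle approximation: angle ~ size/distance radians"""
    if exact:
        angle_rad = 2 * math.atan((size / 2) / distance)
    else:
        angle_rad = size / distance               # radians; good when size << distance
    return angle_rad * 180 / math.pi              # convert radians to degrees

# A thumb about 2 cm wide held at arm's length (about 57 cm):
print(visual_angle_deg(2, 57))               # ~2 degrees (exact)
print(visual_angle_deg(2, 57, exact=False))  # ~2 degrees (approximation agrees closely)

# A monitor about 33 cm wide viewed from two feet (about 61 cm):
print(visual_angle_deg(33, 61))              # ~30 degrees

For objects this small relative to the viewing distance, the two methods agree to within a fraction of a percent, which is why the radian shortcut is used throughout these notes.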
Once you have calculated the angle subtended by an object, now what do you do? Calculate resolution! But in order to do this, we need to know about the density of the retinal sampling lattice.
There are two crucial stages of sampling that take place in the retina. One is via the photoreceptors (rods and cones) that initially transduce light into voltage and electrochemical signals. The other is via the retinal ganglion cells, which sample the outputs of the photoreceptors (after being processed by the horizontal, bipolar, and amacrine cells). While the density of photoreceptors declines somewhat with eccentricity, the density of ganglion cells falls off even more sharply. There are about 130 million photoreceptors tiling the retina, and this information is summed into only one million ganglion cells. Thus, there is a net convergence or fan-in ratio of about 100:1. The signals conveyed by the one million ganglion cells are the sole statement that the cortex gets to see from the retinal image. Were they to be tiled evenly over the 2D image, they would provide an equivalent resolution of about 1000x1000 pixels for the entire visual field (which isn't much!). Instead, they are arranged so as to obtain extremely high resolution in the fovea - sampling the photoreceptors in a one-to-one ratio - and then falling off with eccentricity, as depicted in Figure 4.
Figure 4: Retinal ganglion cell and photoreceptor sampling lattices, in one-dimension.
The exact manner in which resolution falls off is such that the spacing between retinal ganglion cells (along one dimension) increases linearly with eccentricity, as shown in Figure 5. This relation can be described approximately with the function Δ ≈ 0.01 (E + 1), where Δ is the spacing between adjacent retinal ganglion cells, in degrees, and E is eccentricity, also in degrees. (That this is so is really quite fascinating, from an engineering viewpoint, because it provides scale-invariance. As you look at the center of your hand, the number of ganglion cells falling on each of your fingertips will be about constant, independent of viewing distance. This feature conceivably makes it easier to recognize objects amidst variations in size on the retina.)

Figure 5: Ganglion cell spacing as a function of eccentricity.
Resolution and eccentricity
The number of samples subtending an object is what limits the resolution with which it is perceived. For example, when you look at your thumb just a few inches from your eye, many fine details such as the creases in the skin, hair, and the structure of the fingernail can be perceived. Using the relations above, you should be able to deduce that there are about 50 ganglion cells sampling a 1 mm region of your thumb (along one dimension) when held 5 inches from your eye. That means each cone is subtending about 20 microns! So, any changes in reflectance occurring at that spatial scale or above can be perceived. If you move your thumb a bit away from the center of gaze, it is now being sampled by fewer ganglion cells, each of which is summing over several photoreceptors. Hence, the 20-micron details previously perceived at the center of gaze are effectively neurally blurred and cannot be perceived as a result. With each doubling in eccentricity, the number of samples subtending your thumb (along one dimension) decreases approximately by one-half. By the time you get to 10 degrees eccentricity, only 1/10 the number of ganglion cells will be subtending your thumb, so only details larger than 200 microns will be perceived.
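The back-of-the-envelope calculation in the previous paragraph can be written out as a short Python sketch (again, not from the original handout). It combines the small-angle approximation with the approximate ganglion-cell spacing function given earlier; that linear spacing relation and its constants are approximations, so treat the outputs as rough estimates.

import math

def ganglion_spacing_deg(eccentricity_deg):
    """Approximate spacing (degrees) between adjacent retinal ganglion cells,
    using the linear relation discussed above: spacing grows with eccentricity."""
    return 0.01 * (eccentricity_deg + 1)

def samples_across(size, distance, eccentricity_deg=0.0):
    """Rough number of ganglion cells (along one dimension) spanning an object
    of the given size at the given viewing distance (same units), placed at the
    given retinal eccentricity in degrees."""
    angle_deg = (size / distance) * 180 / math.pi   # small-angle approximation
    return angle_deg / ganglion_spacing_deg(eccentricity_deg)

# A 1 mm patch of thumb held 5 inches (127 mm) away, at the center of gaze:
print(samples_across(1, 127))        # ~45, i.e. roughly 50 samples per millimeter
# The same patch moved out to 10 degrees eccentricity:
print(samples_across(1, 127, 10))    # ~4, about one-tenth as many samples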
In the 1970s, Stuart Anstis measured the minimum size that characters need to be printed in order to be perceived, as a function of their eccentricity on the retina. Interestingly, he found that the minimum size scaled linearly with eccentricity, similar to Figure 5, but with a different slope: roughly, S ≈ 0.04 (E + 1), where S denotes the minimum recognizable size of the character, in degrees. Referring back to our equation for retinal ganglion cell spacing, this implies that a character needs to subtend a region of approximately 4x4 ganglion cells, independent of where it falls on the retina. (Even more interesting is the fact that George Sperling has independently shown that the efficiency of character recognition, in terms of spatial-frequency content, is maximum at just under 2 cycles per object. According to the Nyquist sampling theorem, then, about 4 samples across would be needed to carry this information. Thus, three independent measurements, together with computational theory, are totally consistent!) Whether this can be generalized to other more complicated shapes, such as faces, has yet to be tested rigorously, however.
|
In Computing this week the children have thought carefully about staying safe on the Internet. The children know that the Internet can be used for many things; searching for information, reading stories, listening to music, watching films, searching for images, playing games, shopping, talking to friends.
We talked together about what we thought was important to do to stay safe on the Internet. Here are the things the children came up with:
- Always tell a grown up when using the Internet
- If we hear or see a picture or a word that we don't like, tell an adult straight away
- Never give out any personal information (name, age, telephone number or address)
- Treat people like we would like to be treated.
- If something happens that we are unsure about tell an adult.
The children then all signed an agreement to show that they will always try to use the Internet safely.
|
Common Lisp the Language, 2nd Edition
The function write accepts keyword arguments named :pprint-dispatch, :miser-width, :right-margin, and :lines, corresponding to these variables.
When *print-pretty* is not nil, printing is controlled by the `pprint dispatch table' stored in the variable *print-pprint-dispatch*. The initial value of *print-pprint-dispatch* is implementation-dependent and causes traditional pretty printing of Lisp code. The last section of this chapter explains how the contents of this table can be changed.
A primary goal of pretty printing is to keep the output between a pair of margins. The left margin is set at the column where the output begins. If this cannot be determined, the left margin is set to zero.
When *print-right-margin* is not nil, it specifies the right margin to use when making layout decisions. When *print-right-margin* is nil (the initial value), the right margin is set at the maximum line length that can be displayed by the output stream without wraparound or truncation. If this cannot be determined, the right margin is set to an implementation-dependent value.
To allow for the possibility of variable-width fonts, *print-right-margin* is in units of ems, the width of an ``m'' in the font being used to display characters on the relevant output stream at the moment when the variables are consulted.
If *print-miser-width* is not nil, the pretty printer switches to a compact style of output (called miser style) whenever the width available for printing a substructure is less than or equal to *print-miser-width* ems. The initial value of *print-miser-width* is implementation-dependent.
When given a value other than its initial value of nil, *print-lines* limits the number of output lines produced when something is pretty printed. If an attempt is made to go beyond *print-lines* lines, `` ..'' (a space and two periods) is printed at the end of the last line followed by all of the suffixes (closing delimiters) that are pending to be printed.
(let ((*print-right-margin* 25) (*print-lines* 3)) (pprint '(progn (setq a 1 b 2 c 3 d 4))))
(PROGN (SETQ A 1 B 2 C 3 ..))
(The symbol ``..'' is printed out to ensure that a reader error will occur if the output is later read. A symbol different from ``...'' is used to indicate that a different kind of abbreviation has occurred.)
|
HEARING
Dr. Heppner used very sensitive recording equipment to record the low-frequency sounds made by burrowing earthworms. He found that robins ignored these sounds.

SMELL
Dr. Heppner concluded that robins don't seem to notice the smell of worms at all. He observed: "Robins nonchalantly ate foods smelling like rotten eggs, decaying meats, rancid butter, and the absolutely worst smell of all bad smells: mercaptoacetic acid."
TASTE
Dr. Heppner didn't even consider the possibility that robins use taste to find worms. (Robins capture worms before tasting them, and would have to taste a LOT of dirt to pick out worms using their sense of taste!)

TOUCH
Heppner wondered if robins could feel the vibrations worms make and sense them that way. He drilled worm-like holes in the ground and placed dead worms in them. The robins peeked in the holes, found the dead worms, and ate them readily! Therefore, he concluded that robins do not rely on their sense of touch to hunt worms.
SIGHT (the conclusion!)
Dr. Heppner suspected sight was the most important sense robins use to find worms. He drilled holes that looked exactly like worm holes. Robins ignored the holes UNLESS a worm was inside the hole within visual range. Whether that worm was alive and normal, alive but coated with a bad-smelling odor, or dead, the robins found the worms and ate them. He concluded that sight is the key sense robins use to find earthworms.
|
Today I'll be discussing more activities that promote hand separation. Spinning a top is one suggestion. Tong activities are also great for developing manipulation skills on the thumb, index, and middle finger side of the hand while working on stability on the pinkie and ring finger side. To prevent a child from trying to use the pinkie and ring finger to help with manipulating the tongs, you can always have her hold a cotton ball or other small item with these fingers while using the tongs. This ensures that the pinkie side of the hand is being used for stability!
There are a variety of tongs that you can purchase, or you may already have some of these in your home! You can work on many different concepts with tong activities, such as having the child sort items of the same color, size, or shape. You can also purchase small beads with letters on them and have the child use the tongs to make words out of the letters. A child can stack blocks using tongs and even string beads! Get creative when it comes to using tongs by incorporating them into craft activities or using them to move game pieces! There's no limit to what you can do, and all the while the child is improving hand separation, which can ultimately have a positive impact on fine motor skills, including handwriting!
|
Highlighting can be a great study strategy, especially in the early stages of learning. (It is limited, of course, to worksheets or books students own.) It can help you find information later to review it or make study cards for in-depth study.
Most people think highlighting is easy. However, I have seen many students who don’t understand why they highlight. They also don’t know how. Sometimes, these students highlight almost everything on the page, which defeats the purpose.
If your child does this, here are some steps to take to help him learn how to highlight in a purposeful and meaningful way.
Discuss the following with him:
- Why do we highlight? Lead him to understand that highlighting makes it easy to find the most important information later when he needs it to study. If too much of the page is highlighted, it is no easier to find anything; it might even be harder. Highlighting needs to be used carefully and purposefully.
- What information is important? Discuss possibilities such as the names of new characters in a fiction book, new vocabulary words, or a brand-new concept that seems important. Highlighting can also help when working through a difficult concept like using negative numbers in math. Many students highlight the negative signs when doing algebra because they are important and easy to overlook.
Once your child understands what highlighting is for, the next step is to practice highlighting something specific. For example, when reading a literature book he could highlight the name of new characters introduced in the chapter. Or in a science textbook, he could highlight the vocabulary words (just the word because the definition will probably be nearby). He could highlight key words in the directions given at the beginning of a worksheet. (Circle, solve, check your work, multiply, etc.) Try to find something that is normally difficult for him and use highlighting to make it easier.
I would love to know what study strategy is most helpful for your children. I am always looking for new ideas to try!
More on study skills:
|
with a topic that affects them every day – SOIL! In the videos below, you will get an in-depth look at the world of soil and the important role it plays in our daily lives.
In this virtual field trip, students get a real-world look at the science behind how produce like strawberries, peppers and squash is grown and harvested, and how it goes from soil to your store.
Give students a deeper understanding of core science concepts as they discover the complex world of soils using these unique and engaging digital explorations.
Want to help your fellow teachers bring science to life in their own classroom? Download this simple toolkit to spread the word about From the Ground Up: The Science of Soil.
|
These five factors interact to provide a balance of fuels to supply energy for working muscles. Although the most extensive research on fuel use during exercise has been done on endurance athletes (distance runners, cross country skiers, cyclists, and swimmers), data is also available for athletes in strength and power sports such as ice hockey and for resistance exercise (weight training).
Of all the factors, exercise intensity matters the most in determining which fuel is used for muscle contraction. As described below, carbohydrate is the preferred fuel for high-intensity exercise (performed at over 70% of aerobic capacity, or VO2max).
In this graph, the bread slices represent carbohydrate stores (glucose and glycogen) and the butter pats represent fat stores (fatty acids and triglycerides). At rest and at low exercise intensities (expressed as %VO2max), fat is the predominant fuel. During moderate intensity exercise (you feel the effort but can carry on a conversation while you are moving) fat and carbohydrate contribute equally to fuel muscle contraction. However, as exercise intensity increases to moderately hard or hard (talking while moving is difficult or impossible), a point is reached (ranging from 65-75% VO2max) where carbohydrate becomes the predominant, and then the exclusive, fuel for working muscles. Why? Look below for the answers.
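Purely as an illustration (this is not from the original text), the short Python sketch below encodes the rule of thumb described above: fat dominates at rest and low intensities, fat and carbohydrate contribute roughly equally at moderate intensities, and carbohydrate takes over somewhere around 65-75% of VO2max. The numeric cut-offs, including the assumed 50% boundary between "low" and "moderate", are approximations and vary with training status and diet.

def predominant_fuel(percent_vo2max, crossover=70):
    """Very rough sketch of which fuel predominates at a given exercise
    intensity (% of VO2max). 'crossover' is the assumed point, somewhere
    in the 65-75% range, where carbohydrate becomes the dominant fuel."""
    if percent_vo2max < 50:                 # assumed boundary for "low" intensity
        return "fat (predominant)"
    elif percent_vo2max < crossover:
        return "fat and carbohydrate (roughly equal)"
    else:
        return "carbohydrate (predominant; exclusive at the highest intensities)"

for intensity in (25, 55, 65, 85):
    print(intensity, "% VO2max ->", predominant_fuel(intensity))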
Remember the four major fuels for exercise: muscle glycogen, plasma glucose, muscle triglyceride, and plasma fatty acids. This graph shows how the use of these fuels changes as exercise intensity increases. Subjects for this study (Romijn, 1993) were endurance-trained men who had fasted overnight. These data were derived after the individuals exercised for thirty minutes at the given intensities. Note the following changes in substrate contribution to the total energy supply as exercise intensity increases from 25% to 65% to 85% of VO2max:
|
In physics, superheating (sometimes referred to as boiling retardation or boiling delay) is the phenomenon in which a liquid is heated to a temperature higher than its standard boiling point without actually boiling. This can be caused by rapidly heating a homogeneous substance while leaving it undisturbed (in order to avoid the introduction of bubbles at nucleation sites). Superheated liquids can be stable above their usual boiling point if the pressure is above atmospheric (see superheated water). This article refers only to liquids above their actual boiling point in a metastable state.
With the exception of superheated water below the Earth's crust, a superheated liquid is usually the result of artificial circumstances. Being such, it is metastable, and is disrupted once the circumstances abate, leading to the liquid boiling very suddenly and violently (a steam explosion). Superheating is sometimes a concern with microwave ovens, some of which can quickly heat water without physical disturbance. A person agitating a container full of superheated water by attempting to remove it from a microwave could easily be scalded.
Superheating is common when a person puts an undisturbed cup of water into the microwave and heats it. Once finished, the water appears to have not come to a boil. Once the water is disturbed, it violently comes to a boil. This can be simply from contact with the cup, or the addition of substances like instant coffee or sugar, which could result in hot scalding water shooting out. The chances of superheating are greater with smooth containers, like brand-new glassware that lacks any scratches (scratches can house small pockets of air, which can serve as a nucleation point).
Rotating dishes in modern microwave ovens can also provide enough perturbation to prevent superheating.
There have been some injuries caused by superheated water, such as when a person adds instant coffee to water that has been superheated, which can result in an "explosion" of bubbles. There are some ways to prevent superheating in a microwave oven, like putting a popsicle stick in the glass or boiling the water in a scratched container. Such incidents are very rare, however, and can only happen under certain conditions. A foreign object added to the water prior to heating, whether it be a plastic spoon or a salt cube, greatly diminishes the chance of an explosion because it provides nucleation sites.
Superheating also occurs in nuclear reactors and other types of high-temperature steam generators used for producing electricity, and is guarded against when it leads to corrosion or embrittlement of metal pipes.
Magnetrons, such as those used in microwave ovens, can also superheat steam in steam-power or steam-heating circuits, greatly increasing the steam's thermal capacity. Proposed schemes include powering the magnetron superheating circuit with electricity generated from the waste heat of the main steam circuit, yielding additional heating BTUs for buildings at no additional fuel cost and with no additional fossil fuel pollution.
A common misconception is that superheating can only occur in pure substances. This is untrue, because nucleation points for boiling are not the solid nucleation centres themselves but rather the seed bubbles that form at those centres. In other words, if a substance contains solid nucleation centres (e.g. impure water) but no seed bubbles (e.g. because the water has been left to stand, or has been boiled once to drive the bubbles out), superheating can still occur. Note, however, that nucleation points for freezing do include solid nucleation centres, which is why an impure substance is far less likely to undergo supercooling.
Milk and starchy water do not boil over because of superheating; rather, the boil-over results from extreme foam buildup. This foam is stabilized by substances in the liquids and therefore does not burst.
- Urban Legends Reference Pages: Superheated Microwaved Water
- Beaty, William and U. Washington. "Impure water can also undergo superheating". Retrieved 2007-11-24.
- Distilled Water: Myths - Wikipedia
- Video of superheated water in a microwave explosively flash boiling, why it happens, and why it's dangerous.
- A series of superheated water with oil film experiments done in the microwave by Louis A. Bloomfield, physics professor at the University of Virginia. Experiment #13 proceeds with surprising violence.
- Video of superheated water in a pot.
|
Kentucky and Virginia Resolutions, in U.S. history, resolutions passed in opposition to the Alien and Sedition Acts, which were enacted by the Federalists in 1798. The Jeffersonian Republicans first replied in the Kentucky Resolutions, adopted by the Kentucky legislature in Nov., 1798. Written by Thomas Jefferson himself, they were a severe attack on the Federalists' broad interpretation of the Constitution, which would have extended the powers of the national government over the states. The resolutions declared that the Constitution merely established a compact between the states and that the federal government had no right to exercise powers not specifically delegated to it under the terms of the compact; should the federal government assume such powers, its acts under them would be unauthoritative and therefore void. It was the right of the states and not the federal government to decide as to the constitutionality of such acts. A further resolution, adopted in Feb., 1799, provided a means by which the states could enforce their decisions by formal nullification of the objectionable laws. A similar set of resolutions was adopted in Virginia in Dec., 1798, but these Virginia Resolutions, written by James Madison, were a somewhat milder expression of the strict construction of the Constitution and the compact theory of the Union. The resolutions were submitted to the other states for approval with no real result; their chief importance lies in the fact that they were later considered to be the first notable statements of the states' rights theory of government, a theory that opened the way for the nullification controversy and ultimately for secession.
See E. D. Warfield, The Kentucky Resolutions of 1798 (1887, repr. 1969); J. C. Miller, Crisis in Freedom (1951, repr. 1964).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
Fall is coming down the tracks and the asters and goldenrods are taking over the countryside. The two are part of the sunflower family, formerly the Compositae, now the Asteraceae. The East End of Long Island is rich in aster and goldenrod species, with more than 20 local species combined. In the world of flowering plants, the sunflower family is the richest in species, and one of the reasons for that is the way its different members disperse their seeds.
The common sunflower or common dandelion represents the family well. A single sunflower, yellow on the outside and dark in the middle, is actually a composite of 50 to 100 flowers. The yellow petals around the perimeter are parts of the ray flowers — one yellow petal, or ray, for each floret. The center is made up of disc flowers, each of which produces a seed. The dandelion is a miniature sunflower, but is constructed similarly. We might just as well call sunflowers and other members of the family “superflowers.”
The common sunflower that is so popular at the local garden stands is one of the few members of the family that produces large edible seeds. Of course, to get at the meat, one must break through the test, just as one needs to husk the peanut from the peanut hull in order to eat it. Birds such as chickadees, blue jays, nuthatches, and the like use their beaks and grasping feet to get to the goodies. Humans prefer to buy and eat sunflower seeds already loosed from their shells. In carrying away seeds from the sunflower disc to store or eat perched on a limb, birds drop some along the way. Whether stored or dropped, several such seeds go uneaten and germinate a good distance from the sunflower parent. Such means of spreading one’s kin is not that efficient and, perhaps, why the common sunflower and others in the genus, Helianthus, are few and far apart compared to other members of the family which have seeds that are dispersed by wind.
Asters and goldenrod seeds have a little feathery appendage called a pappus. These pappuses can be in the form of a little parachute, not unlike the feathery appendages of milkweeds, which are not at all closely related to the sunflowers. Such little parachutes like those on the thistles, for example, can waft in a gentle breeze for miles and miles. Since most of these feathery seeds become ripe and are released in the fall when southwesterlies and westerlies prevail, seeds from western Long Island and even from west of New York City can make it all the way to Montauk. It may take several generations or only one to get there from 100 miles away or more. Over the very long haul, the movement of storms from the west, with their attendant westerlies, may account for the relatively greater number of asters and goldenrods here than there.
Cottonwood trees also produce seeds with “sails,” but their seeds are released in the spring. Maple tree seeds come in pairs, attached to each other in a “samara” with a paravane on each side; when they drop from, say, 50 feet up, the samaras spin like the rotors of helicopters and the seeds motor away from the parent. Ashes and lindens also produce seeds with wings, one to a seed, which helps them disperse their kind well beyond their trunks.
Not so with the oaks, hickories, and walnuts. Like apples, they don’t fall far from the tree, which would be a very poor means of dispersal if it weren’t for squirrels, chipmunks, blue jays, and other mammals and birds that harvest the nuts and store them in caches or singly to get them through the winter. A lot of these nuts don’t get eaten and germinate come the following spring. The acorn may not fall far from the tree, but it can be picked up and stored a great distance from the tree. This kind of passive dispersal works better for oaks than for hickories and walnuts, as there are many more oak species in North America than walnut species.
The Rosaceae is one of the largest families after the Asteraceae. It contains the roses, raspberries, strawberries, crabapples, beach plums, cherries, pears, hawthorns, and many other fruit-bearing trees. The fruits the rose family species produce are pulpy and mostly edible, but the seeds in the center are practically indigestible. Just as we stole apples and carried them far away to eat them as kids, spitting out the seeds as we did, the birds and mammals that eat the fruit, defecate the seeds in a myriad of places. Seeds that pass through a digestive track germinate faster than seeds that don’t. Black cherries, also known as wild cherries, perhaps best demonstrate this kind of dispersal.
One can hardly find a wood, heathland, old field, or scrubland without black cherries. Their seeds get around!
The Ericaceae, or heaths, such as huckleberries, blueberries, cranberries, and so on follow closely on the heels of the Rosaceae in terms of edible fruit and dispersal of seeds by birds and mammals. Like the fruit in the rose family, most of these ripen in the late summer or fall, are deposited thereafter, and germinate in the spring. Hollies, catbriars, privets, junipers, and many other trees, shrubs, or groundcover plants don’t have nearly as many species, but their fruits are more persistent than those of members of the rose and heath families. They are still clinging to their perches in the middle of winter while the more tasty ones are long gone and serve as food for over-wintering birds and non-hibernating mammals, such as mockingbirds, house finches, wild turkeys, deer mice, squirrels, and foxes.
It is not by chance that flowering and seed-producing plants evolved in great numbers at about the same time the birds and mammals — the last vertebrates to evolve — were expanding their numbers of species halfway through the Cretaceous period, 80 million years ago. The higher plants and higher animals co-evolved. One fed the other and vice-versa. After all, you are what you eat, and not all of what you eat remains in your body.
|
Using Visuals to Build Interest and Understanding
Teaching history to English Language Learners poses special challenges owing to its conceptual density and assumed cultural knowledge. It seems obvious that ELLS need additional support and materials to understand content, yet many social studies classrooms are ill-stocked in this regard.
Here we outline how visuals can help ELLs build interest and understanding.
Although visuals make excellent learning tools for all students, beginning speakers of English may be able to secure meaning from visual sources they would be incapable of extracting from written sources. By visuals we mean, for example, photographs, graphs, maps, globes, charts, timelines, and Venn diagrams as well as the observation of artifacts and landscapes.
Consider the following images of tenements at the turn of the 20th century. In the case of New York City, the poor, the unskilled, and in many cases, immigrants, found cheap housing in tenements—rental apartment buildings that were often in disrepair or even unsafe.
Large families, or sometimes multiple families, lived in cramped, unhealthy conditions. Infant mortality was high, alcoholism common, and the tenements were a breeding ground for crime. Students can come to understand many of these social phenomena by viewing photographs taken during this time period.
[Yard of tenement, New York, N.Y.], c. 1905, Detroit Publishing Company Photograph Collection, Library of Congress: LC-D4-36489
Examine this photograph of a New York City tenement taken between 1900 and 1910 and use the questions to analyze it. The questions are sequenced into categories that correspond to stages of speech emergence.
The aim is to align linguistic ability with teaching strategies. [See handout.]
New York City Tenement with Laundry: Questioning Strategy
- Where are the buildings in this city? Point to them.
- Point to the laundry (clothes) shown in this photograph.
- Do you see any people in this photograph?
- Was this photo taken during the day or night?
- How many clothes lines are there?
- In which buildings do people live?
- Where is this scene taking place?
- In what year, approximately, was this photograph taken?
- During what time of year was this photo likely taken?
- What do you see in the distance?
- Why are the clothes hanging on lines outside the buildings?
- Would you like to live in one of these buildings?
- How might this scene look differently today?
- How is the laundry connected between buildings? Must the people in one building know the people in the buildings on the other side?
- Compare urban housing from 100 years ago to now. How are they similar or different? [Have students create a Venn Diagram of the similarities and difference.]
- Are there cities like this today?
Tenement apartments were also used for work. Already-crowded apartments often doubled as tenement shops, employing as many as 30 workers.
Examine this image of an Italian family making artificial flowers in a tenement building:
Ask why people would use their living quarters as both dwelling and workplace. Which family members might be more likely to participate in the work?
In conclusion, meaningful historical investigation is an achievable goal for English language learners in the self-contained classroom. While we make no claims that this is a completely painless process for either students or teachers, we have presented ideas on meeting the needs of ELLs while remaining faithful to conceptual learning objectives.
Kathryn Lindholm-Leavy and Graciela Borsato, “Academic Achievement,” in F. Genesee (Ed.), Educating English language learners (New York: Cambridge University Press, 2006), 192.
Cruz & Thornton, "Social Studies for English Language Learners: Teaching Social Studies that Matters," Social Education, in press.
See Cruz & Thornton book, 2009; see also Jennifer Truran Rothwell, "History Making and the Plains Indians," Social Education 61, no. 1 (1996): 4-9.
|
(also known as the Legume Family: Leguminosae)
Key Words: "banner, wings, and keel". Pea-like pods, often with pinnate leaves
If you have seen a pea or bean blossom in the garden, then you will be able to recognize members of the Pea family. These are irregular flowers, with 5 petals forming a distinctive "banner, wings, and keel", as shown in the illustration. The banner is a single petal with two lobes though it looks like two that are fused together. Two more petals form the wings. The remaining two petals make up the keel and are usually fused together. The proportions of the parts may vary from one species to another, but as long as there is clearly a banner, wings and keel, then the plant is a member of the Pea family. Pea-like pods are another distinctive trait of the family.
For practice, look at a head of clover in the lawn. You will see that each head is a cluster of many small Pea flowers, each with its own banner, wings, and keel. As the flowers mature each one forms a tiny pea-like pod. I'll bet you never noticed that before!
The Pea family is very large, with 600 genera and 13,000 species worldwide, all descendants of the very first Pea flower of many millions of years ago. Over time the Peas have adapted to fit many different niches, from lowly clovers on the ground to stately trees that today shade city sidewalks. Families this large often have subgroupings called subfamilies and tribes. It works like this:
The most closely related species are lumped together into a single group or "genus". For example, there are about 300 species of clover in the world. Each one is clearly unique, but each one is also a clover, so they are all lumped together as one genus, Trifolium (meaning 3-parted leaves) and given separate species names such as T. arvense or T. pratense, etc.
If you compare clovers to other members of the Pea family then you will see that they share more in common with alfalfa and sweet clover than with other plants like beans or caragana bushes. Therefore the clover-like plants are lumped together as the Clover tribe while the bean-like plants are lumped together as the Bean tribe, and so forth, for a total of eight tribes. Each of these tribes share the distinctive banner, wings and keel, so they are all lumped together as the Pea subfamily of the Pea family. All Peas across the northern latitudes belong to this group.
As you move south you will encounter more species of the Pea subfamily, plus other plants from the Mimosa and Bird-of-Paradise-Tree or Senna Subfamilies. These groups include mostly trees and shrubs, but also a few herbs. Their flowers do not have the banner, wings, and keel, however most have pinnate leaves, much like the one in the illustration, plus the distinctive pea-like pods that open along two seams. Each subfamily is distinct enough to arguably qualify as a family in its own right, but they still share enough pattern of similarity between them to lump them together as subcategories of a single family.
Overall, the plants of the Pea family range from being barely edible to barely poisonous. Some species do contain toxic alkaloids, especially in the seed coats. Many people are familiar with the story of Christopher McCandless who trekked into the Alaska wilderness in 1992 and was found dead four months later. He had been eating the roots of Hedysarum alpinum, and assumed the seeds were edible too, so he gathered and ate a large quantity of them over a two-week period. The seeds, however, contained the same toxic alkaloid found in locoweed, which inhibits an enzyme necessary for metabolism in mammals. It is now believed that McCandless was still eating, but starved to death because his body was unable to utilize the food. Even the common garden pea can lead to depression and nervous disorders with excess consumption. So it is possible to poison yourself with members of this family, but it takes some effort.
Pea Family / Pea Subfamily
Try to identify the banner, wings and keel in each of the pictures below.
Broom Tribe | Golden Pea Tribe | Licorice Tribe | Clover Tribe | Trefoil Tribe |
Mimosa Subfamily | Bird-of-Paradise Tree Subfamily
Please e-mail Thomas J. Elpel to report mistakes or to inquire about purchasing high resolution photos of these plants.
|
TIME for Kids: Frogs!
by the Editors of TIME for Kids with Kathryn Hoffman
When working with an informational text in the classroom, it's helpful to use a KWL chart to introduce the topic, access students' prior knowledge, and review what they have learned.
Begin by asking the students what they already know about frogs. Record this information in the "K" section of the chart.
Discuss with the students what types of things they want to learn about them. Record this in the "W" section of the chart. Knowing what the students want to learn can help you guide them through the book and focus on the information they will find most interesting.
At the very end, come back to the KWL chart. Give each student a post-it note and encourage them to write down at least one fact they learned by reading the book. When they are finished, the students can share their fact and stick the post-it in the "L" section of the chart.
A Frog's Life
From egg to tadpole to froglet to frog, and then back to egg again. A frog goes through many changes within its short life. Track the life of a frog in photographs with your students and make observations at every stage. If possible, bring in an aquarium with several tadpoles to your classroom. Have your students observe and record the tadpoles’ development in their science journals.
Pretty, But Poisonous
Many of the most brightly colored frogs are the most poisonous. Does this hold true for other species of animals as well? Discuss with your students why poisonous creatures tend to stand out so much. Brainstorm as many animals as possible, then chart the poisonous versus the non-poisonous.
- What are some facts that you learned about frogs?
- Write a story describing what it would be like if you were a frog.
Slippery Slimy Baby Frogs
Written by Sandra Markle
An introduction to baby frogs from around the world.
by Nic Bishop
Nic Bishop's photographs show all different kinds of frogs, big ones, very tiny ones, frogs with beautiful colors of skin, and one frog you can see inside of.
Face to Face with Frogs
by Mark W. Moffett
Learn about the diverse world of frogs; from metamorphosis to diet, from habitat to distinctive features.
|
What are Reticulocytes?
- 1 What are Reticulocytes?
- 2 What is Reticulocyte Count?
- 3 Necessity for Reticulocyte Count
- 4 Preparation
- 5 Drawing of Blood
- 6 How it Feels?
- 7 Risks Involved
- 8 Meaning of Test Results
- 9 What Can Affect Test Results?
- 10 Interpretation of Test Results
Reticulocytes are red blood cells that are not yet mature. These immature red blood cells are made in the bone marrow and then released into the bloodstream. They circulate in the bloodstream and take two days to become mature red blood cells. Red blood cells are crucial in transporting oxygen around the body, as they contain haemoglobin, a compound that readily combines with oxygen, which the cells then release throughout the body.
What is Reticulocyte Count?
Reticulocytes usually make up one percent of all red blood cells in the bloodstream at any one time. A reticulocyte count is performed to find out the number (or percentage) of immature red blood cells compared to mature ones in the bloodstream. This can be used to determine the rate at which the bone marrow produces reticulocytes.
Reticulocyte counts used to be carried out manually, by using a microscope to inspect a specially stained slide and counting the number of reticulocytes in the field of view. This method has been replaced by a more accurate automated method, in which an instrument called a hematology analyzer is used.1
Necessity for Reticulocyte Count
- If abnormal results for a complete blood count or hematocrit are obtained, then a reticulocyte count may be ordered to determine the cause
- Doctors usually order a reticulocyte count for patients to diagnose anemia. The count helps determine if anemia is caused by inadequate production of reticulocytes or loss of mature red blood cells.
- Reticulocyte tests can also be used to monitor effectiveness of treatment for various conditions e.g. anemia.
- Reticulocyte counts are also helpful in diagnosing bone marrow disorders or monitor the effectiveness of a bone marrow transplant.2,3
Preparation
No preparation is needed for a reticulocyte count, though it is advisable to wear a short-sleeved shirt to give medical professionals easy access when drawing blood. The doctor may, however, ask you to fast or to stop taking certain medications (e.g. blood thinners) for a certain period before the test.1
Drawing of Blood
- A medical professional wraps a tourniquet (elastic band) around the upper arm, restricting the flow of blood. The tourniquet applies pressure, causing the veins below the band to swell with blood so it is easier to insert a needle.
- A vein usually on the back of the hand or on the inner side of the arm near the elbow is chosen.
- They then clean a section of the skin around the intended site of puncture using antiseptic or alcohol to kill any pathogens around.
- Blood is then drawn from the vein by inserting a butterfly needle and draining blood into a syringe or vial.
- Once drawing of blood is complete the tourniquet is removed.
- As the needle is removed a cotton ball or gauze pad is placed over the puncture site.
- Pressure is applied to the cotton or gauze to speed up the clotting of blood.
For younger children and infants, the doctor may opt to make a small cut on the skin instead of using a needle. A blood sample is collected using a slide or test slip after the cut begins to bleed. The area is then cleaned and bandaged if necessary.1,2,4
How it Feels?
Drawing blood for the reticulocyte count is a relatively painless experience. There is temporary discomfort that feels like a pinprick or sting when the needle is being inserted. Drawing blood only takes a few minutes and does not cause pain. The puncture site might have mild bruising that clears away in a few days depending on the person’s sensitivity.
Risks Involved
Collecting a blood sample for a reticulocyte count is not a dangerous procedure, but some complications may arise:
- Difficulty locating a vein may cause pain due to multiple puncture attempts.
- Some people suffer a feeling of faintness or lightheadedness.
- Formation of a hematoma (blood accumulating under the skin) that causes a bruise or lump.
- Phlebitis occurs in a few cases. This is inflammation of the vein after a blood sample has been drawn.
- Continuous bleeding may affect people with blood clotting problems or those taking blood thinners like Warfarin or Aspirin.
- An infection may develop at the puncture site.
Meaning of Test Results
The reticulocyte count is expressed as a percentage: the number of reticulocytes compared to the total number of red blood cells. The range of results varies from lab to lab, but approximate normal ranges are 3% to 6% for infants and 0.5% to 1.5% for adults.2,5
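As a simple illustration (not part of the original article), the Python sketch below computes a reticulocyte percentage from raw cell counts and compares it against the approximate normal ranges quoted above. The function names and the example counts are made up for the example, and real laboratories use their own reference ranges.

def reticulocyte_percentage(reticulocytes, total_red_cells):
    """Reticulocytes as a percentage of all red blood cells counted."""
    return 100.0 * reticulocytes / total_red_cells

def interpret(percent, is_infant=False):
    """Compare a reticulocyte percentage with the approximate normal ranges
    quoted above (infants about 3-6%, adults about 0.5-1.5%)."""
    low, high = (3.0, 6.0) if is_infant else (0.5, 1.5)
    if percent < low:
        return "below the normal range"
    if percent > high:
        return "above the normal range"
    return "within the normal range"

# Example: 12 reticulocytes counted per 1000 red blood cells in an adult sample.
pct = reticulocyte_percentage(12, 1000)
print(f"{pct:.1f}% - {interpret(pct)}")   # 1.2% - within the normal range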
High (Above Normal)
A high count indicates that the bone marrow is producing an unusually large number of reticulocytes.
Conditions of High Reticulocyte Count
- Diseases that cause premature haemolysis (destruction) of red blood cells e.g. hemolytic anemia.
- Excessive loss of blood.
- High altitude raises reticulocyte count to help the body cope with lower oxygen concentration high above sea level.
- Presence of a tumor causing excess erythropoietin
- Polycythemia vera
Low (Below Normal)
Low count indicates that an insufficient number of reticulocytes are being produced by the bone marrow.
Conditions of Low Reticulocyte Count
Reticulocyte count falls below normal under the following conditions:
- Deficiency of folic acid, Vitamin B-12 or iron.
- Some types of anemia e.g. pernicious anemia, iron deficiency anemia or aplastic anemia.
- Exposure to harmful levels of radiation, either through therapy or as an occupational hazard.
- Damage of the bone marrow by some types of medicine or long term infection.
- Chronic or long term alcoholism.
- Chronic or advanced kidney disease.
- Endocrine disease.
What Can Affect Test Results?
- A blood transfusion that was carried out less than three months before the reticulocyte count can affect the results. One should inform the doctor if they have had a recent blood transfusion.
- Some medicine and treatment options for some illnesses e.g. malaria, Parkinson’s disease, chemotherapy and rheumatoid arthritis affect the reticulocyte count.
- Pregnancy also affects the reticulocyte count.2,3
Interpretation of Test Results
Other tests can be used to supplement a reticulocyte count when further evaluation of a condition is required; these include:
- Reticulocyte index
- Reticulocyte production index
- Immature reticulocyte fraction (reticulocyte maturity index)
- Bone marrow aspiration and biopsy
A reticulocyte count gives clues as to the disease affecting a patient but does not directly diagnose any particular condition. Further tests are required before a final diagnosis can be made; such tests may include:
- Iron Studies 3,4,5
- Blood Test: Reticulocyte Count. Kid’s Health – http://kidshealth.org/en/parents/reticulocyte.html?WT.ac
- Reticulocyte Count. WebMD – http://webmd.com/a-to-z-guides/reticulocyte-count
- Reticulocytes. Lab Tests Online – https://labtestsonline.org/understanding/analytes/reticulocyte/tab/test/
- Reticulocyte Count: Purpose, Procedure, and Results. Healthline – http://healthline.com/health/reticulocyte-count
- Reticulocyte Count and Reticulocyte Haemoglobin Content. Medscape – http://emedicine.medscape.com/article/2086146-overview
|
Lucy's Warbler is unique in the following ways:
- It is our smallest warbler
- It is our only desert warbler
- It is a cavity nester (one of only 2 warblers that do this -- the other being the Prothonotary)
- It is our least colorful warbler -- plumaged in pale grey and white
These distinctive warblers were seen by the dam in Sabino Canyon in the Coronado National Forest.
Bell's Vireo is named after John Graham Bell. Like other vireos, the Bell's also shows the yellow and green hues which are typical of the family.
Bell's Vireo is classified as "Near Threatened" after a staggering decline of two-thirds of its population over the last four decades. Loss of habitat has been the main contributing factor. Recent conservation efforts have helped this species recover; however, the Californian subspecies remains classified as "Endangered".
|
Mochica, ancient Native American civilization on the coast of N Peru. Previously called Early Chimu (see Chimu), the Mochica were warriors with a highly developed social and political organization. They built temples, pyramids, and aqueducts of adobe brick, were skilled in irrigation, and produced remarkable ceramics. In their stirrup jars, painted with scenes of everyday life, and their figure-modeled portrait jars they revealed fantasy and humor and achieved an astonishing fidelity to human forms. The civilization, which began c.100 B.C., is believed to have lasted 1,000 years.
The Columbia Electronic Encyclopedia Copyright © 2004.
Licensed from Columbia University Press
|
Tracheitis is the inflammation of the windpipe (trachea). Most cases of tracheitis are due to a bacterial infection, however, a number of other factors, both infectious and non-infectious, may also cause inflammation of the trachea. Usually these other factors do not affect the trachea in isolation but may also involve other structures higher up and lower down the respiratory tract.
Causes of Tracheitis
A number of different bacterial species are responsible for the majority of tracheitis cases. Frequently this occurs as a secondary bacterial infection that follows a viral respiratory tract infection like influenza (seasonal flu) or H1N1 swine flu. The common cold can also lead to tracheitis but this type of viral infection is usually isolated to the upper parts of the respiratory system.
Some of the bacteria that may be involved include streptococcal species like S. pyogenes and S. pneumoniae, as well as Staphylococcus aureus and Haemophilus influenzae. Less frequently, other bacteria like the Klebsiella species may be involved. With the increased risk of superbugs like MRSA (methicillin-resistant Staphylococcus aureus) and more recently the NDM-1 (New Delhi metallo-beta-lactamase-1) Klebsiella pneumoniae, bacterial tracheitis can be difficult to treat depending on the causative organism.
Other infections like pertussis (whooping cough) are more likely to cause an upper respiratory tract infection, but a lower tract infection is also a possibility. Pertussis is caused by the bacterium Bordetella pertussis. Viral infections like croup, which is caused by a number of viral species, particularly the parainfluenza viruses, may cause inflammation of the larynx and trachea (laryngotracheitis) or of the larynx, trachea and bronchi (laryngotracheobronchitis).
Signs and Symptoms of Tracheitis
- Retrosternal pain or discomfort (breastbone pain)
- Dry, painful cough – deep and bark-like in nature. May become a productive cough later with blood-stained mucus.
- Dysphonia (hoarse voice)
- Stridor (abnormal breathing sound)
- Sore throat
- Painful swallowing (odynophagia)
- Cyanosis, flaring of nostrils, difficulty breathing (dyspnea) and rapid, short breaths are a sign of respiratory distress that requires immediate medical attention.
|
Hypoxia refers to low oxygen conditions. Normally, 20.9% of the gas in the atmosphere is oxygen, so the partial pressure of oxygen in the atmosphere is 20.9% of the total barometric pressure. In water, however, oxygen levels are much lower, approximately 1%, and fluctuate locally depending on the presence of photosynthetic organisms and the relative distance to the surface (if there is more oxygen in the air, it will diffuse across the partial pressure gradient).
Atmospheric hypoxia occurs naturally at high altitudes. Total atmospheric pressure decreases as altitude increases, causing a lower partial pressure of oxygen which is defined as hypobaric hypoxia. Oxygen remains at 20.9% of the total gas mixture, differing from hypoxic hypoxia, where the percentage of oxygen in the air (or blood) is decreased. This is common, for example, in the sealed burrows of some subterranean animals, such as blesmols. Atmospheric hypoxia is also the basis of altitude training which is a standard part of training for elite athletes. Several companies mimic hypoxia using normobaric artificial atmosphere.
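As a rough illustration of hypobaric hypoxia, the short Python sketch below multiplies the constant 20.9% oxygen fraction by the total barometric pressure. The pressure figures are approximate, and the roughly halved pressure used for the high-altitude case is an assumption made for the sake of the example.

```python
O2_FRACTION = 0.209  # oxygen stays ~20.9% of the gas mixture at any altitude

def partial_pressure_o2(barometric_pressure_kpa):
    """Partial pressure of oxygen = O2 fraction x total barometric pressure."""
    return O2_FRACTION * barometric_pressure_kpa

# Sea level (~101.3 kPa) versus roughly half that pressure at high altitude.
print(round(partial_pressure_o2(101.3), 1))  # ~21.2 kPa
print(round(partial_pressure_o2(50.0), 1))   # ~10.5 kPa -- hypobaric hypoxia
```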
Oxygen depletion is a phenomenon that occurs in aquatic environments as dissolved oxygen (DO; molecular oxygen dissolved in the water) becomes reduced in concentration to a point where it becomes detrimental to aquatic organisms living in the system. Dissolved oxygen is typically expressed as a percentage of the oxygen that would dissolve in the water at the prevailing temperature and salinity (both of which affect the solubility of oxygen in water; see oxygen saturation and underwater). An aquatic system lacking dissolved oxygen (0% saturation) is termed anaerobic, reducing, or anoxic; a system with low concentration—in the range between 1 and 30% saturation—is called hypoxic or dysoxic. Most fish cannot live below 30% saturation. A "healthy" aquatic environment should seldom experience less than 80%. The exaerobic zone is found at the boundary of anoxic and hypoxic zones.
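The saturation bands described above can be expressed as a simple classifier. The sketch below uses only the thresholds quoted in this paragraph (0% anoxic, 1–30% hypoxic/dysoxic, and ~80% as the level a "healthy" system should seldom drop below) purely for illustration.

```python
def classify_do_saturation(saturation_percent):
    """Label a dissolved-oxygen saturation value using the bands described above."""
    if saturation_percent <= 0:
        return "anoxic (0% saturation)"
    if saturation_percent <= 30:
        return "hypoxic/dysoxic (1-30% saturation)"
    if saturation_percent < 80:
        return "below the ~80% a 'healthy' system should seldom drop under"
    return "well oxygenated"

print(classify_do_saturation(15))   # hypoxic/dysoxic
print(classify_do_saturation(85))   # well oxygenated
```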
Hypoxia can occur throughout the water column, as well as at high altitudes and near sediments on the bottom. It usually extends through 20-50% of the water column, but depending on the water depth and the location of pycnoclines (rapid changes in water density with depth), it can occur in 10-80% of the water column. For example, in a 10-meter water column it can reach up to 2 meters below the surface; in a 20-meter water column it can extend up to 8 meters below the surface.
Causes of hypoxia
Oxygen depletion can result from a number of natural factors, but is most often a concern as a consequence of pollution and eutrophication in which plant nutrients enter a river, lake, or ocean, and phytoplankton blooms are encouraged. While phytoplankton, through photosynthesis, will raise DO saturation during daylight hours, the dense population of a bloom reduces DO saturation during the night by respiration. When phytoplankton cells die, they sink towards the bottom and are decomposed by bacteria, a process that further reduces DO in the water column. If oxygen depletion progresses to hypoxia, fish kills can occur and invertebrates like worms and clams on the bottom may be killed as well.
Hypoxia may also occur in the absence of pollutants. In estuaries, for example, because freshwater flowing from a river into the sea is less dense than salt water, stratification in the water column can result. Vertical mixing between the water bodies is therefore reduced, restricting the supply of oxygen from the surface waters to the more saline bottom waters. The oxygen concentration in the bottom layer may then become low enough for hypoxia to occur. Areas particularly prone to this include shallow waters of semi-enclosed water bodies such as the Waddenzee or the Gulf of Mexico, where land run-off is substantial. In these areas a so-called "dead zone" can be created. Low dissolved oxygen conditions are often seasonal, as is the case in Hood Canal and areas of Puget Sound, in Washington State. The World Resources Institute has identified 375 hypoxic coastal zones around the world, concentrated in coastal areas in Western Europe, the Eastern and Southern coasts of the US, and East Asia, particularly in Japan.
Hypoxia may also be the explanation for periodic phenomena such as the Mobile Bay jubilee, where aquatic life suddenly rushes to the shallows, perhaps trying to escape oxygen-depleted water. Recent widespread shellfish kills near the coasts of Oregon and Washington are also blamed on cyclic dead zone ecology.
To combat hypoxia, it is essential to reduce the amount of land-derived nutrients reaching rivers in runoff. This can be done by improving sewage treatment and by reducing the amount of fertilizers leaching into the rivers. Alternately, this can be done by restoring natural environments along a river; marshes are particularly effective in reducing the amount of phosphorus and nitrogen (nutrients) in water. Other natural habitat-based solutions include restoration of shellfish populations, such as oysters. Oyster reefs remove nitrogen from the water column and filter out suspended solids, subsequently reducing the likelihood or extent of harmful algal blooms or anoxic conditions. Foundational work toward the idea of improving marine water quality through shellfish cultivation was conducted by Odd Lindahl et al., using mussels in Sweden.
Technological solutions are also possible, such as that used in the redeveloped Salford Docks area of the Manchester Ship Canal in England, where years of runoff from sewers and roads had accumulated in the slow running waters. In 2001 a compressed air injection system was introduced, which raised the oxygen levels in the water by up to 300%. The resulting improvement in water quality led to an increase in the number of invertebrate species, such as freshwater shrimp, to more than 30. Spawning and growth rates of fish species such as roach and perch also increased to such an extent that they are now amongst the highest in England.
Oxygen saturation can drop to zero in a very short time when offshore winds drive surface water out and anoxic deep water rises up. At the same time, a decline in temperature and a rise in salinity are observed (from the long-term ecological observatory in the seas at Kiel Fjord, Germany). New approaches to long-term monitoring of the oxygen regime in the ocean observe online the behavior of fish and zooplankton, which changes drastically under reduced oxygen saturation (ecoSCOPE), even at very low levels of water pollution.
- Algal blooms
- Anoxic event
- Dead zone (ecology)
- Hypoxia in fish
- Ocean deoxygenation
- Oxygen minimum zone
- Brandon, John. "The Atmosphere, Pressure and Forces". Meteorology. Pilot Friend. Retrieved 21 December 2012.
- "Dissolved Oxygen". Water Quality. Water on the Web. Retrieved 21 December 2012.
- Roper, T.J.; et al. (2001). "Environmental conditions in burrows of two species of African mole-rat, Georychus capensis and Cryptomys damarensis". Journal of Zoology. 254 (1): 101–107. doi:10.1017/S0952836901000590.
- Rabalais, Nancy; Turner, R. Eugene; Justic´, Dubravko; Dortch, Quay; Wiseman, William J. Jr. Characterization of Hypoxia: Topic 1 Report for the Integrated Assessment on Hypoxia in the Gulf of Mexico. Ch. 3. NOAA Coastal Ocean Program, Decision Analysis Series No. 15. May 1999. < http://oceanservice.noaa.gov/products/hypox_t1final.pdf>. Retrieved February 11, 2009.
- Encyclopedia of Puget Sound: Hypoxia http://www.eopugetsound.org/science-review/section-4-dissolved-oxygen-hypoxia
- Selman, Mindy (2007) Eutrophication: An Overview of Status, Trends, Policies, and Strategies. World Resources Institute.
- oregonstate.edu – Dead Zone Causing a Wave of Death Off Oregon Coast (8/9/2006)
- Kroeger, Timm (2012) Dollars and Sense: Economic Benefits and Impacts from two Oyster Reef Restoration Projects in the Northern Gulf of Mexico. TNC Report.
- Lindahl, O.; Hart, R.; Hernroth, B.; Kollberg, S.; Loo, L. O.; Olrog, L.; Rehnstam-Holm, A. S.; Svensson, J.; Svensson, S.; Syversen, U. (2005). "Improving marine water quality by mussel farming: A profitable solution for Swedish society". Ambio. 34 (2): 131–138. doi:10.1579/0044-7447-34.2.131. PMID 15865310.
- Hindle, P.(1998) (2003-08-21). "Exploring Greater Manchester — a fieldwork guide: The fluvioglacial gravel ridges of Salford and flooding on the River Irwell" (pdf). Manchester Geographical Society. Retrieved 2007-12-11. p.13
- Kils, U., U. Waller, and P. Fischer (1989). "The Fish Kill of the Autumn 1988 in Kiel Bay". International Council for the Exploration of the Sea. C M 1989/L:14.
- Fischer P.; U. Kils (1990). "In situ Investigations on Respiration and Behaviour of Stickleback Gasterosteus aculeatus and the Eelpout Zoaraes viviparus During Low Oxygen Stress". International Council for the Exploration of the Sea. C M 1990/F:23.
- Fischer P.; K. Rademacher; U. Kils (1992). "In situ investigations on the respiration and behaviour of the eelpout Zoarces viviparus under short term hypoxia". Mar Ecol Prog Ser. 88: 181–184. doi:10.3354/meps088181.
|
BMI is a commonly used term in healthcare and fitness. BMI stands for body mass index. It came into use in the mid-19th century to identify an abnormal weight and height proportion in people. It is calculated as one’s weight in kilograms divided by the square of height in meters. Adolphe Quetelet was a mathematician who developed the BMI measurement. He came up with the measurement because he was searching for a way to study the height and weight of populations (Zierle-Ghosh, 2022).
Today BMI is often used as a screening method for being overweight or obese. When you go to a doctor’s appointment they usually record your height and weight to calculate your BMI. Healthcare professionals are educated to use BMI levels as an indication of health but the overall patient’s assessment and history must also be considered.
BMI is a useful tool because it is simple, does not cost much money, and is noninvasive. Whereas other tests are much more expensive, not easily accessible to a primary care office, and/or are difficult to standardize. The CDC says that healthcare practitioners need to consider many factors when testing BMI and it should not be the only measurement considered regarding health.
The BMI of adults who are 20 years old or greater is assessed using the same scale. The Centers for Disease Control’s BMI categories are as follows: a BMI below 18.5 is considered underweight; a BMI of 18.5 to 24.9 is considered normal; a BMI of 25 to 29.9 is considered overweight; and a BMI of 30 or greater is considered obese.
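A minimal Python sketch of the calculation and the CDC adult categories listed above; the sample weight and height are made up for illustration.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def cdc_category(bmi_value):
    """Map a BMI value to the CDC adult categories listed above."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal"
    if bmi_value < 30:
        return "overweight"
    return "obese"

value = bmi(82, 1.75)                         # example adult: 82 kg, 1.75 m
print(round(value, 1), cdc_category(value))   # 26.8 overweight
```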
Many people perceive BMI as being a test of body fat but it actually measures excess weight. Studies have shown that BMI levels are linked to future health risks but it is not the only factor to consider. The CDC considers BMI an appropriate measure for obesity screening and health risks related to obesity. Other factors such as age, sex, ethnicity, and muscle mass must be considered. For example, a bodybuilder will weigh more and their BMI may put them in the overweight or obese category, even though the bodybuilder’s extra weight is due to muscle mass and not excess fat. This is just one example of why a patient’s whole assessment and history need to be considered.
Research studies indicate that higher BMIs are linked with obesity-related health problems. BMI should be used as an initial screening for adults but it needs to be considered along with other factors such as fitness level, genetics, age, and fat distribution. BMI testing is not perfect and can be skewed. BMI is not supposed to be used as a diagnostic tool, it is just a screening tool. It does not measure body fat directly.
According to Dr. Robert Shmerling at Harvard Health Publishing, “In general, the higher your BMI, the higher the risk of developing a range of conditions linked with excess weight including diabetes, arthritis, liver disease, cancer, hypertension, high cholesterol, and sleep apnea”. He believes BMI can be useful but it is not a perfect measurement of health. It is good for identifying someone who is at risk for certain conditions based on weight, but there are other factors that need to be considered (Shmerling, 2020).
BMI is a useful tool to measure excess body weight but not excess fat. It should not be the only indicator of health, because other factors must be considered when assessing a patient’s health, such as age, fitness level, genetics, and fat distribution. Results of the BMI screening can be skewed by various factors. In general, people with higher BMIs are at greater risk of obesity-related diseases.
Centers for Disease Control and Prevention. (2021, August 27). About adult BMI. Centers for Disease Control and Prevention. Retrieved March 1, 2022, from https://www.cdc.gov/healthyweight/assessing/bmi/adult_bmi/index.html
Body mass index: Considerations for Practitioners. Centers for Disease Control. (n.d.). Retrieved February 28, 2022, from https://www.cdc.gov/obesity/downloads/BMIforPactitioners.pdf
Shmerling, R. H. (2020, June 22). How useful is the body mass index (BMI)? Harvard Health. Retrieved February 28, 2022, from https://www.health.harvard.edu/blog/how-useful-is-the-body-mass-index-bmi-201603309339
Zierle-Ghosh A, Jan A. Physiology, Body Mass Index. [Updated 2021 Jul 22]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK535456/
|
Plague, caused by Yersinia pestis, is an acute and sometimes fatal bacterial zoonosis transmitted primarily by the fleas of rats and other rodents. Enzootic foci of sylvatic plague exist in the western USA and throughout the world, including Eurasia, Africa, and North and South America. In addition to rodents, other mammalian species that have been naturally infected with Y pestis include lagomorphs, felids, canids, mustelids, and some ungulates. Domestic cats and dogs have been known to develop plague from oral mucous membrane exposure to infected rodent tissues, typically when they are allowed to roam and hunt in enzootic areas. Birds and other nonmammalian vertebrates appear to be resistant to plague. On average, 10 human plague cases are reported each year in the USA; most are from New Mexico, California, Colorado, and Arizona. Most human cases result from the bite of an infected flea, although direct contact with infected wild rabbits, rodents, and occasionally other wildlife and exposure to infected domestic cats are also risk factors.
Y pestis is a gram-negative, nonmotile, coccobacillus belonging to the Enterobacteriaceae family. It exhibits a bipolar staining, “safety pin” appearance when stained with Wright, Giemsa, or Wayson stains. Y pestis grows slowly even at optimal temperatures (28°C [82.4°F]) and can require ≥48 hr to produce colonies. Several types of media can be used to grow Y pestis, including blood agar, nutrient broth, and unenriched agar. Colonies are small (1–2 mm), gray, nonmucoid, and have a characteristic “hammered copper” appearance. Different virulence factors are expressed by the organism at different temperatures and environments, allowing the organism to survive in flea vectors and then be transmitted to and multiply in mammalian hosts. The organism does not survive for long at high temperatures or in dry environments.
Epidemiology and Transmission:
Y pestis is maintained in the environment in a natural cycle between susceptible rodent species and their associated fleas. Commonly affected rodent species include ground squirrels (Spermophilus spp) and wood rats (Neotoma spp). Cats and dogs are usually exposed to Y pestis by mucous membrane contact with secretions or tissues of an infected rodent or rabbit or by the bite of an infected flea. People are usually exposed by an infected flea bite but are sometimes exposed due to contact with infected animals or via respiratory droplet transmission from pneumonic cases. Risk factors for cats acquiring plague include hunting and eating rodents and rabbits, visiting an enzootic plague area, finding dead rodents around the yard or areas that the animal frequents, and exposure to infected fleas. Plague epizootics cause nearly 100% mortality in affected wild rodent and rabbit populations. Once their host has died, Y pestis–infected rodent and rabbit fleas will seek other hosts, including cats and dogs, and potentially be transported into homes. Rodent and rabbit flea species are different from dog and cat fleas (Ctenocephalides spp), although most veterinarians and pet owners will not be able to visually distinguish flea species. Dog and cat fleas are rare in most plague-enzootic areas of the western USA; therefore, fleas on pets in these areas may be more likely to be fleas from wildlife, including rodents or rabbits.
Fleas become infected with Y pestis when feeding on a bacteremic mammal. It has been thought that most flea transmission of plague occurs when the bacteria multiply and block the flea’s digestive tract, preventing it from digesting subsequent blood meals; the flea then regurgitates the plague bacteria and inoculates the host on which it is attempting to feed. Recent experiments have shown that some species of unblocked fleas are better transmitters of plague. These fleas became infectious within a day of feeding and remained infectious for ≥4 days, also inoculating the host on which they are feeding with plague bacteria that have been multiplying within the flea's upper GI tract. In mammalian hosts, plague presents clinically in one of three forms: bubonic, septicemic, or pneumonic. After inoculation into the skin by a flea bite or into mucous membranes by contact with infectious secretions or tissues, the bacteria travel via lymphatic vessels to regional lymph nodes. These infected lymph nodes are called buboes, the typical lesion of bubonic plague.
Secondary septicemic plague can develop when the organism spreads from the affected lymph nodes via the bloodstream but can also occur without prior lymphadenopathy (primary septicemic plague) and affect numerous organs, including the spleen, liver, heart, and lungs. Pneumonic plague can develop from inadequately treated septicemic plague (secondary pneumonic plague) or from infectious respiratory droplets (primary pneumonic plague), typically from a coughing pneumonic plague patient (animal or human).
Clinical Findings and Lesions:
The clinical presentation of plague in cats is most commonly bubonic plague. The incubation period ranges from 1–4 days. Cats with bubonic plague typically present with fever, anorexia, lethargy, and an enlarged lymph node that may be abscessed and draining. Oral and lingual ulcers, skin abscesses, ocular discharge, diarrhea, vomiting, and cellulitis have also been documented. A retrospective review of 119 naturally infected cats found that 53% of cats had bubonic plague; of those, 75% had submandibular lymphadenopathy, with bilateral enlargement in ~⅓ of cases. Affected lymph nodes show necrosuppurative inflammation, edema, and hemorrhages and contain numerous Y pestis organisms. In experimentally infected cats, fever was as high as 106°F (41°C), peaking ~3 days after exposure; mortality was as high as 60% in untreated cats. Ten of 16 (62.5%) cats exposed orally developed enlarged lymph nodes in the medial retropharyngeal, submandibular, sublingual, and tonsillar regions, palpable 4–6 days after exposure. Y pestis was isolated from the throats of 15 of these cats. In 6 subcutaneously exposed cats (mimicking a flea bite), none had palpably enlarged lymph nodes in the head or neck region, but four had subcutaneous abscesses at the inoculation site.
Cats with primary septicemic plague have no obvious lymphadenopathy but present with fever, lethargy, and anorexia. Septic signs may also include diarrhea, vomiting, tachycardia, weak pulse, prolonged capillary refill time, disseminated intravascular coagulopathy, and respiratory distress. Primary pneumonic plague has not been documented in cats. Cats with secondary pneumonic plague may present with all the signs of septicemic plague along with a cough and other abnormal lung sounds. Characteristic necropsy findings can include livers that are pale with light-colored necrotic nodules, enlarged spleens with necrotic nodules, and lungs with diffuse interstitial pneumonia, focal congestion, hemorrhages, and necrotic foci.
Dogs infected with plague are less likely to develop clinical illness than cats, although cases have been seen in enzootic areas. Symptomatic plague infection has been documented in three naturally infected dogs; clinical signs included fever, lethargy, submandibular lymphadenopathy, a purulent intermandibular lesion, oral cavity lesions, and cough.
Cattle, horses, sheep, and pigs are not known to develop symptomatic illness from plague, whereas clinical illness has been documented in goats, camels, mule deer, pronghorn antelope, nonhuman primates, and a llama. Infected mountain lions and bobcats have shown clinical signs and mortality similar to those of domestic cats.
Plague must be differentiated from other bacterial infections, including tularemia (see Tularemia), abscesses due to wounds (cat fight bites), and staphylococcal and streptococcal infections. During acute illness, preferred antemortem samples for culture include whole blood, lymph node aspirates, swabs from draining lesions, and oropharyngeal swabs from cats with oral lesions or pneumonia. Diagnostic samples should be taken before antibiotics are administered. Y pestis cultures can take 48 hr for visible growth to develop. An air-dried glass slide smear of a bubo aspirate can be used for a fluorescent antibody test that detects the F1 antigen on Y pestis cells. This test can be performed in a matter of hours in an experienced laboratory and is both sensitive and specific.
Postmortem specimens should include samples of liver, spleen, and lung (for pneumonic cases) and affected lymph nodes. In areas where tularemia is also present, samples should be collected under a biosafety hood, or the entire animal submitted to a veterinary diagnostic laboratory where aerosol precautions can be implemented. Serologic antibody tests can be confirmatory but require acute and convalescent samples taken 2–3 wk apart, demonstrating a 4-fold rise in antibody titer. Single acute sera are often negative if taken early in the course of illness or can be problematic in an enzootic area where animals may retain antibody titers from previous exposures.
Because of the rapid progression of this disease, treatment for suspected plague (and infection control practices) should be started before a definitive diagnosis is obtained. Streptomycin has been considered the drug of choice in human cases but is difficult to obtain and rarely used today. Gentamicin is currently used to treat most human plague cases and should be considered a suitable alternative choice in veterinary medicine for seriously ill animals, although it is not approved for this purpose. Animals with renal failure require adjusted dosages.
Doxycycline is appropriate for treatment of less complicated cases and to complete treatment of seriously ill animals after clinical improvement. Tetracycline and chloramphenicol are also options. Penicillins are not effective in treating plague. In treatment studies with experimentally infected mice, the fluoroquinolones performed as well as streptomycin. Fluoroquinolones have not been studied in any veterinary clinical trials, but there is growing evidence from their use in enzootic areas that they are effective in the treatment of plague in dogs and cats. The recommended duration of treatment is 10–21 days, with clinical improvement (including defervescence) expected within a few days of treatment initiation.
The duration of infectivity in treated cats is not definitively known, but cats are thought to be noninfectious after 72 hr of appropriate antibiotic therapy with indications of clinical improvement. During this infectious period, cats should remain hospitalized, especially if there are signs of pneumonia. Human cases have occurred in cat owners trying to give oral medications at home, exposing them to contact with the oral cavity and associated infectious secretions.
Prevention and Zoonotic Risk:
Along with treatment and diagnostic considerations, protection of people and other animals and initiation of public health interventions are critical when an animal is suspected to have plague. Animals with signs suggestive of plague should be placed in isolation, with infection control measures implemented for the protection of staff and other animal patients without waiting for a definitive diagnosis. The use of gloves, surgical masks, eye protection (if splashes or sprays are anticipated), patient isolation (animal or human), and standard hygiene and disinfection procedures for protection from potentially contaminated respiratory droplets, body fluids, and secretions from the patient (animal or human) are essential. Of the 23 human patients who developed cat-associated plague in the USA between 1977 and 1998, 6 were veterinary staff; the rest were cat owners or others handling a sick cat. After pneumonia has been excluded, or once there is evidence of clinical improvement after 72 hr of appropriate therapy, isolation procedures may be relaxed, but standard disinfection and hygiene procedures should continue.
Local or state public health officials should be notified promptly when plague is suspected to help conduct appropriate diagnostic tests, initiate an environmental investigation, and assess the need for fever watch or prophylactic antibiotics in potentially exposed people. To decrease the risk of pets and people being exposed to plague, pet owners in enzootic areas should keep their pets from roaming and hunting, limit their contact with rodent or rabbit carcasses, and use appropriate flea control. Epidemiologic data, fact sheets, public education brochures, and other information on plague is available on the Web sites of the CDC (http://cdc.gov/plague/) and the New Mexico Department of Health (http://nmhealth.org/ERD/Healthdata/plague.shtml) .
|
Teaching does not merely entail standing before learners and talking. A teacher has to think carefully about how to present a certain topic so that all learners in a class understand it effectively. This thoughtful planning of a lesson involves strategy. A teacher has to come up with the best teaching strategy for his or her learners, bearing in mind that every learner is unique. Many teaching strategies can be applied to make learning meaningful and effective for learners. This paper will look at storyline as one of these teaching strategies. The storyline approach is a topic-based way of teaching in which a story plays a central role, both as a context for educational content and as a structure for planning the educational process. The approach tries to answer the inconsistency found in many contemporary curricula, which are dominated by a subject structure. Subjects are used as containers for content, which correspond with aims and targets indicated by custom, cultural heritage, new experiences, insights and developments in society and science. Subject structure in itself is a physical organising framework for content in education, but it can easily lead to an inconceivable subdivision into content containers, altering the meaningful entirety of education for learners. In national curricula, as well as in teacher training programmes and educational theories, there is a propensity towards establishing more consistency and relevance in the curricular offering and in the teaching and learning practice without losing relevant aspects of curricular content (Bell, Steve, Harkness, Sallie and White, Graham 24).
Background of storyline teaching strategy
This method of teaching was first developed by a group of tutors from Glasgow, Scotland. There was a wake-up call for Scottish teachers to develop a teaching strategy that integrated varied teaching approaches in their curriculum. During that period, most of the teaching approaches were divided into small components. There was further need for a teaching strategy that focused more on child learning than on heavy dependence on textbooks. The main aim was to develop classroom contexts in which children could learn particular skills and concepts and would attain constructive learning attitudes. Tutors were given time to interact with teachers and other stakeholders who played a role in curriculum development. This led to the birth of the storyline approach as a teaching strategy (Stewart-Rinier and Lund-Kristensen PP.1&2).
How storyline can be used in learning environment
Storyline approach is based on recent developmental psychology and language acquisition research. Being a learner centered method for teaching of a foreign language, it is very student-oriented, relies on the students’ experiences outside the classroom and their previous knowledge from the very first stages of their language learning process. In order to be in accordance with the constructivist standard of learning, students construct and build their new knowledge structures around the themes that the teacher introduces. The teacher should formulate the questions and tasks to direct the students’ attention and prospect towards what they are soon going to learn about and to trigger their prior knowledge about the topic.
The storyline approach is about stories. It triggers the children’s example of a story structure stored in their memories since they have been told so many fairytales. They know and expect the typical setting of a story (its place and time which form the background of it), the main characters and key actors, the chain of events each with a problem and its solution leading to their completions. This structure is called story grammar and its method uses the memory structure of a story that children already have in order to teach them new content and language (Bell Par. 4).
The storyline method provides a lot of opportunities for drama activities. Being actually a drama consisting of separate acts and acting out roles, it provides opportunities for the practice of functional language, which is chunks of language with certain functions (e.g. greetings, introducing people, making promises, sharing wishes, hopes and desires and so on). It provides a context that is not a genuine one, but one that is rooted in a story structure, resembling true life. Some of them may have scripts while others may be free, and open to students’ creativity. The more guided role-plays follow the principles of communicative language teaching. They generally represent a very resourceful means for offering great opportunities for adaptations of contexts and language learning goals.
The importance of long-lasting learning is interlinked with successful language learning mechanisms. Storyline is one of the best methods to use when teaching a new or foreign language to learners. An expert in the storyline technique understands that learners need to keep improving their speaking ability. At different stages, a learner finds himself or herself almost mastering the foreign language. However, it is difficult for learners to ever become as fluent in the language as native speakers. For learners to learn foreign languages quickly, they need to interact frequently with native speakers of the language. They need to ensure that whatever they convey is understood and that they are capable of comprehending whatever is conveyed to them. This can be achieved through use of the storyline teaching strategy. The storyline approach provides a situation where learners are capable of interacting with other persons who might be of great help in their understanding of the language. Here learners are able to free themselves from worries about making mistakes. Teachers are aware of the ways in which learners are expected to behave and thus are capable of coping with them (Sung 3).
Reasons why the strategy proves effective in teaching
The storyline approach uses the power of stories as significant carriers of consistent educational content. Story is an important giver and in that way, a crucial feature of the curriculum and the motivation of learners. American psychologist Jerome Bruner says that the distinctive form of framing experience (and our memory of it) is in narrative form. What does not get structured narratively suffers loss in memory. Stories do motivate children; they bring things together and make them alive. The approach is concerned with the story as an element of evidence, acts and imagination in a narrative structure, with a plot, actors, scenes and incidents. The storyline approach uses also linear and encrusted structure of stories as a model for organizing educational activities in a thematically context. Children are taken by stories. They identify with the main characters in a story. They experience situations as if they are personally involved. Stories appeal to their imagination. Children anticipate to situations, imagine how things can go. They become surprised too, if a story takes unexpected turns. The relation between education and narratives is not strange. In fact, narratives are the oldest way of transferring information, from one person to another and from generation to generation (Bell Par. 6).
Storyline helps students develop the capacity for collaboration. In this approach students learn the skill of working and solving problems together. No man is an island; every person has to depend on others in one way or another. To improve this, people need to learn how to relate with one another productively. Most of the problems encountered in our day-to-day life require collaboration. Every state looks for assistance from other states. By taking students through the storyline approach, they are prepared for future relations with their colleagues (Stewart-Rinier and Lund-Kristensen 12). Through this approach students develop a culture of interdependence.
Storyline approach of teaching acknowledges the restrictive impacts that actual observations can have on learners’ originality and imagination at a tender stage of their learning. For instance, when a group of learners is asked to come up with a model of hospital, the physical model they develop reflects their actual perception of how a hospital looks like. After coming up with a model, learners are given an opportunity to visit a hospital. The essence of such a visit to learners after they have contemplated and come up with a hospital model improves their understanding of how a hospital looks like. As they organize for the visit, they usually have slight idea of what they expect to see and they are able to understand things they did not include in their models. Storyline helps in improving learners understanding of what they are being taught as well as correcting their past perception of things.
Storyline is growing in popularity due to its capability of incorporating varied curriculum. Sound knowledge theory and efficient teaching schemes are organized in a user friendly manner. The method provides a flexible environment for instructors when it comes to organizing their work plan. Due to its capability of encouraging cooperation in developing the story, it motivates both learners and their teachers thus ensuring that everybody plays a role in accomplishing their goals. By cooperating in development of the story, students are able to comprehend the relevance of the study hence understanding it more. The level of participation and sense of possession by students motivates them to take a bigger responsibility in their personal learning (Sung 5).
Storyline entails a combination of thoughts and actions. This provides an opportunity for every learner to nurture their skills, abilities and experiences. Learners become more conscious of their learning proficiency. In addition, students are able to come up with materials that they feel fond of. They feel pride in having invented something in their learning process and always wish to be associated with it. This improves their desire to go on with the system of learning. Storyline helps in developing positive learning attitudes among learners as they all feel part and parcel of learning. It also eliminates occasions where learners feel they have unaddressed needs, as they are given an opportunity to air all their problems. Teachers are able to eliminate any bias in students that might have resulted from their previous studies. They get to recognize their own skewed teaching methods and opt to alter them. It helps in improving teachers’ ways of addressing problems. Despite teachers conducting numerous researches, most of them do not abandon their old ways of thinking (Beijaard, Driel and Verloop Par. 2-7).
Principles of storyline method
For this technique to be successful, there are various principles that teachers need to bear in mind. Teachers should ensure that they develop a sense of anticipation in their learners. A good story draws the attention of its listeners as they become more eager to know what happens next. For students to understand the topic, they need to be enthusiastic about knowing what follows. This helps them follow the teaching stage by stage. Anticipation is also manifested at the end of the lesson, where learners look forward to the next lesson. Developing anticipation in learners helps ensure that learning continues both at school and at home. There is therefore great need for teachers to fully involve their students in their teaching in order to make them feel part of the learning process. This helps students remember whatever they are taught and prepares them to contribute in the following lessons. Teachers need to develop their stories based on past experience. Students understand a topic well if taken from the known to the unknown. Coming up with a clear context for the study helps students understand the importance of the study (Bell and Harkness 6).
Ensuring that there is full partnership between the teacher and students is paramount. Storyline is referred to as collaborative story making because both the teacher and students take an active role in developing the lesson. The teacher needs to come up with a curriculum to be followed in teaching. In the curriculum, he or she needs to provide for flexibility so that students can play an active role in controlling their learning process. Teachers have to ensure that their students feel they own the learning process. They have to ensure that their lessons begin by seeking students’ perceptions on various issues. This makes students feel they have an influence on the lesson. In other modes of learning, students are perceived as not having ideas regarding the covered topic; their role in the lesson is to listen as teachers feed them with information. Seeking ideas from students frees them from the mentality of not knowing anything about the topic, thus helping them participate fully in the lesson (Bell and Harkness 8).
Storyline is a teaching approach that is different from others because it recognizes the value of existing knowledge of the learner. Learners are asked questions that help in creating a setting within the frame work of a story. Both the teacher and learner make a situation through visualization, which is a stimulus in learning. The strategy is effective in art such as making mosaics or collages. It helps in integrating students’ prior knowledge with the curriculum. Best teachers identify their learners’ individual needs and provide them with learning practices that best suit them. It does not matter whether the practices comply with the set guidelines on teaching. They recognize that learners are unique and a strategy that is successful with one student might be less successful with another student. Learners are given an opportunity to come up with their own stories based on their experience through the help of their instructors. The approach provides a wide range of opportunities where learners are capable of assessing themselves as they progress with their studies. They are also capable of monitoring and evaluating their learning outcomes.
Beijaard, Douwe, Driel, Jan and Verloop, Nico. “Evaluation of story-line methodology in research on teachers’ practical knowledge.” 1999. Web.
Bell, Steve. “Introduction to the Storyline Method“. Articles on The Storyline Method.
De Akelei, Assendelft, The Netherlands 2006. Web.
Bell, Steve, Harkness, Sallie and White, Graham (ed.). Storyline – Past, Present & Future. Glasgow: Enterprising Careers, University of Srathclyde. 2007.
Bell, Steve and Harkness, Sallie. Storyline – Promoting Language Across the Curriculum. Royston: UKLA Minibook series. 2006.
Stewart-Rinier, Todd and Lund-Kristensen, Hanne. “Storyline at a distance.” 2009. Web.
Sung, Hyekyung. “Enhancing Teaching Strategies based on Multiple Intelligences.”2004.
|
The tick-borne encephalitis (TBE) is an infection caused by a flavivirus from group B. Humans are infected by virus transmitted through the bite of an infected tick or, less commonly, by drinking unpasteurised milk from infected goats or other mammals. The disease is endemic in forested parts of western, central and eastern Europe and Scandinavia (Central European encephalitis), and in the Far East (Russian Spring-Summer encephalitis). The disease is closely connected with the distribution of its vectors: Ixodes ricinus, the vector of Central European encephalitis, and Ixodes persulcatus, the vector of Russian Spring-Summer encephalitis. TBE virus is believed to cause at least 11 000 human cases in Russia annually, whereas in Europe about 3 000 cases are reported each year (12). The incubation period is 8-14 days, and the therapy is symptomatic. Vaccination is recommended to start in autumn or in winter.
Key Words: Tick-borne Encephalitis, flaviviruses, Ixodes ricinus, prevention of Tick-borne Encephalitis
|
Chapter 5: The Transformation of Sound by Computer
Section 5.3: Localization/Spatialization
Humans have a pretty complicated system for perceptually locating sounds, involving, among other factors, the relative loudness of the sound in each ear, the time difference between the sound's arrival in each ear, and the difference in frequency content of the sound as heard by each ear. How would a "cyclaural" (the equivalent of a "cyclops") hear? Most attempts at spatializing, or localizing, recorded sounds make use of some combination of factors involving the two ears on either side of the head.
Simulating Sound Placement
Simulating a loudness difference is pretty simple: if someone standing to your right says your name, their voice is going to sound louder in your right ear than in your left. The simplest way to simulate this volume difference is to increase the volume of the signal in one channel while lowering it in the other; you've probably used the pan or balance knob on a car stereo or boombox, which does exactly this. Panning is a fast, cheap, and fairly effective means of localizing a signal, although it can often sound artificial.
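Here is a minimal sketch of the pan idea described above, assuming a NumPy mono signal. It uses a simple linear gain law for clarity (real mixers often use a constant-power law instead), with the pan parameter running from -1.0 (hard left) to 1.0 (hard right).

```python
import numpy as np

def pan_stereo(mono_signal, pan):
    """Simple linear pan: pan runs from -1.0 (hard left) through 0.0 (center)
    to 1.0 (hard right). One channel's gain goes up as the other's goes down,
    like the balance knob described above."""
    left_gain = (1.0 - pan) / 2.0
    right_gain = (1.0 + pan) / 2.0
    return np.stack([mono_signal * left_gain, mono_signal * right_gain], axis=1)

# Usage: a 1 kHz test tone panned halfway to the right.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
tone = np.sin(2 * np.pi * 1000 * t)
stereo = pan_stereo(tone, 0.5)   # right channel is now louder than the left
```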
Interaural Time Delay (ITD)
Simulating a time difference is a little trickier, but it adds a lot to the realism of the localization. Why would a sound reach your ears at different times? After all, aren't our ears pretty close together? We're generally not even aware that this is true: snap your finger on one side of your head, and you'll think that you hear the sound in both ears at exactly the same time.
But you don't. Sound moves at a specific speed, and it's not all that fast (compared to light, anyway): about 345 meters/second. Since your fingers are closer to one ear than the other, the sound waves will arrive at your ears at different times, if only by a small fraction of a second. Since most of us have ears that are quite close together, the time difference is very slight, too small for us to consciously "perceive."
Let's say your head is a bit wide: roughly 25 cm, or a quarter of a meter. It takes sound around 1/345 of a second to go 1 meter, which is approximately 0.003 second (3 thousandths of a second). It takes about a quarter of that time to get from one ear of your wide head to the other, which is about 0.0007 second (0.7 thousandths of a second). That's a pretty small amount of time! Do you believe that our brains perceive that tiny interval and use the difference to help us localize the sound? We hope so, because if there's a frisbee coming at you, it would be nice to know which direction it's coming from! In fact, though, the delay is even smaller because your head's smaller than 0.25 meter (we just rounded it off for simplicity). The technical name for this delay is interaural time delay (ITD).
To simulate ITD by computer, we simply need to add a delay to one channel of the sound. The longer the delay, the more the sound will seem to be panned to one side or the other (depending on which channel is delayed). The delays must be kept very short so that, as in nature, we don't consciously perceive them as delays, just as location cues. Our brains take over and use them to calculate the position of the sound. Wow!
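A minimal sketch of that idea, assuming NumPy arrays for audio: it computes the worst-case ITD from the 0.25 m head width and 345 m/s figures used above, converts it to whole samples, and delays one channel by that amount. The sample rate and array handling are illustrative, not a production implementation.

```python
import numpy as np

SPEED_OF_SOUND = 345.0   # meters/second, the figure used in the text
HEAD_WIDTH = 0.25        # meters, the deliberately oversized head from the example

def itd_seconds(head_width=HEAD_WIDTH, speed=SPEED_OF_SOUND):
    """Worst-case interaural time delay for a sound arriving from one side."""
    return head_width / speed          # about 0.0007 second for 0.25 m

def apply_itd(mono_signal, sample_rate, delay_right=True):
    """Crude ITD simulation: delay one channel by the ITD, rounded to whole samples."""
    delay_samples = int(round(itd_seconds() * sample_rate))   # ~32 samples at 44.1 kHz
    pad = np.zeros(delay_samples)
    delayed = np.concatenate([pad, mono_signal])
    undelayed = np.concatenate([mono_signal, pad])
    if delay_right:
        # The right ear hears the sound later, so it seems to come from the left.
        return np.stack([undelayed, delayed], axis=1)
    return np.stack([delayed, undelayed], axis=1)
```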
Modeling Our Ears and Our Heads
That the ears perceive and respond to a difference in volume and arrival time of a sound seems pretty straightforward, albeit amazing. But what's this about a difference in the frequency content of the sound? How could the position of a bird change the spectral makeup of its song? The answer: your head!
Imagine someone speaking to you from another room. What does the voice sound like? It's probably a bit muffled or hard to understand. That's because the wall through which the sound is traveling, besides simply cutting down the loudness of the sound, acts like a low-pass filter. It lets the low frequencies in the voice pass through while attenuating or muffling the higher ones.
Your head does the same thing. When a sound comes from your right, it must first pass through, or go around, your head in order to reach your left ear. In the process, your head absorbs, or blocks, some of the high-frequency energy in the sound. Since the sound didn't have to pass through your head to get to your right ear, there is a difference in the spectral makeup of the sound that each ear hears. As with ITD, this is a subtle effect, although if you're in a quiet room and you turn your head from side to side while listening to a steady sound, you may start to perceive it.
Modeling this by computer is easy, provided you know something about how the head filters sounds (what frequencies are attenuated and by how much). If you're interested in the frequency response of the human head, there are a number of published sources available for the data, since they are used by, among others, the government for all sorts of things (like flight simulators, for example). Researcher and author Durand Begault has been a leading pioneer in the design and implementation of what are called head transfer functions: frequency response curves for different locations of sound.
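A very rough sketch of the head-shadow idea, assuming NumPy mono input: the far-ear channel is run through a simple one-pole low-pass filter while the near-ear channel is left alone. The smoothing factor is an arbitrary illustrative value, not measured head-response (HRTF) data.

```python
import numpy as np

def one_pole_lowpass(signal, alpha=0.3):
    """A one-pole low-pass filter: alpha is an arbitrary smoothing factor for
    illustration, not a measured frequency response of the head."""
    out = np.empty(len(signal), dtype=float)
    prev = 0.0
    for i, x in enumerate(signal):
        prev = prev + alpha * (x - prev)   # smaller alpha = stronger muffling
        out[i] = prev
    return out

def head_shadow_stereo(mono_signal, source_on_right=True):
    """Leave the near-ear channel untouched and muffle the far-ear channel."""
    near = np.asarray(mono_signal, dtype=float)
    far = one_pole_lowpass(near)
    # Channel order is [left, right]; the far ear is on the opposite side of the source.
    channels = [far, near] if source_on_right else [near, far]
    return np.stack(channels, axis=1)
```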
What Are Head-Related Transfer Functions (HRTFs)?
Not surprisingly, humans are extremely adept at locating sounds in two dimensions, or the plane. We're great at figuring out the source direction of a sound, but not the height. When a lion is coming at us, it's nice of evolution to have provided us with the ability to know, quickly and without much thought, which way to run. It's perhaps more of a surprise that we're less adept at locating sounds in the third dimension, or more accurately, in the "up/down" axis. But we don't really need this ability. We can't jump high enough for that perception to do us much good. Barn owls, on the other hand, have little filters on their cheeks, making them extraordinarily good at sensing their sonic altitude distances. You would be good at sensing your sonic altitude distance, too, if you had to catch and eat, from the air, rapidly running field mice. So if it's not a frisbee heading at you more or less in the two-dimensional plane, but a softball headed straight down toward your head, we'd suggest a helmet!
©Burk/Polansky/Repetto/Roberts/Rockmore. All rights reserved.
|
Overview of Active Directory
By: Divya Bora
June 17, 2021
Active Directory (AD) was introduced as a part of Microsoft Windows Server 2000 in 1999. It is Microsoft’s proprietary directory service, which is based on the Lightweight Directory Access Protocol (LDAP). It enables administrators to manage permissions and access to network resources. AD stores data in the form of objects, and an object represents a single element like a user, group, device, or application. Objects are generally defined as resources like computers and printers or as security principals like groups or users. In addition, AD categorizes directory objects by their attributes and names.
AD makes use of a hierarchical structure to organize the data. The main components of this structure are:
- A domain represents a group of objects like users, groups, or devices sharing the same AD database. Its structure is similar to that of standard domains and subdomains, and it can be thought of as a branch in a tree. A domain is a partition in an AD forest, and partitions enable the user to replicate data where it is needed. Domains are also defined as logical directory components created to manage the administrative requirements of the organization.
- Trees are grouped domains. A contiguous namespace is used to gather the collection of domains in a logical hierarchy. They have a trust relationship as a secure connection is shared between two domains, and similarly, multiple domains within a tree trust each other. Due to the logical hierarchy, the first domain implicitly trusts the third domain, and no explicit trust is required.
- Forest is the highest level of organization within an AD and is defined as a group of trees. This consists of shared catalogs, global catalogs, application information, directory schemas, and domain configurations. The object class and attributes of the forest are defined in the schema, and the global catalog lists all the objects of a forest. The forest acts as the AD’s security boundary.
- Organizational Unit: This is used to organize users, computers, and groups. Each domain can have a separate OU. OUs are not allowed to have separate namespaces (a namespace is a set of signs used to refer to and identify various objects), as each object or user in a domain should be unique.
- Containers: These are similar to OUs, but unlike them, it is impossible to link a Group Policy Object (GPO) to a generic AD container object.
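As a small illustration of how this hierarchy is addressed in practice, the Python sketch below builds an LDAP-style distinguished name for an object that sits in an organizational unit inside a domain. The domain, OU, and user names are hypothetical.

```python
def distinguished_name(common_name, ou_path, domain):
    """Build an LDAP-style distinguished name for an object stored in a chain of
    organizational units inside a domain (all names here are hypothetical)."""
    ou_parts = [f"OU={ou}" for ou in ou_path]                 # innermost OU first
    dc_parts = [f"DC={label}" for label in domain.split(".")]
    return ",".join([f"CN={common_name}"] + ou_parts + dc_parts)

# A user object in the Sales OU of the hypothetical corp.example.com domain.
print(distinguished_name("Jane Doe", ["Sales"], "corp.example.com"))
# CN=Jane Doe,OU=Sales,DC=corp,DC=example,DC=com
```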
TYPES OF AD
The various types of Active Directory are:
1. Active Directory Domain Services (AD DS)
This is the most classic on-premise AD (which means that the authentication infrastructure is running on the in-house hardware) and is used to authenticate and authorize functions for the users/computers within an organization. Unfortunately, it relies upon computers permanently connected to a domain and protocols for directory querying and authentication, which is not suitable for the modern internet-centric environment.
2. Azure Active Directory(AAD)
This is a version of directory services in the cloud, hosted on Microsoft Azure. It has distinct features and capabilities compared to Windows Server Active Directory (AD), as AAD’s primary function is to manage the variety of users and devices in use. In addition, AAD is capable of providing authentication and authorization mechanisms not only for Azure but also for Office 365, Intune, and numerous other third-party authentication systems.
3. Hybrid Azure AD(Hybrid AAD)
Hybrid Azure AD is used to achieve one identity when the user requires data synchronization between their Azure Active Directory and their local on-premise AD. So the user doesn’t need two sets of credentials; instead, they can add an “onsite” domain controller to replicate the Azure AD using Azure AD Connect. The company has two options:
a) To keep the “on-premise” domain controller within their physical location and use AD Connect to synchronize their users and passwords with Azure AD
b) To move the existing “on-premise” domain controller to an Azure virtual machine and use AD Connect with Azure AD to create a VPN connection between their organization and the Azure Datacenter(where the domain controller is hosted).
4. Azure Active Directory Domain Services(AAD-DS)
This standalone service enables a domain controller for Azure virtual machines instead of setting up a standalone server. It syncs users, groups, and passwords from Azure AD to the virtual machines that are a part of Azure’s network. Alternatively, one can use Active Directory Administrative Center or Active Directory PowerShell to administer domains with AAD-DS.
AD comprises multiple services, but the most prominent one is Domain Services. The various other services supported by AD are:
- Lightweight Directory Access Protocol (LDAP) is an application-level protocol used to access and maintain directory services over a network. It enables storing objects such as usernames and passwords in the directory service and sharing them across the network (a minimal query sketch follows this list).
- Rights Management Services (RMS) are used to manage information rights. AD RMS limits access to content on the server by encrypting content such as email messages or Microsoft Word documents.
- Certificate Services are used to generate, manage and share certificates. A certificate uses public-key encryption to ensure the secure exchange of information over the internet.
- Lightweight Directory Services (LDS) shares much of its codebase with AD Domain Services (DS) and hence offers similar functionality, including the same Application Programming Interface (API). However, AD LDS can run multiple instances on a single server and stores its directory data in a data store accessed using LDAP.
- Active Directory Federation Services are used to authenticate user access to multiple applications even if they are on distinct networks using the Single Sign-On(SSO) functionality.
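As a concrete illustration of how the LDAP service above is typically consumed, here is a minimal sketch that queries a directory for user objects using the third-party Python ldap3 library. The domain controller address, service account, password, and base DN are invented placeholders for the example, not values taken from this article, and a real deployment would use proper credential handling and LDAPS.

```python
# Minimal sketch: querying Active Directory over LDAP with the Python "ldap3" library.
# All connection details below (server, account, password, base DN) are hypothetical.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldap://dc01.example.local", get_info=ALL)   # hypothetical domain controller
conn = Connection(
    server,
    user="svc_reader@example.local",   # hypothetical read-only service account (UPN form)
    password="change-me",              # placeholder credential
    auto_bind=True,
)

# Search the domain partition for person/user objects and fetch two attributes.
conn.search(
    search_base="DC=example,DC=local",
    search_filter="(&(objectClass=user)(objectCategory=person))",
    search_scope=SUBTREE,
    attributes=["cn", "sAMAccountName"],
)

for entry in conn.entries:
    print(entry.cn, entry.sAMAccountName)

conn.unbind()
```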
BENEFITS OF ACTIVE DIRECTORY
Some major benefits of Active Directory are:
- Centralized Data Repository: Active Directory uses a multi-master database to store the identity information of its users, applications, and resources. The database is stored in a file known as ntds.dit, uses the Joint Engine Technology (JET) database engine, and can store around 2 billion objects. Because the database is multi-master, any domain controller can modify the data stored in ntds.dit. Users can make use of the identity data stored in AD from anywhere in the network, and administrators can authenticate and authorize the organization’s identities from a centralized location.
- Querying and Indexing: This allows users and applications to query objects and retrieve accurate data.
- Single sign-on: Most application vendors support the integration with AD for authentication. Once the user authenticates on their system, the same session will authenticate other AD integrated applications.
- Replication of Data: Organizations generally have multiple domain controllers, and each domain controller must be aware of the changes made to the AD database. Microsoft AD supports two types of replication, inbound and outbound. When a domain controller accepts changes that neighboring domain controllers advertise, it is called inbound replication. When a domain controller advertises changes made on it to neighboring domain controllers, it is called outbound replication.
- Security: Data and identity security are crucial parts of modern-day businesses, and AD features help secure the identity infrastructures from any emerging threats. It enables the user to implement different authentication types, workflows, and group policies to protect the network resources and application data. Administrators can build various security rules based on their requirements, forcing individuals to abide by the organizational data and network security standards.
- High Availability: This is important for any critical business system and is provided by the domain controllers themselves. They have built-in fault-tolerance capabilities, so unlike other systems they do not require additional software or hardware to implement high availability. The multi-master database, replicated across domain controllers, allows users to authenticate and authorize against any available domain controller at any time.
- Auditing Capabilities: Periodic audits help identify new security threats. AD allows events occurring in the identity infrastructure to be captured and audited.
- Schema Modification: The AD database has a schema that describes all its objects. It can be modified or extended. This is very important for integrated applications.
Microsoft AD Domain Service is a course specifically designed to strengthen the basics of AD for a beginner. An Overview of AD will provide a complete summary of AD and make the topics covered in this article clearer.
- What is Active Directory Domain Services? | JumpCloud Video (Image 1)
|
Yes, PTSD is a mental condition. It is a serious and complex disorder that occurs after experiencing or witnessing a traumatic event such as military combat, natural disasters, terrorist incidents, sexual violence, car accidents, or other life-threatening events. Symptoms of PTSD can include flashbacks to the traumatic event, avoidance of reminders of the trauma (including people and places associated with it), intrusive thoughts about the trauma, increased arousal (such as difficulty sleeping or hypervigilance), and changes in mood such as depression or anxiety. Treatment for PTSD includes psychotherapy, medications such as antidepressants or antianxiety drugs, lifestyle changes such as mindfulness techniques or yoga practice, and support groups for individuals who have experienced similar traumas.
The Definition of PTSD
Post-Traumatic Stress Disorder (PTSD) is an anxiety disorder that can occur in anyone after experiencing or witnessing a traumatic event. It is often characterized by flashbacks, intrusive memories and nightmares of the event, hypervigilance, avoidance behaviors such as staying away from places, people or activities which remind one of the trauma, irritability and difficulty concentrating. A person with PTSD may also have issues with sleep disturbances and physical symptoms such as headache or chest pain. People can also experience feelings of depression and guilt associated with their trauma.
Though it is not always easy to identify who has PTSD since reactions to traumatic events vary greatly from individual to individual, certain criteria must be met for a diagnosis to be made. Symptoms must persist for at least one month after the event occurred and cause significant distress in areas such as work performance, social functioning or overall quality of life. These symptoms cannot be attributed to any other medical condition or substance abuse disorder.
It’s important for individuals experiencing persistent anxiety related to their trauma to seek help from a mental health professional, because treatment methods are available that can greatly reduce symptoms and improve overall wellbeing. Treatment typically involves psychotherapy techniques such as Cognitive Behavioral Therapy (CBT), Exposure Therapy (ET) and Eye Movement Desensitization and Reprocessing (EMDR), as well as medications like Selective Serotonin Reuptake Inhibitors (SSRIs) or benzodiazepines if necessary. With guidance from healthcare professionals it’s possible to regain balance in life through effective management of PTSD symptoms.
Symptoms and Diagnosis of PTSD
Post Traumatic Stress Disorder (PTSD) is a complex mental condition that can result from experiencing or being exposed to a traumatic event. It is characterized by intrusive memories, flashbacks, avoidance of reminders associated with the trauma and intense emotional and physical reactions when confronted with such memories. In order for one to be diagnosed with PTSD, they must exhibit persistent symptoms for more than a month after the initial incident occurred.
One of the most common signs of PTSD is re-experiencing the traumatic event in intrusive ways, including nightmares, emotions such as fear and despair which may lead to panic attacks, or distressing thoughts and images. Avoidance behaviour includes actively avoiding activities, places or people associated with the trauma, as well as trying to numb any feelings related to it. Individuals also experience emotional hyperarousal, where they might feel on edge, often leaving them jumpy or having difficulty sleeping.
The diagnosis of this mental condition relies mainly on self-report measures, but in some instances clinicians use structured interviews along with questionnaires designed specifically for this purpose. If an individual was diagnosed with another mental disorder prior to their exposure, medical professionals may add tasks or tests to the evaluation process so as to better assess whether the new issue is linked to the previous problem.
PTSD as a Medical Condition
Post-Traumatic Stress Disorder (PTSD) is a medical condition that can affect anyone who has experienced an event or situation that was emotionally traumatic. It can cause fear and anxiety, often resulting in difficulty functioning on a daily basis, including symptoms such as flashbacks, nightmares and avoidance of situations related to the trauma. Those living with PTSD may also experience depression, substance abuse and anger management issues.
The diagnosis of PTSD requires careful evaluation by a mental health professional. To be diagnosed with the disorder, individuals must have experienced or witnessed an event that caused extreme fear, helplessness or horror; reexperienced memories of the trauma through recurring thoughts or flashbacks; avoided any associated people, places and objects linked to the event; as well as demonstrate distress at reminders of the trauma. Diagnostic criteria also includes increased arousal levels manifesting through difficulty sleeping, irritability and trouble concentrating. One must have been experiencing these symptoms for over one month in order for their condition to be classified as PTSD.
In terms of treatment for those with PTSD, several effective options are available today: psychotherapy techniques such as Cognitive Behavioral Therapy (CBT) and Eye Movement Desensitization and Reprocessing (EMDR), which uses cognitive exercises targeting thought patterns to help reduce symptom intensity; medication, usually antidepressants from the selective serotonin reuptake inhibitor (SSRI) class; and lifestyle changes focusing on better self-care practices, such as eating a healthy diet rich in omega fatty acids and exercising regularly, along with mindfulness practice aimed at developing more present awareness while dealing with the disturbing emotions associated with post-traumatic stress.
Treatment Options for PTSD
The complexity of PTSD makes it difficult to treat, but there are various options available. One of the most commonly prescribed treatments is cognitive behavioral therapy (CBT), which helps individuals focus on changing negative patterns of thinking and behavior that can trigger symptoms. This type of therapy may involve problem solving, relaxation strategies, and exposure exercises that slowly introduce a person to the experience that caused their condition. Medications like selective serotonin reuptake inhibitors (SSRIs) can help control anxiety levels as well as manage flashbacks and nightmares associated with PTSD. Other medication choices for symptom relief may include anti-anxiety drugs and antidepressants.
One alternative approach to treating PTSD is eye movement desensitization reprocessing (EMDR). This form of psychotherapy uses bilateral stimulation through side-to-side eye movements or sound or tactile cues to reduce distressing memories associated with trauma. It has been shown to be effective in addressing emotional distress related to traumatic experiences by inducing rapid brain activity across multiple areas involved in forming memories and making connections between them.
Complementary therapies such as meditation, yoga, massage therapy, art therapy or journaling may also be beneficial for reducing stress levels related to PTSD by promoting self-awareness and relaxation while releasing emotions stored within your body’s tissues and energy fields. Not only do these practices offer valuable tools for healing after a traumatic event but they also empower you with insight into how your body is holding onto painful memories so you can create new more positive ones going forward.
Living with PTSD: Coping Strategies and Support Systems
For many who live with Post Traumatic Stress Disorder (PTSD), managing their condition requires a carefully crafted approach of support systems and healthy coping strategies. Triggers, or reminders, that cause the individual to experience an intense feeling of distress or danger can come from seemingly random activities or environmental cues. For instance, smelling smoke on a walk may unexpectedly bring back a traumatic memory of being in an inferno-like scene. In those instances it is important for individuals living with PTSD to have the right plan in place so they can remain mindful and avoid slipping into reactions and feelings that impede progress towards recovery.
One way to cope with triggers is by utilizing forms of relaxation such as yoga, mindfulness exercises and breathing techniques, which all serve to help regulate emotions when memories threaten to overwhelm one’s emotional state. Forming a trusted bond with someone you feel comfortable confiding in allows you to talk through your thoughts without fear of judgement. Relying on close friends and family members for emotional validation can provide comfort during moments where PTSD resurfaces after years of remission. It is essential for individuals living with the disorder to find a balance between time alone, in which they can gain clarity over their thoughts, and access to social outlets that will lend supportive ears and words during difficult periods.
Moreover, seeking professional support such as therapy sessions has proven highly effective at helping individuals mitigate PTSD symptoms. Cognitive behavior therapy (CBT), for example, focuses on understanding irrational thinking patterns and increasing adaptive behaviors like problem-solving skills, helping patients unlearn habits formed by past traumas along the journey towards healing. All of these methods should be considered when trying to maintain one’s mental wellness during times when dealing with PTSD becomes especially hard, as there are countless resources available that are more than capable of providing effective relief from this disorder.
The Stigma Surrounding Mental Health Conditions
Mental health conditions, like post-traumatic stress disorder (PTSD), are still subject to stigma and discrimination. This is especially true when discussing mental health issues openly in public forums such as social media or an online platform. Unfortunately, many people can be quick to judge those who have faced traumatic events and equate PTSD with a lack of strength or character. Mental illness should not be overlooked nor dismissed as nothing more than weakness.
People with PTSD may experience feelings of guilt that they don’t understand which in turn lead to low self-esteem, loneliness, depression and anxiety. Without proper support systems and resources, individuals with PTSD can feel isolated from their families, friends and communities as if no one understands what they’re going through or the struggles that come along with living with a mental condition.
It’s important for all members of society to recognize the seriousness of psychological conditions such as PTSD, understanding that these illnesses affect everyone differently yet equally deserve acknowledgement, so the stigma surrounding mental health does not remain unchallenged any longer. It’s time for society to create a safe environment where those affected by psychological disorders are heard rather than silenced by the shame or fear associated with seeking help for one’s mental state.
Future Research on PTSD
It is undisputed that post-traumatic stress disorder (PTSD) can be devastatingly debilitating, and it is important that medical professionals continue to conduct research so as to better understand the condition and its treatments. Advances in neuroscience technology are leading researchers towards new methods for developing a greater understanding of PTSD.
Brain imaging techniques such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are allowing scientists to assess how different parts of the brain react while individuals suffering from PTSD perform certain tasks or experience particular sensations. Through these measures, researchers have identified specific areas of the brain that seem to become especially active when those with PTSD recall their traumatic events. By studying these neurological pathways further, researchers may begin to identify more effective treatment options for dealing with PTSD.
Neurochemical studies are another line of inquiry with potential applications for treating symptoms associated with PTSD. In one study published in Nature Neuroscience, researchers examined neurotransmitter levels in mice that had been subjected to trauma simulations; they found remarkable elevations in the animals’ serotonin levels compared to non-traumatised mice, which suggests an association between this chemical compound and the manifestation of PTSD. This kind of research could help medical practitioners develop treatments centred around modulating serotonin production or activity in those experiencing symptoms of psychological distress following traumatic experiences.
|
7 Calming and Creative Mindfulness Activities for Kids
Mindfulness has been shown to alter brain structure and function in the amygdala (emotions), hippocampus (learning and memory), and prefrontal cortex (planning, decision-making, and self-regulation). These domains are critical for a child's cognitive, social, and emotional development and well-being, and practicing mindfulness helps develop a calm child. It is unquestionably beneficial to teach a youngster mindfulness.
Creative mindfulness activities may help your children live more balanced lives if they learn them early. Kids who practice creative mindfulness activities become more conscious of their emotions, and their ideas become more present. Parents may assist their children in developing their capacity to live in the present and lead more appreciative lives by adapting calming strategies for kids.
So, how do you persuade your kid to practice mindfulness activities for children when getting them to sit still is a mammoth feat in itself? Make it seem like they're both playing and doing the activity simultaneously. Here are seven creative mindfulness activities for kids of all ages to start with:
Posing with Intention
Body postures are a simple approach for youngsters to practice mindfulness activities for children. Tell your kids that practicing interesting positions may make them feel strong, bold, and joyful to get them enthusiastic. Take the children to someplace peaceful and comfortable, where they will feel protected. Then urge them to attempt one of the positions below:
· Standing with feet just wider than hips, fists clenched, and arms stretched up to the sky, the Superman posture gets performed by extending the body as tall as possible.
· Wonder Woman strikes this position by standing tall with legs wider than hip-width apart and hands or fists on the hips.
After a few rounds of attempting any of these stances, ask the kids how they feel. You could be pleasantly surprised.
While we're on the topic of superheroes, this may be a good "next step" in teaching kids to be calm.
· Instruct your children to use their "Spidey senses," which are the super-focused senses of smell, sight, hearing, taste, and touch that Spiderman employs to keep track of his surroundings. This will urge them to take a breath and concentrate their attention on the current moment. It will become a mindful moment for kids, making them more aware of their senses' information.
These basic mindful and calming strategies for kids promote observation and curiosity, two critical human abilities.
Mindful Glitter Jar
This calming strategy for kids educates them on how intense emotions can take hold and find serenity when they feel overwhelmed.
· First, fill a transparent jar nearly to the top with water. After that, fill the jar with a large spoonful of glitter. To make the glitter whirl, replace the lid on the jar and shake it.
· Finally, tell the child about it.
"Imagine that the glitter represents your anxious, angry, or disturbing thoughts. Notice how they twirl about, making it difficult to see clearly? That's why, when you're unhappy, it's so easy to make rash judgments because you're not thinking correctly. Don't worry; this is quite normal and occurs to everyone."
· Place the jar in front of them now.
· Now see what happens if you remain stationary for a few seconds. Keep an eye on things. Take a look at how the glitter settles, and the water clears. In the same manner, your mind functions. After a short period of peace, your mind begins to relax, and you begin to see things more clearly. When we're going through this relaxing process, taking deep breaths might help us quiet down when we're feeling a lot of emotions.
Deep Breathing with Intention
Deep breathing is an excellent approach to calm kids, manage their emotions, and connect with their bodies in the present moment. If your kid suffers from anxiety, focusing on attentive deep breathing for a few minutes each day might help them develop self-control and reduce tension.
· To begin, have your child take a deep breath in, hold it for a few seconds, and then release it gently.
· Encourage them to repeat this for 3 to 5 minutes after they've gotten the hang of it. It may seem straightforward, but encouraging young toddlers to concentrate is difficult.
This is one of the best mindfulness activities for children, ideal for doing outside or in your garden.
· Have your children close their eyes and walk over diverse textures such as grass or damp sand.
· Inquire about their feelings: Is the air dry? Is it chilly outside? Is it supple?
It helps youngsters calm down their thinking and enhance their gross motor abilities by concentrating on what they feel in one portion of their body.
Taste Tests While Blindfolded
· Place a blindfold over your child's eyes and ask them to take a little mouthful of food, such as chocolate or pineapple.
· Request that they explain the food's texture and flavor in detail.
This activity improves their descriptive abilities and teaches kids how to concentrate on one task at a time.
The Safari activity is an excellent approach to teach mindfulness to children and to calm a child. These creative mindfulness activities transform a routine stroll into a thrilling new experience.
· Tell your kids you're going on a safari and that their mission is to see as many birds, bugs, creepy-crawlies, and other creatures as possible. Anything that walks, crawls, swims, or flies will pique their curiosity.
· They'll need to utilize all of their senses to locate it, particularly the little ones.
The Bottom Line
Mindfulness activities for children may have a long-term influence on their psychological, social, and cognitive well-being. Mindfulness meditation may aid students' capacity to handle stress and may also lead to a greater sense of well-being. Applications such as Calm Kids are ideal for providing a mindful moment for kids, which is essential for a child's good mental and physical development.
Just download the Calm Kids app on your phone and start doing mindfulness exercises with your children. By teaching them meditation and mindfulness methods, this app will help your child raise their well-being and empower them to confront the world's pressures with presence, self-compassion, and openness.
|
What is citation? Why does it matter?
You are currently in the module on "Citations" in a larger tutorial. Each research tutorial includes modules of topics related to the overall tutorial learning objectives. Please go through all the pages in this module by clicking on the “Next” button on the bottom of the page in order to progress. If you would like to track your progress, be sure to log in with your UNCG credentials at the top right of the module. Each module includes Quick Checks on every page. These Quick Checks do not produce a certificate; they are optional and do not track your progress. Certificates are created by completing a whole tutorial, so be sure to complete all the modules within a tutorial in order to generate a certificate. You can also take a screenshot of your progress page.
UNCG Libraries Research Tutorials Help
Time needed to complete this module: 10 minutes
- Students will value the intellectual property of information creators and use sources ethically.
- Students will apply the citation style appropriate to their discipline.
Merriam-Webster’s Dictionary defines citation as “an act of quoting.” While that’s part of what citation means in an academic context, there’s more to it than quoting other people. When we say citation in college, we mean giving credit to others for the work they have done.
We cite our sources for 3 main reasons:
- To give credit to the ideas of others
It's important to give credit when you're using other people's ideas. Pretending other people's ideas are your own original thoughts is called plagiarism, and it can have severe academic consequences.
- To help readers find our sources
Readers might want to find more information about the topic you're writing about. By citing your sources, you can lead them to credible relevant sources of further information.
- To support our conclusions
You're probably not an expert on your topic yet, but by citing the ideas and thoughts of people who are, you give more credibility to your own conclusions.
Citation is also important because it is the policy of the University that all students must follow the Academic Integrity Policy, which requires the use of citations to avoid plagiarism.
|
In Computing this term the children in Keystage 2 have been using safe searching skills to find information and to understand that the information that they may find online is not always correct. They have looked at ways of finding the most accurate data that they can. Years 5 & 6 have had open floor sessions on 'Internet Safety' during PSHCE and then created presentations on the 'Pros and Cons' of the Apps that they are using on a regular basis. They have discussed why it is important not to post personal information online and that only suitable images with permission should be posted. In groups the children have discussed the consequences and what to do if they should feel uncomfortable with anything that they see or hear online.
Year 3/4 children have started to develop their ICT skills in making presentations about themselves.
Autumn Term Two has seen the children in Keystage 2 developing their programming skills. They have coded toys, all singing and dancing Christmas trees and Santas flying across a night sky. The children in Year 5 have started to code using HTML and the Year 6 children have begun 'Unit 2' in HTML which follows on from a previous lesson in Year 5.
|
Students’ age range: 10-12
Main subject: Language arts and literature
Topic: Understanding words and the different sounds they make when they come together
Description: • Pupils will listen to a short passage with several “ph” and “f” sounding words. They will tell one unique sound that they heard repeating in the passage.
• Pupils will repeat this sound. They will give the letters that gives this unique sound.
• Pupils will be given word cards (strips) with consonant “f” words and digraph “ph” words.
• For example:
• Pupils will say each word and identify the initial sound heard. For example: The “f” sound.
• Through guided discussion, pupil will understand that the digraph ‘ph’ gives an ‘f’ sound.
• Pupils will list all the ‘ph’ and ‘f’ words that were mentioned in the passage (Phillip, phone, pharmacy, wife, float, fish, father, shelf, photograph etc).
• Pupils will give other ‘ph’ and ‘f’ words that they know or see around the classroom. These words will be written on the board highlighting the initial, medial and final sound.
• Pupil will then look at a picture from their work sheet and match the picture with the correct ‘ph or ‘f’ words given.
• Teacher will show pupils a video with the ‘ph’ and ‘f’ words to further concretize the lesson.
• In small groups, with the guidance of the teacher, pupils will create a rhyme, jingle or poem with at least ten ‘ph’ and ‘f’ words. They will then perform it for the class, and the best one will be the winner.
• Pupils will create ten sentences ensuring:
1. Two initial ‘ph’ and two ‘f’ sentences
2. Two medial ‘ph’ and two ‘f’ sentences
3. One final ‘ph’ and one ‘f’ sentences
|
Moran’s I is a way to measure spatial autocorrelation.
In simple terms, it’s a way to quantify how closely values are clustered together in a 2-D space. It’s often used in geography and geographic information science (GIS) to measure how closely clustered features such as household income or level of education are on a map.
Moran’s I: The Formula
The formula to calculate Moran’s I is:
I = (N/W) * ΣᵢΣⱼ wᵢⱼ(xᵢ – x̄)(xⱼ – x̄) / Σᵢ(xᵢ – x̄)²
- N: The number of spatial units indexed by i and j
- W: The sum of all wᵢⱼ
- x: The variable of interest (household income, years of schooling, etc.)
- x̄: The mean of x
- wᵢⱼ: A matrix of spatial weights
You’ll likely never have to calculate this measure by hand since most statistical software can calculate it for you, but it’s useful to know the formula being used under the hood.
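To make the formula concrete, here is a minimal sketch that computes Moran's I by hand with NumPy. The 3x3 grid of county incomes and the rook-contiguity weight matrix are made-up illustrative values, not data from this article; statistical packages implement this same calculation and add significance testing.

```python
# Minimal sketch: computing Moran's I directly from the formula with NumPy.
import numpy as np

# Variable of interest on a 3x3 grid of counties (illustrative household incomes, $1000s).
x = np.array([52, 54, 51,
              49, 50, 48,
              30, 31, 29], dtype=float)

n = x.size
rows, cols = 3, 3

# Binary rook-contiguity weights: w[i, j] = 1 if cells i and j share an edge.
w = np.zeros((n, n))
for i in range(n):
    r, c = divmod(i, cols)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < rows and 0 <= cc < cols:
            w[i, rr * cols + cc] = 1.0

z = x - x.mean()                            # deviations from the mean, (xi - x̄)
W = w.sum()                                 # sum of all weights
numerator = (w * np.outer(z, z)).sum()      # ΣiΣj wij (xi - x̄)(xj - x̄)
denominator = (z ** 2).sum()                # Σi (xi - x̄)²
I = (n / W) * numerator / denominator

print(f"Moran's I = {I:.3f}")               # a clearly positive value indicates clustering
```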
The value for Moran’s I can range from -1 to 1 where:
- -1: The variable of interest is perfectly dispersed
- 0: The variable of interest is randomly dispersed
- 1: The variable of interest is perfectly clustered together
Along with computing Moran’s I, most statistical software will compute a corresponding p-value that can be used to determine whether or not the data is randomly dispersed.
Moran’s Test uses the following null and alternative hypotheses:
Null Hypothesis (H0): The data is randomly dispersed.
Alternative Hypothesis (HA): The data is not randomly dispersed, i.e. it is clustered in noticeable patterns.
If the p-value that corresponds to Moran’s I is less than a certain significance level (i.e. α = .05), then we can reject the null hypothesis and conclude that the data is spatially clustered together in such a way that it is unlikely to have occurred by chance alone.
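One common way software obtains that p-value is a permutation (randomization) test: under the null hypothesis the observed values could sit anywhere on the map, so the values are repeatedly shuffled across the spatial units and Moran's I is recomputed each time. The sketch below illustrates the idea, reusing the illustrative x and w arrays from the earlier sketch; it is an assumption-laden example, not the exact routine any particular package uses.

```python
# Minimal sketch: a permutation-based pseudo p-value for Moran's I.
import numpy as np

def moran_i(x: np.ndarray, w: np.ndarray) -> float:
    """Moran's I for values x and spatial weight matrix w (same formula as above)."""
    z = x - x.mean()
    return (x.size / w.sum()) * (w * np.outer(z, z)).sum() / (z ** 2).sum()

def moran_permutation_pvalue(x, w, n_perm=9999, seed=0):
    """Shuffle x across the spatial units and count permutations at least as extreme."""
    rng = np.random.default_rng(seed)
    observed = moran_i(x, w)
    extreme = sum(
        abs(moran_i(rng.permutation(x), w)) >= abs(observed)
        for _ in range(n_perm)
    )
    return observed, (extreme + 1) / (n_perm + 1)

# Usage with the x and w arrays defined in the earlier sketch:
# I_obs, p = moran_permutation_pvalue(x, w)
# if p < 0.05, reject H0 and conclude the values are spatially clustered.
```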
Moran’s I: A Few Examples
The following examples represent fake maps with different values for Moran’s I.
Assume that each square in the map represents a county and that counties with average household incomes greater than $50k are shown in blue.
Moran’s I = 0: Average Household income is randomly dispersed (i.e. random clusters in random areas).
Moran’s I = -1: Average Household income is perfectly dispersed.
Moran’s I = 1: Average Household income is perfectly clustered.
Refer to this example for a real-world example of computing Moran’s I in the statistical software R.
|
Carbon capture and storage (CCS) has emerged as a promising means of lowering CO2 emissions from fossil fuel combustion. However, concerns about the possibility of harmful CO2 leakage are contributing to the slow widespread adoption of the technology. Research to date has failed to identify a cheap and effective means of unambiguously identifying leakage of injected CO2, or a viable means of identifying its ownership. This means that in the event of a leak from a storage site into which multiple operators have injected, it is impossible to determine whose CO2 is leaking. The ongoing debate regarding leakage and how to detect it has been frequently documented in the popular press and scientific publications. This has contributed to public confusion and fear, particularly close to proposed storage sites, causing the cancellation of several large storage projects such as that at Barendrecht in the Netherlands.
One means to reduce public fears over CCS is to demonstrate a simple method which is able to reliably detect the leakage of CO2 from a storage site and determine the ownership of that CO2. Measurements of noble gases (helium, neon, argon, krypton and xenon) and the ratios of light and heavy stable isotopes of carbon and oxygen in natural CO2 fields have shown how CO2 is naturally stored over millions of years. Noble gases have also proved to be effective at identifying the natural leakage of CO2 above a CO2 reservoir in Arizona and an oil field in Wyoming and in ruling out the alleged leakage of CO2 from the Weyburn storage site in Canada.
Recent research has shown amounts of krypton are enhanced relative to those of argon and helium in CO2 captured from a nitrate fertiliser plant in Brazil. This enrichment is due to the greater solubility of the heavier noble gases, so they are more readily dissolved into the solvent used for capture. This fingerprint has been shown to act as an effective means of tracking CO2 injected into Brazilian and USA oil fields to increase oil production. Similar enrichments in heavy noble gases, along with high helium concentrations are well documented in coals, coal-bed methane and in organic rich oil and gas source rocks. As noble gases are unreactive, these enrichments will not be affected by burning the gas or coal in a power station and hence will be passed onto the flue gases. Samples of CO2 obtained from an oxyfuel pilot CO2 capture plant at Lacq in France which contain helium and krypton enrichments well above atmospheric values confirm this.
Despite identification of these distinctive fingerprints, no study has yet investigated whether there is a correlation between them and different CO2 capture technologies or the fossil fuel being burnt. We propose to measure the carbon and oxygen stable isotope and noble gas fingerprint in CO2 captured from post-combustion, pre-combustion and oxyfuel pilot capture plants. We will find out if unique fingerprints arise from the capture technology used or the fuel being burnt. We will determine if these fingerprints are distinctive enough to track the CO2 once it is injected underground, without the need to add expensive artificial tracers. We will investigate if they are sufficient to distinguish ownership of multiple CO2 streams injected into the same storage site and if they can provide an early warning of unplanned CO2 movement out of the storage site.
To do this we will determine the fingerprint of CO2 captured from the Boundary Dam Power Plant prior to its injection into the Aquistore saline aquifer storage site in Saskatchewan, Canada. By comparing this to the fingerprint of the CO2 produced from the Aquistore monitoring well, some 100m from the injection well, we will be able to see if the fingerprint is retained after the CO2 has moved through the saline aquifer. This will show if the technique can be used to track the movement of CO2 in future engineered storage sites, particularly the offshore saline aquifers which will be used for future UK large-volume CO2 storage.
|
Briefly, I will describe the layers and functions commonly used when copying a PCB. (The colors below are the defaults and can be changed.)
Lines drawn on the TopLayer (top layer) are red; this is the upper wiring layer of a typical double-sided board, and a single-sided board cannot use this layer.
Lines drawn on the Bottom Layer are blue; this is the wiring layer used on a single-sided board.
MidLayer1 is the first intermediate (inner) wiring layer; there can be as many as 30 such layers. They are only used for multilayer boards, so most PCB designs do not need them, and they are not displayed in 99SE by default.
Mechanical Layers (magenta) are used to mark dimensions and board descriptions. They are ignored during PCB fabrication, that is, their contents are invisible on the finished board; they simply carry annotation notes.
Top Overlay (yellow) holds the silkscreen characters on the front of the board and corresponds to the TopLayer. Bottom Overlay (brown) corresponds to the BottomLayer and holds the characters on the back of the board. Both character layers are used on double-sided boards.
KeepOutLayer (the same color as the mechanical layer) simply defines the frame and outline of the board.
Multi Layer (silver) spans all wiring layers and includes the through-hole pads of single- and double-sided boards; lines drawn on it appear on all layers.
1. TopLayer (top layer) The top wiring layer is used to draw the electrical connection lines between components.
2. BottomLayer (bottom layer) The bottom wiring layer serves the same purpose as the top wiring layer.
3. MidLayer1 (middle layer 1) is used to draw electrical connection lines on this layer when making multilayer boards, but the cost of multilayer boards is relatively high.
4. Mechanical Layers can be used to draw the PCB shape and the parts to be hollowed out, or to annotate PCB dimensions. Note that the PCB shape, the hollowed-out parts and the dimension annotations should not share the same mechanical layer. For example, mechanical layer 1 might be used to draw the PCB shape and hollows, and mechanical layer 13 to annotate dimensions. With them kept separate, the PCB manufacturer's technicians can analyze the contents of each layer and decide whether it needs to be fabricated.
5. Top Overlay (yellow) is the character on the front of the board, which corresponds to the TopLayer single panel. Bottom Overlay (brown) corresponds to the BottomLayer, which corresponds to the character on the back of the board. The above two layers of characters are used when double panels are used.
6. KeepOutLayer is used to draw the forbidden wiring area. If there is no mechanical layer in the PCB, the PCB manufacturer will use this layer as the PCB shape. When both a KeepOutLayer and a mechanical layer are present, the mechanical layer is used as the PCB shape by default; the PCB manufacturer's technicians will distinguish between them themselves, and if they cannot, they will default to the mechanical layer as the shape layer.
7. The Multi Layer (silver) spans all wiring layers and carries the through-hole pads for single- and double-sided boards; anything drawn on it appears on all layers.
1、 Signal Layers
Protel98 and Protel99 provide 16 signal layers: Top, Bottom and Mid1-Mid14.
The signal layers are the wiring layers used to route the copper traces of the printed circuit board. When designing double-sided boards, generally only the top layer and bottom layer are used. When the number of layers exceeds 4, Mid (intermediate wiring) layers are required.
2、 Internal Planes
Protel98 and Protel99 provide Plane1-Plane4 (4 internal power/ground planes). The internal power/ground planes are mainly used in printed circuit boards with more than 4 layers as dedicated layers for power and ground; double-sided boards do not need them.
3、 Mechanical Layers
The mechanical layer is generally used to draw the border (boundary) of the printed circuit board, and usually only one mechanical layer is used. There are Mech1-Mech4 (4 mechanical layers).
4、 Drill Layers
There are two layers: "Drill Drawing" and "Drill Guide". Used to draw the hole diameter and the location of the hole.
5、 Solder Mask
There are 2 layers: Top and Bottom. The solder mask layers define the protective areas drawn around the pads and vias on the printed circuit board.
6、 Paste Mask
There are 2 layers: Top and Bottom. The solder paste protective layer is mainly used for printed circuit boards with surface mounted components. This layer is required by the installation process of surface mounted components, and it is not required when there is no surface mounted components.
7、 Silkscreen Layers
There are 2 layers: Top and Bottom. The silk screen layers are mainly used to draw text and graphic descriptions, such as the outline, label and parameters of PCB components.
8、 Other Layers
There are 8 layers in total: "Keep Out", "Multi Layer", "Connect", "DRC Error", 2 "Visible Grid" layers, "Pad Holes" and "Via Holes". Some of these layers are used by the system itself. For example, the Visible Grid is designed to make it easier for designers to position elements while drawing. The Keep Out layer is used for automatic routing and is unnecessary for manual routing.
For manual drawing of double-sided printed circuit boards, the top layers, bottom layers and top silk screen are used most frequently. Each layer can choose its own color. Generally, red is used for the top layer, blue is used for the bottom layer, green or white is used for text and symbols, and yellow is used for pads and vias.
Top layer and bottom layer: they are the top layer and bottom layer respectively, that is, the upper layer and the lower layer of the circuit board surface. Usually, signal lines are arranged on the top, such as double-layer boards. For multilayer boards, signal layer wiring can also be added in the middle.
Mechanical layer: defines the appearance of the entire PCB, that is, the overall structure of the PCB.
KeepOutLayer (keep-out layer): defines the boundary for copper with electrical characteristics. That is, once the keep-out layer has been defined, wires with electrical characteristics laid during subsequent routing cannot cross the keep-out boundary.
Topoverlay top silk screen layer
Bottom overlay bottom silk screen layer: defines the top and bottom silk screen characters, which are the component numbers and some characters commonly seen on PCB boards.
Toppaste top paste (pad) layer
Bottompaste bottom paste (pad) layer: refers to the copper pad areas left exposed.
Topsolder top solder mask layer
Bottomsolder bottom solder mask layer: opposite to toppaste and bottompaste, these are the layers to be covered with green solder mask (green oil).
Drillguide via guide layer
Drilldrawing through hole drilling layer
Multilayer: refers to all layers of the PCB.
1. TopLayer is the component (top) layer, BottomLayer is the wiring and through-hole component soldering (bottom) layer, and MidLayerx are the intermediate layers. These layers are used to draw traces or pour copper (the TopLayer and BottomLayer also carry the pads of SMT chip devices);
2. Top Solder, Bottom Solder, Top Paste and Bottom Paste relate to device pads that pass through two or more layers. The openings left in the Paste layers are generally smaller than the pads (the Paste surface is the solder paste layer, used to make the stencil for printing solder paste; it only needs to expose the pads that will be paste-soldered, and its openings may be smaller than the actual pads). The PCB is then coated with green solder mask, which is the Solder layer; the Solder layer must expose the pads, and these are the small round or square openings we see when only the Solder layer is displayed, generally larger than the actual pads (the Solder surface is the solder mask layer, used to apply solder mask materials such as green oil to prevent solder from contaminating areas that should not be soldered; it exposes all pads that need to be soldered, with openings larger than the actual pads). These layers generally appear yellow (copper) or white (tin);
3. Top Overlay, Bottom Overlay, silk screen layer, characters or resistance capacitance symbols or device borders on PCB surface, generally white;
4. Keep out, draw a border to determine the electrical boundary;
5. The Mechanical layer is the real physical boundary, and the positioning holes are made according to the dimensions on the Mechanical layer. However, PCB factory engineers do not always interpret this correctly, so it is better to delete the KeepOutLayer before sending the design to the PCB factory;
6. Multi Layer, which runs through all layers, such as vias (VIAs of the bottom layer or top layer also have Solder and Paste).
|
WARHUS’ PRIMARY INTEREST in the Velasco Map of 1610/11 is its legend
All the blue is dune by the relations of the Indians.
as this makes the Velasco Map “the oldest recorded map to acknowledge its Native American contribution.” As glossed by Warhus:
Often called the “Velasco” map because it was sent to Spain by Don Alonso de Velasco, the Spanish Ambassador to London, this is a Spanish copy of an English map. The entire map shows the coast of North America from Newfoundland to Cape Fear and a generalized interior indicating the limited extent of European knowledge. This detail shows that the mapmaker relied upon Indian information for some of the area shown. “All the blue is done by the relations of the Indians,” has been written in the northeast. It is the oldest recorded map to acknowledge its Native American contribution. Within the land now claimed by Europeans, the areas depicted on the basis of native information include lakes Champlain and George, the upper St. Lawrence, the Susquehanna River, Lake Ontario, and unknown features to the west.
Warhus includes the above detail from the Velasco Map in his chapter 4, “The Remapping of America.” Here he emphasizes that while such maps may “carry the traces of the Native American traditions that originally shaped the land,” they are also emblematic of a significant process of transformation.
“The Remapping of America” describes both the cartographic and the physical transformation of North America, from a continent that was conceived, experienced, and “mapped” within the traditions and cultures of Native Americans, to a continent owned, occupied, and pictured as part of western culture. Throughout this process western persons relied upon the geographic information they received from Native Americans. This information was often appropriated and then translated onto western maps where it was used to fill in the details of a land now claimed in the name of western empires and nation states. The maps presented here focus on the Native American experience of these events.
In the seventeenth and eighteenth centuries explorers and colonists took over the coasts and limited sections of the interior, displacing native populations with often devastating consequences. But it was not until the late eighteenth century that the wholesale transformation of the continent’s interior began in earnest. Following the French and Indian War and the American Revolution, Great Britain and the United States were the major western presence on the continent. These two societies brought with them a world view that differed markedly from that of the Native Americans. Exclusive ownership, control, and exploitation of the land and its resources were central to the western world view. Native American societies also sought to exploit nature’s abundance, but their technologies and economies evolved views of the land in which balance and non-exclusive use were central. The differences in these two perspectives on the land are reflected in the maps these two traditions produced. Western maps describe land as an object; their mapping systems use conventions like scale and the coordinate system to “accurately” picture the land and establish the boundaries of ownership that define it. Native American oral maps are fluid pictures of a dynamic landscape, a geography in which experience shapes the past and present of the land. By the end of the nineteenth century this indigenous conception of the landscape, along with many of the indigenous people who inhabited it, was replaced by the western view of nature as conquered, controlled, and exploited for the progress of civilization.
Ironically, the record of exploration contains numerous references to Native American participation in this process. From the Caribbean Natives who showed Columbus where he could find the gold he was seeking, to the Indian informants who supplied Lewis and Clark with maps that helped to open up the Louisiana Territory, western society has relied upon Native Americans to help fill in the details of its maps. While a few western maps recognize the Native American contribution, most simply used the information to fill in the picture of lands they claimed in the name of western kings, gods, and countries. Below the surface of these noble statements, there is a record of less glorious undertakings. The transformation of North America required that American Indians’ land had to be appropriated, wars of extermination had to be waged, and entire populations had to be forcibly relocated. Far from simply adding westernized names to a paper landscape and tracing the course of exploration and development, the remapping of North America involved a path of conquest and repression.
The maps that accompanied these processes were made to show first a European and then an American audience the extent and character of the newly claimed landscape. They transformed the territory, giving it familiar names, noting the locations of important resources, and drawing in the boundaries and communications routes that reflected western inhabitation. For years these early maps have provided scholars and pedants with fodder for reconstructing the minutia of European exploration. But they are also a glimpse of the Native American geography. They offer insights into the territories occupied by American Indian groups, and a hint of the knowledge carried in their oral traditions.
(Warhus, Another America 1389)
As has been pointed out by David Allen of the MapHist list,
The use of colors to indicate information from Indian sources appears to be unique to the Velasco map.
(25 April 2005 post to MapHist list)
But a conceptual analogue is to be found in Captain John Smith’s map of 1612, which
uses a line of Maltese crosses to indicate the boundary between lands he surveyed, and lands described on the basis of information provided by the Indians. The use of a line of crosses in a black and white map seems to me to be conceptually analogous to the use of blue lines on a colored map. The areas around Chesapeake shown as based on information from the Indians on the Smith map agree closely with the blue color in that area on the Velasco map.
(David Allen, 27 April 2005 post to MapHist list)
Mark Warhus makes much the same point in his chapter 4, describing how
On the map, Smith noted the extent of his explorations with a series of small Maltese crosses. The key in the upper right explains the “Significance of these marks, To the crosses has been discovered what beyond is by relation.” And in his Description of Virginia, the book in which this map was printed, Smith explains “that as far as you see the little Crosses on rivers, mountains, or other places, has been discovered; the rest was had by information of the Savages, and set downe according to their instructions.” Nearly half of the map lies outside the area delineated by the crosses and includes the course of rivers, the location of Indian settlements, and the territories of other tribes as communicated by American Indians. Even the area supposedly mapped by Smith is still largely a Native American landscape with the names and settlements of Virginian Algonquian Indians spread out along the rivers.
Smith’s published map of 1612 and 1624, which so clearly delineates “the limited extent of the area actually seen by the explorer” (Warhus 146), probably relied on the Velasco Map of 1610/11 for more than just its novel visual coding of the topographical “relations” of Native American sources. Alexander Brown has argued for even closer ties:
It seems to me certain that this map [Smith’s Map of Virginia] was engraved from a copy of the Virginia part of CLVIII [the Velasco Map of 1610/11]. Correct maps must be alike; but when one inaccurate map follows so closely another, as in this case, it furnishes quite conclusive proof that the latter was copied from the former.... [Furthermore,] I have found no real evidence that Smith could draw a map....
(Brown, The Genesis of the United States ii:596)
Warhus describes Smith’s map and the earlier Velasco Map as
a picture of the meeting of two worlds that was taking place in the tidewater region of Virginia. It projects British colonial ambitions through Anglicized names such as “Cape Charles,” “Cape Henry,” and the newly founded settlement of “Jamestowne” placed upon the map. It is also a reflection of the American Indian geography that defined this region.... Smith’s map includes the names and locations of nearly two hundred Indian settlements in a region where the only viable British presence was the struggling settlement of Jamestown.
But this visual presentation of cross-cultural cartography in both the Velasco Map of 1610/11 and Smith’s map of 1612/24 was never a simple communication of geographical intelligence. The known landscape “of the Indians” was often at odds with the American landscape of European imagining:
The area depicted on Smith’s map, printed with “west” at the top, includes Chesapeake Bay and the eastern part of what is now Virginia, plus parts of Maryland, Delaware, and the Delmarva Peninsula that separates the Chesapeake Bay from the Atlantic. The rivers shown entering the bay include the Powhatan (now James), the Pamaunk (now York), the Rappahannock and the Patawomeck (Potomac). Smith had explored parts of this territory in 1607 and 1608, going up the various rivers and gathering information from the Algonquian- and Iroquoian-speaking Indians who inhabited this region. Smith’s map reflects the European belief that North America was simply an isthmus separating the Atlantic from the western sea and the Orient. The Charter for Virginia granted the colony the right to expand “from sea to sea,” and one of Smith’s goals was to follow the rivers to the other side of America. One of the pieces of information he obtained from Indian informants was that a four or five days’ journey would lead to “a great turning of salt water,” and the body of water pictured in the upper right (northwest) of the map has been alternatively interpreted as a representation of the Pacific Ocean, or more likely Lake Erie. Powhatan himself is said to have tried to correct Smith by drawing a map on the ground showing that no “western ocean” lay within his domain regardless of how much the Englishman may have wished it.
Still, the plural perspective of the Velasco Map, and its repackaging by Smith, are noteworthy.
For the most part, the Native American presence and traditions would be increasingly ignored and marginalized by subsequent European cartographers. (Warhus 142)
• a GALLERY exhibit on the 1610/11 map of North America, aka the “Velasco Map,” and its first printing by Alexander Brown in 1890 (reproduced as item CLVIII in Brown)
• a GALLERY exhibit on the 1608 “Chart of Virginia” which illustrated Captain John Smith’s A True Relation (reproduced as item LVII in Brown)
• a GALLERY exhibit on Captain John Smith’s “A Map of Virginia,” pub. in 1612 and 1624 (reproduced as item CCXLII in Brown); Smith’s ms. chart “A description of the land of Virginia,” sent to Francis Bacon in 1618 (reproduced as item CCXLIII in Brown); and Smith’s map of “Ould Virginia,” pub. in 1624
• a GALLERY exhibit on Powhatan’s mantle, a large deerskin cloak decorated with a symbolic map of the Powhatan chiefdom ca. 1608
|
Rare Disease Day is an observance held on the last day of February to raise awareness for rare diseases and improve access to treatment and medical representation for individuals with rare diseases and their families. The European Organisation for Rare Diseases (EURORDIS) established this day in 2008 to raise awareness for unknown or overlooked illnesses. According to EURORDIS, treatment for many rare diseases is insufficient, as are the social networks to support individuals with rare diseases.
A disease or disorder is defined as rare in Europe when it affects fewer than 1 in 2000.
A disease or disorder is defined as rare in the USA when it affects fewer than 200,000 Americans at any given time.
One rare disease may affect only a handful of patients in the EU (European Union), and another may touch as many as 245,000. In the EU, as many as 30 million people may be affected by one of over 6000 existing rare diseases.
- 80% of rare diseases have identified genetic origins whilst others are the result of infections (bacterial or viral), allergies and environmental causes, or are degenerative and proliferative.
- 50% of rare diseases affect children.
Over 6000 rare diseases are characterised by a broad diversity of disorders and symptoms that vary not only from disease to disease but also from patient to patient suffering from the same disease.
Relatively common symptoms can hide underlying rare diseases, leading to misdiagnosis and delaying treatment. Quintessentially disabling, these diseases affect patients' quality of life through the lack or loss of autonomy caused by their chronic, progressive, degenerative, and frequently life-threatening aspects.
The fact that there are often no existing effective cures adds to the high level of pain and suffering endured by patients and their families.
Lack of scientific knowledge and quality information on these diseases often results in a delay in diagnosis. The need for appropriate, quality health care also creates inequalities and difficulties in access to treatment and care. This often results in heavy social and financial burdens on patients.
As mentioned, due to the broad diversity of disorders and relatively common symptoms which can hide underlying rare diseases, initial misdiagnosis is common. In addition, symptoms differ not only from disease to disease, but also from patient to patient suffering from the same disease.
Due to the rarity and diversity of rare diseases, research needs to be international to ensure that experts, researchers and clinicians are connected, that clinical trials are multinational and that patients can benefit from the pooling of resources across borders. Initiatives such as the European Reference Networks (networks of centres of expertise and healthcare providers that facilitate cross-border research and healthcare), the International Rare Disease Research Consortium and the EU Framework Programme for Research and Innovation Horizon 2020 support international, connected research.
How can things change?
Although rare disease patients and their families face many challenges, enormous progress is being made every day.
The ongoing implementation of a better comprehensive approach to rare diseases has led to the development of appropriate public health policies. Important gains continue to be made with the increase of international cooperation in the field of clinical and scientific research as well as the sharing of scientific knowledge about all rare diseases, not only the most “recurrent” ones. Both of these advances have led to the development of new diagnostic and therapeutic procedures.
Rare Disease Day is a great example of how progress continues to be made, with events being held worldwide each year. Beginning in 2008, when events took place in just 18 countries, Rare Disease Day has taken place every year since, with events being held in more than 90 countries in both 2017 and 2018.
However, the road ahead is long with much progress to be made.
Thank you for reading this. I hope you'll amplify this message and spread awareness to as many people as possible as a step towards saving many lives. Awareness of the rarest diseases will help people find information about them and reach a proper diagnosis and treatment. Let us all join hands and continue this chain. Please reblog this post or make your own so that we can reach the maximum number of people. This is the least we can do, and I hope you'll take a step forward and help raise awareness.
|
In an inversion, the air temperature rises with altitude, which influences the stratification stability of the troposphere and especially all convective processes. The region in which this temperature reversal occurs is called the inversion layer.
The inversion shields the lower air layer from the upper one, which is referred to as stable stratification. This is due to the higher density of the colder air layer, which largely suppresses turbulent mixing with the warmer air layer above. The pools of cold air caused by inversions, or shielded by them, are responsible for cold-temperature records worldwide. As a result of the shielding, air pollutants and other admixtures can accumulate in the cooler, lower layer, especially in the case of inversions over urban areas. A particularly strong manifestation of such air pollution, which occurs especially over urban areas, is smog. Above the inversion layer, on the other hand, visibility is significantly increased, usually revealing a wide view across the layer of haze near the ground.
Inversion weather conditions also change the propagation conditions for radio waves, as these are reflected back into the denser medium, here the cold air near the ground, at the density transition (total reflection). Radio amateurs use this effect to increase the range of their signals, while in VHF broadcasting it leads to over-range reception. On the same basis, an inversion favors the propagation of sound close to the ground, because sound is refracted towards the ground and can therefore spread over great distances. The speed of sound is greater in warm air than in cold air.
Types and their formation
A very stable inversion is formed at the tropopause; it is explained by the ozone concentration, which slowly increases from an altitude of 10 to 15 kilometers upwards. The ozone absorbs the very short-wave UV-B part of the solar radiation and thus leads to a temperature increase contrary to the general trend of decreasing temperature.
Radiation inversion / ground inversion
Usually the air temperature decreases with increasing altitude. Flue gases from heating systems and car exhausts lead to increased dust concentrations in the air, especially in winter. This dust filters sunlight and is itself heated. If there is no wind for days (plumes of smoke rise vertically), a warm, stable layer of air forms above the (large) chimneys, which remains above the cold ground and thus above the cold air layers near it ("inversion"). The heat radiated from the ground further warms the dusty warm air layer, while the weaker winter sunlight, additionally filtered by the dust, can no longer sufficiently warm the ground and, indirectly, the cold air masses close to the ground. As a result, a "cold air lake" forms below and a "cloud of vapor" above, and the two air masses mix only slowly. Car exhaust fumes then accumulate near the ground and, together with ground fog, produce "thick air" (smog). Because car engines also give off a lot of waste heat and traffic creates air turbulence, however, car traffic can also help the layers of air close to the ground to warm up more quickly.
Such inversion weather conditions can also occur briefly in summer, but then the sun is stronger; the warming of the ground and the resulting thermals dissolve the temperature differences and the inversion more quickly.
A radiation inversion usually affects only the air in the immediate vicinity of the ground and is therefore also referred to as a ground inversion. It is caused by radiative cooling of the earth's surface and occurs especially during autumn and winter high-pressure weather conditions, when temperatures are particularly low and the lack of cloud cover favors nighttime cooling.
Around the time of the daily maximum air temperature, i.e. between noon and three o'clock, the surface of the earth is very warm, and it heats the air above it. Because of the adiabatic temperature gradient near the ground and the resulting unstable stratification of the atmosphere, the air layers near the ground are mixed by convective processes. As the day goes on, however, the amount of solar radiation and thus the warming of the earth's surface decrease. Once the radiation balance becomes negative, the surface of the earth, and with it the layers of air near the ground, begins to cool. The result is an initially weak inversion in the evening hours, which practically prevents the vertical exchange of air. The warmer layers of air generated during the day at higher altitudes cannot prevent the ground from cooling further, and the usually weaker wind at night adds to the tendency to cool. By the early hours of the morning an inversion several hundred meters thick can have developed. It is broken down again in the morning with increasing solar radiation and is completely gone by noon at the latest. The fumigation layer that occurs while the inversion is being broken down, with an unstable stratification near the ground and an inversion above it, lasts longer the thicker the inversion layer is. This condition, also known as a lifted ground inversion, usually exists only for short periods, so that no significant accumulation of pollutants occurs.
The weaker the wind and the stronger the nighttime radiative cooling, the stronger the resulting radiation inversion will be. Certain valley and basin locations therefore have a particularly high tendency to inversion. Such an inversion forms practically every night, especially when there is little cloud cover. If temperatures fall below the freezing point of water, frost will occur. Only a strong wind can prevent or at least weaken this, and wind is therefore an important factor, especially for farmers, on clear autumn and especially spring nights.
If a radiation fog is also created, the increased albedo can also lead to a longer-lasting radiation inversion, which then usually lasts for several days. This also explains a somewhat rarer case of radiation inversion at the top of haze layers. Since the albedo is very high here and the water droplets radiate strongly, the air temperature can drop so far that an inversion also occurs. These height inversions caused by radiation are closely linked to the stability of the haze or fog layer and consequently disappear with it. As a rule, however, such inversions initially sink to ground level, since the earth's surface is no longer heated by solar radiation and cools down accordingly.
Radiation inversion weather situations favor the formation of industrial snow.
An example of ground inversion is the formation of a layer of clouds between the valley floor and the mountain peaks, documented in the Oberallgäu, Ostallgäu and Kleinwalsertal by the expression Obheiter (= "bright above"): below the clouds it is cool and overcast, while above them it is much warmer and sunny. This weather situation is particularly common in autumn and is very popular with mountain tourists because of the wide mountain views associated with it.
If an air layer of great thickness is displaced vertically as a whole, the individual air parcels travel different distances and therefore warm or cool by different amounts according to the respective temperature gradient. A sinking, shrinking or subsidence inversion occurs, which is also known as a height inversion because it lies at a greater height than other inversion layers.
When the air layer is lowered, the air pressure within it rises and, since air is compressible, the layer thickness consequently decreases, which is equivalent to an increase in air density. Each air parcel within this layer is lowered independently and therefore experiences a specific increase in temperature: the greater the difference in altitude that the parcel covers, the greater this increase. Since an air parcel at the upper edge of the layer travels a longer distance than a parcel at the bottom of the layer, its temperature also increases more. This changes the temperature gradient within the lowered layer compared with the formerly higher layer, as the following example illustrates.
Consider a dry-adiabatically stratified atmosphere with a temperature of ten degrees Celsius at the ground, giving the temperature decrease shown in the figure by the black line. The figure shows a layer of air that has been lowered from a height of six to eight kilometers to a height of one to two kilometers. The layer thickness and the amount of lowering are not realistic, and halving the thickness does not correspond to the actual change in air pressure; the values were chosen arbitrarily for the sake of simplicity. Four points are emphasized in the diagram, forming the upper and lower edges of the air layer. Before lowering, the air layer had a temperature of −75 °C (A) at its top and −70 °C (B) at its bottom. This corresponds to an extraordinarily sub-adiabatic temperature gradient of only two and a half degrees Celsius per kilometer, which is, at least in tendency, a prerequisite for the formation of a sinking inversion. The layer is then lowered, so the changes from A to C and from B to D should be considered. The lowered air layer has a temperature of −20 °C (D) at its bottom and −15 °C (C) at its top: the temperature now rises with height by five degrees Celsius per kilometer.
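The arithmetic behind this example can be checked with a short calculation. The sketch below is not part of the original text; it assumes a rounded dry-adiabatic lapse rate of 10 °C per kilometer (the standard value is closer to 9.8 °C/km), which is what the figures above imply, and warms each parcel in proportion to the height it descends.

```python
# Sketch of the subsidence-inversion example above.
# Assumption: dry-adiabatic lapse rate of 10 °C/km (rounded; ~9.8 °C/km in reality).

GAMMA_DRY = 10.0  # warming per kilometer of descent, °C/km

def warm_on_descent(temp_start_c, height_start_km, height_end_km):
    """Temperature of a dry air parcel after sinking from height_start_km to height_end_km."""
    descent = height_start_km - height_end_km
    return temp_start_c + GAMMA_DRY * descent

# Parcel A: top of the layer, 8 km and -75 °C, sinks to 2 km (point C)
top_after = warm_on_descent(-75.0, 8.0, 2.0)
# Parcel B: bottom of the layer, 6 km and -70 °C, sinks to 1 km (point D)
bottom_after = warm_on_descent(-70.0, 6.0, 1.0)

print(f"Top of layer after sinking:    {top_after:.0f} °C at 2 km")    # -15 °C
print(f"Bottom of layer after sinking: {bottom_after:.0f} °C at 1 km") # -20 °C

# Temperature now increases with height inside the layer: an inversion.
gradient = (top_after - bottom_after) / (2.0 - 1.0)
print(f"Gradient inside the sunken layer: +{gradient:.0f} °C/km")
```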
Such a temperature reversal only occurs in pronounced high-pressure weather conditions, especially in late autumn and winter. But even if the lowering is not sufficient to generate an inversion, it at least weakens the temperature gradient and thus contributes to a further stabilization of the atmosphere. This often leads to several sinking inversions lying on top of one another, which produce a rather complex stratification of the atmosphere. An important and comparatively stable special case of the sinking inversion is the trade-wind inversion (Passatinversion). In the opposite case, when the air layer is lifted, an inversion, regardless of its origin, can be reduced; at the least the gradient increases and the inversion is weakened.
Sinking inversions become visible through their effect as a cloud barrier, because the vertical growth of a cloud stops abruptly at the underside of the inversion. The air humidity is also greatest there, while it reaches a minimum at the upper side of the inversion layer because of the adiabatic warming. It is also particularly noticeable that, if the inversion layer lies low enough, it is often much warmer in the mountains than in the valleys. For example, an increase in altitude of one kilometer can result in a temperature increase of 15 °C.
A slip (overrunning) or turbulence inversion is caused by advection, i.e. the horizontal transport of air masses.
A strong wind causes thorough mixing of an initially sub-adiabatically stratified atmosphere. Within the mixing zone, the strong vertical movement of air drives the temperature gradient towards an adiabatic stratification. The temperature gradient above this zone, however, remains unchanged and still sub-adiabatic, which produces an inversion relative to the mixing zone. The phenomenon typically occurs when a warm front is approaching and only the upper layers of air initially receive the warm air, while it has not yet reached the ground. This is especially the case in high-pressure areas above the sea.
In contrast to a sinking inversion, the air humidity here is highest at the upper side of the inversion layer, since the air masses brought in usually contain more moisture than the cold air previously in place, and convection transports moisture steadily upwards. Below the inversion, stratus or stratocumulus clouds often form when turbulence is strong, and cumulus clouds when turbulence is weak. Slip inversions also frequently occur with foehn winds, combined with the foehn clouds typical of them.
- Malberg H. (2002): Meteorology and Climatology. An introduction. 4th edition. Springer-Verlag, Berlin Heidelberg New York. ISBN 3-540-42919-0 .
- Andreas Kalt: Learning module "Stratification states in the atmosphere". Water in the atmosphere. In: WEBGEO basics / climatology. Institute for Physical Geography (IPG) at the University of Freiburg , accessed on December 14, 2010 .
- Vienna Air Quality Report 1987-1998. (PDF) In: wien.gv.at. City of Vienna , MA 22 , p. 24f , accessed on January 3, 2016 .
- Gottfried Hoislbauer: Bark lichens in the Upper Austrian central area and their dependence on environmental influences . In: Stapfia . 1979, p. 12 ( PDF on ZOBODAT [accessed January 3, 2016]).
- Weather and Climate - German Weather Service - Topic of the Day - Archive - Industrial Snow. In: dwd.de. German Weather Service , accessed on December 22, 2016 .
- What is industrial snow and how is it created? - Weather channel Kachelmann. In: wetterkanal.kachelmannwetter.com. Retrieved December 22, 2016 .
|
CAPTURE THE GOLD
Description of Game
In this adventure challenge game the students (in their teams) are required to cross an imaginary river using hula hoops. Once they get to the other side, they can disembark on land and take a piece of gold (a beanbag) and bring it with them back across the river and place it into their Pot (a bucket or hula hoop). The main rules are:
1. Without giving names, what were some of the negative ways that people communicated?
2. What were some of the positive ways?
3. Describe any times that you did not trust the actions of others. (no names)
4. Can you give examples of how an open mind was important when working on your teams?
5. How can your team more effectively move the bean bags (the gold) from one side to the other? (in a quicker, safer way)
6. How can you decrease the amount of running and moving yet still transport bean bags from one side of the river to the other?
|
Scientists have developed a new toolkit for the discovery of mineral deposits crucial to our transition to a “green economy.”
A study led by Lawrence Carter from the University of Exeter’s Camborne School of Mines, has given fascinating new insights into how to discover porphyry-type copper deposits.
Porphyry-type deposits provide most of the world's copper and molybdenum, as well as large amounts of gold and other metals, which are in increasing demand for green technologies such as electric vehicles, wind turbines, and solar panels, and for power transmission. They are the principal target of many mining companies, who employ a wide range of invasive and expensive exploration techniques to find them.
Porphyry-type deposits originally form several kilometers below the Earth’s surface above large magma chambers. Not only are they rare but most large near-surface examples have already been found. To meet future demand for copper, new methods are needed to discover deeper and possibly smaller deposits—using techniques that meet increasingly strict environmental regulations.
The researchers show that certain textures preserved in rock may be indicative of the types of physical processes that form these deposits, and may give an early indication of their location.
Previous understanding of such textures was disjointed because they are often small, poorly exposed, or are simply not recognized when encountered.
The new study was carried out in the Yerington district of Nevada where tilting of the upper crust has provided a globally unique cross-section through four porphyry-type deposits and their host rocks. Because of this, previous studies in the district have underpinned much of the current understanding of how porphyry-type deposits form.
Lawrence Carter, a final year PhD student and Research Associate at Camborne School of Mines, based at the University of Exeter’s Penryn Campus said: “We provide a textural framework for exploration geologists to assess the likely 3D architecture of porphyry-type deposits before employing more invasive and expensive techniques.”
Professor Ben Williamson, co-author of the study and associate professor in applied mineralogy at Camborne School of Mines, added: "This innovative applied study, led by one of the UK's leading young geo-scientists, will provide much needed field criteria for the discovery of economically important and green-technology-crucial porphyry-type deposits."
The research was supported by NERC GW4+ DTP, the Society of Economic Geologists Foundation, and the NERC highlight topic “FAMOS.”
- This press release was originally published on the University of Exeter website
|
This day in history, the Spanish-American War came to an end with an armistice. The war was a great victory for America and allowed it to expand its power into the Pacific and the Caribbean. For Spain the war was a disaster, bringing to an end its 400-year empire in the Americas. The war between Spain and America was brief, and it came to an end when Spain surrendered Cuba, Puerto Rico, and the Philippines to the United States.
The cause of the war was the explosion of the USS Maine in a Cuban port and anger over Spanish treatment of Cuban rebels.
The repressive measures that Spain took to suppress the guerrilla war, such as herding Cuba's rural population into camps or into slums in towns, angered many Americans. The newspapers whipped up anti-Spanish feeling. The explosion on the USS Maine was seen by many in America as an act of sabotage by the Spanish. The newspapers claimed that it was an act of Spanish aggression and called for a war against that nation. Much of Congress and a majority of the American public had little doubt that Spain was responsible and wanted President McKinley to declare war.
The US declared war on April 25th and sent armed forces to Cuba, where they sided with the Cuban rebels. In a series of battles, the Americans swept the Spanish from the battlefield. It was here that Teddy Roosevelt and the Rough Riders became legends. The ten-week war was fought in both the Caribbean and the Pacific. US naval power proved decisive, allowing expeditionary forces to disembark in Cuba against a Spanish garrison already facing widespread Cuban rebel attacks and a yellow fever epidemic.
American naval forces took the Spanish island of Guam, and the Americans also deployed forces in the Philippines and Puerto Rico. The American navy defeated the Spanish fleet in Manila Harbour, and this ended Spanish domination of the Philippines.
The once-proud Spanish empire was virtually dissolved, and the United States gained its first overseas empire. Puerto Rico and Guam were ceded to the United States, the Philippines were bought for $20 million, and Cuba became a U.S. protectorate.
The Spanish-American War only lasted some ten weeks.
The Americans left Cuba after two decades of at times controversial occupation. American domination of the Philippines was a very bloody affair. Many Filipinos wanted independence, especially in the Muslim south. Philippine insurgents who had fought against Spanish rule during the war immediately turned their guns against the Americans. In this conflict, ten times more U.S. troops died defeating the Philippine rebels than had died defeating Spain. The United States eventually granted the Philippines independence in 1946. However, Puerto Rico and Guam have remained dependencies of the United States since the end of the Spanish-American War.
|
Anatomy of the Knee Joint
The knee is one of the most complex and largest joints in the body and is very susceptible to injury. The meniscus is a small, C-shaped piece of cartilage in the knee. Each knee consists of two menisci, medial meniscus on the inner aspect of the knee and the lateral meniscus on the outer aspect of the knee. The medial and lateral menisci act as a cushion between the thighbone (femur) and shinbone (tibia).
Meniscal tears are one of the most common injuries to the knee joint. They can occur at any age but are more common in athletes involved in contact sports. The meniscus has no direct blood supply, and for that reason, when there is an injury to the meniscus, healing is difficult.
Causes of Meniscal Injuries
Meniscal tears often occur during sports. These tears are usually caused by twisting motion or over-flexing of the knee joint. Sports such as football, tennis, and basketball involve a high risk of developing meniscal tears. They often occur along with injuries to the anterior cruciate ligament, a ligament that crosses from the femur (thighbone) to the tibia (shinbone).
Symptoms of Meniscal Injuries
Meniscal tears can be characterized as longitudinal, bucket-handle, flap, parrot-beak, and mixed or complex tears.
The symptoms of a meniscal tear include:
- Knee pain when walking
- A popping or clicking that may be felt at the time of injury
- Tenderness when pressing on the meniscus
- Swelling of the knee
- Limited motion of the knee joint
- Joint locking, if the torn cartilage gets caught between the femur and tibia, preventing straightening of the knee.
Diagnosis of Meniscal Injuries
A thorough medical history and a physical examination can help diagnose meniscal injuries.
The McMurray test is an important test for diagnosing meniscal tears. During this test, your doctor will bend the knee, then straighten and rotate it in and out. This creates pressure on the torn meniscus. Pain or a click during this test may suggest a meniscal tear.
In addition, your doctor may order imaging tests such as X-ray and MRI to help confirm the diagnosis.
Treatment of Meniscal Injuries
The treatment depends on the pattern and location of the tear. If the meniscal tear is not severe, your doctor may begin with non-surgical treatments that may include:
- Rest: Avoid activities that may cause injury. You may need to temporarily use crutches to limit weight-bearing.
- Ice: Ice application to reduce swelling
- Pain medications: Non-steroidal anti-inflammatory drugs (NSAIDs) to help reduce swelling and pain
- Physical therapy: Physical therapy for muscle and joint strengthening
If the symptoms persist and conservative treatment fails, you may need surgery to repair the torn meniscus.
|
Can Video Games Revolutionize Education?
The ability of video games to keep students’ attention, synthesize knowledge, and provide hours of practice on a variety of academic subjects without the need for human resources has attracted an increasing number of educators to the entertainment medium. The creators of Minecraft, for example, a game in which complex structures are built out of simple cubes, have created an educational version of the software. Students can be taught “mathematical concepts including perimeter, area, probabilities, as well as foreign languages.”
“Beyond teaching, video games can also offer useful information about how well a child is learning and can even provide helpful visual displays of that information… Video games can also provide instantaneous feedback—typically via scores—that teachers and students can use to determine how well students understand what the games are trying to teach them.”
One limitation of games seems to be their specific effect on the brain. While games can be very useful at increasing the memory capacity of children, as measured by a 2013 Cambridge University study, for example, those benefits did not extend to other areas such as their ability to grasp abstract concepts or express themselves in spoken or written language.
In his Big Think interview, Carnegie Mellon University professor Jesse Schell observes that there is a huge demand for video games to help improve people’s lives. Beyond the classroom, video games may act as life coaches for anyone who would like one:
Read more at Scientific American
Photo credit: Shutterstock
|
Tsunamis are capable of causing devastating damage. In 2004, when a massive earthquake struck off the coast of Sumatra, the resulting tsunamis killed an estimated 230,000 people spread across 14 countries, including Kenya, over 4,000 miles away.
Despite what you may have seen in cartoons or the latest doomsday blockbuster, tsunamis don’t look like giant versions of the type of curling waves that surfers crave. Instead, tsunamis more closely resemble flash floods. When tsunami waters hit the beach, they may only be as high as 10 feet. But as that water continues to surge inland, it can grow to a height of up to 100 feet and travel for miles. This fast-moving wall of water wreaks havoc on everything in its path, breaking windows, uprooting trees, and snapping power poles. The resulting soup of debris is likely to kill anyone who is pulled in. Surviving them requires a combination of good preparation, quick thinking, and decisive action.
One of the best ways to be prepared for a tsunami is to put together a simple bag of essential emergency supplies. A flashlight, warm clothes or blanket, battery pack for charging your cell phone, water, and food are all great items to put in your bag. You can also purchase an NOAA Weather Radio to stay informed about emergency services and get notifications about when it’s safe to return to areas affected by the tsunami. Keep your bag near the door, and perhaps an extra in your car, so that you can grab it wherever you are and get to safety with your necessary gear quickly.
Being prepared also means having a plan with friends and family about how to reunite if a tsunami strikes when you’re separated. Discuss meeting points and communication strategies, and work together to make sure everyone understands proper survival techniques.
Recognize the Warning Signs
If you live within a few miles of the coast and feel an earthquake, assume that a tsunami will follow and begin your evacuation plan.
However, you may not feel the earthquake that triggers a tsunami that can still affect you; tsunamis can damage coastlines thousands of miles away from the shaking that caused them. Tsunamis are also not only caused by earthquakes, but can be triggered by a large landslide or meteor impact.
You may receive an official alert via television, text, or radio that a tsunami is imminent, but don’t absolutely count on that; tsunamis can move in deep water at 600 miles an hour and strike before an alert has been issued.
In the absence of a tremor or an alert, keep a lookout for a natural sign that a tsunami is coming: the ocean tide has receded farther and more rapidly than usual. When offshore earthquakes occur, the displacement caused by the movement of the earth pulls water away from coastlines. As the earth settles, that water rushes back towards shore, creating a tsunami. If you’re near the beach and seeing portions of the seafloor you’ve never seen before, get moving.
Get Away From Water
The best thing to do in the event of an oncoming tsunami is to get away from the coast, as well as rivers and estuaries near the coast that will flood quickly. Experts say that it’s best to try and get at least 100 feet above sea level or two miles away from the coast. Whether you go up or away depends on your situation. Those living in remote, rural areas may be able to hop in their car and drive quickly away from the water. Urban dwellers are less likely to be able to drive, especially after an earthquake which will likely cause damage to roads and bridges.
If you can’t get away, go up. Whether it’s the rooftop of a nearby building or the biggest hill in a nearby park, getting high is your best option if you can’t get further inland.
In the event a tsunami strikes and you’re caught directly in its path, know that it can move faster than you can run, and try to grab and hold on to a large piece of debris. People have survived the rush of water by climbing onto the roofs of houses that were torn off their foundations. By using larger debris, you can protect yourself from getting pulled down into the water and knocked around by smaller pieces of debris.
Stay Vigilant and Be Patient
Much like the way that aftershocks follow earthquakes, tsunamis are rarely reduced to a single occurrence. In fact, the first surge often isn't the highest, and larger second and third waves are capable of striking the shoreline over several hours. Your best bet, once you reach a safe area, is to stay put until you can confirm that the threat of danger is over.
|
Figure 6-6(a) shows the bias-stabilized circuit incorporated into an ac amplifier stage. Both source resistance and load resistance are included. Figure 6-6(b) shows the ac equivalent circuit of the amplifier. Note that R1 and R2 are both connected to ac ground and are therefore in parallel as far as ac signals are concerned. These
are combined into RB = R1 || R2 in the ac equivalent circuit. The total resistance from collector to ground is the parallel combination of the transistor output resistance ro, the collector resistor RC, and the load resistor RL: rL = ro || RC || RL ≈ RC || RL. In Figure 6-6(b), + and − symbols are used to show instantaneous polarities and to emphasize the fact that the ac emitter voltage ve is in phase with the input voltage, and that the collector voltage is 180° out of phase with it. It can be seen in the figure that vL (which is likewise out of phase with the input) is given by equation 6-19. The significance of equation 6-19 is that the ac load voltage is reduced by the amount ve from what it would otherwise be if RE were not present. This reduction in load voltage due to RE is called degeneration. Using the same derivation that was used to show that the voltage gain Av equals -rL/re when the only resistance in the emitter circuit was re, we can show that the voltage gain when the emitter circuit resistance is re + RE is given by

Av = -rL / (re + RE)     (6-20)

Since the denominator in equation 6-20 now includes RE, it is clear that the voltage gain is considerably smaller than it would be if RE were 0. Thus, equation 6-20 shows the extent to which gain is reduced by degeneration. In most practical circuits RE >> re, so a good approximation for the voltage gain is

Av ≈ -rL / RE     (6-21)

While it is desirable for RE to be as large as possible to achieve good bias stability, we see that large values of RE reduce the amplifier voltage gain. The fact
that voltage gain must be sacrificed for bias stability, or vice versa, is a good illustration of the trade-off principle that underlies all electronic design problems: it is inevitably necessary to trade one desirable feature for another. In other words, the improvement of one aspect of a circuit's performance is achieved only at the expense of another aspect. The art of electronic circuit design is the ability to make reasonable compromises that satisfy all the requirements of a given application. One desirable implication of equation 6-21 is that it makes the ac voltage gain essentially independent of the transistor parameter re, and thus independent of the transistor used in the circuit as well as its bias point. Note that Av now depends only on external resistor values: Av ≈ -rL/RE = -(RC || RL)/RE. Thus, by sacrificing the magnitude of the voltage gain, we achieve predictability. In some applications it may be far more important to have an amplifier whose voltage gain will only vary from, say, 9.5 to 10.5 when different transistors are used than it would be to have a large gain that could vary from 90 to 200.

One way to retain the desirable effect of RE on bias stability and still achieve a large voltage gain is to connect a capacitor in parallel with RE, as shown in Figure 6-7. The capacitor should be large enough to have an impedance that is negligible in comparison to RE at all frequencies of the ac signal. When this is the case, the emitter is at ac ground, because the ac resistance in the emitter circuit is again simply re. The capacitor effectively "shorts out" RE for ac signals, and it is called an emitter bypass capacitor because it bypasses ac signals around RE to ground. Note that the dc input resistance at the base is unchanged, because the capacitor is an open circuit to dc, so bias stabilization is maintained.

Let us now consider the effect of the voltage-divider resistors R1 and R2 in Figure 6-6(a) on the ac performance of the amplifier. As shown in Figure 6-6(b), the parallel combination R1 || R2 = RB appears between base and ground as far as ac signals are concerned. It therefore reduces the overall input resistance of the amplifier stage, as shown by equations 6-22 and 6-23. As usual, the source resistance rS and rin(stage) form a voltage divider across the input of the amplifier; therefore, the reduction in rin(stage) caused by the presence of RB reduces the overall voltage gain of the amplifier. Note that RB also reduces the overall current gain, because it provides a path to ground for some of the ac input current that would otherwise enter the base. Clearly, R1 and R2 should both be as large as possible to prevent a serious deterioration in gain. Since it is desirable to have RB small from the standpoint of bias stability, we see that there is again a trade-off
between stability and gain. Equations 6-24 summarize the relations used to determine the bias conditions and ac performance of the bias-stabilized CE amplifier.
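As a rough numerical illustration of the gain/stability trade-off described above, the following sketch evaluates the approximate gain expressions for one assumed set of component values; the resistor and re values are invented for illustration and are not taken from the text.

```python
# Minimal sketch of the CE-amplifier gain expressions discussed above.
# All component values are assumed for illustration; they are not from the text.

def parallel(*rs):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

# Assumed values (ohms)
RC = 4_700      # collector resistor
RL = 10_000     # load resistor
RE = 1_000      # emitter resistor providing bias stabilization
re = 25         # transistor emitter resistance at the assumed bias point

rL = parallel(RC, RL)                 # ac collector load, rL = RC || RL

Av_degenerated = -rL / (re + RE)      # equation 6-20: RE left unbypassed
Av_bypassed = -rL / re                # RE shorted for ac by the bypass capacitor

print(f"rL = {rL:.0f} ohms")
print(f"Gain with RE unbypassed (eq. 6-20): {Av_degenerated:.2f}")
print(f"Gain with RE bypassed:              {Av_bypassed:.1f}")
print(f"Approximation -rL/RE (eq. 6-21):    {-rL / RE:.2f}")
```

With these assumed values the unbypassed gain is only about -3, the bypassed gain is roughly -128, and the -rL/RE approximation differs from equation 6-20 by only a few percent, which is the predictability the text refers to.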
|
Here we will consider whether temperamental differences are related to other aspects of children’s development. It must be emphasized that temperament is concerned with individual differences and therefore the impact on development centres on associations between temperament and variations in children’s cognitive and social development. There are several ways in which this can occur and these will be considered in turn.
Direct effect of temperament on development
A child with a short attention span and who is very impulsive is likely to experience difficulties in learning situations either at home or at pre-school groups (Tizard and Hughes, 1984). This example shows that temperamental differences may have a pervasive effect on children’s cognitive and social development through their impact on behavioural control and responsivity. In older children Keogh (1982) has identified a three factor model of temperament that is related to behaviour in school and which has implications for learning. The factors are Task Orientation, Personal-Social Flexibility and Reactivity.
Clearly factors such as task orientation will have a direct impact on the child’s ability to gain from learning experiences. Other temperamental influences will have more indirect effects on academic attainment. For example, reactivity is more likely to influence pupil–teacher and pupil–pupil interaction and thereby the social context within which learning takes place.
Direct effect of child temperament on parents
One of the central concepts in current thinking about child development is that of the child influencing its own development, i.e. not just being a passive receiver of externally determined experiences. Bell (1968) and Sameroff and Chandler (1975) are widely recognized as bringing this transactional model to the fore. Under this model the child plays a significant role in producing its own experiences both directly by its own selection of activities but, more importantly for the young child, by the influence its behaviour has upon caretakers (Sameroff and Fiese, 1990).
Indirect effect via ‘goodness of fit’
There has been a strand of thinking linked with the study of temperament that has emphasized that the significance of individual differences in temperament has to be considered in relation to specific environments. A child who is very low on adaptability and very high on rhythmicity will have a more aversive experience if cared for by parents who are very erratic in their pattern of child care. The same child will be well suited to parents who are more regular in their routines of eating and sleeping. This suggests that the impact of temperament on development has to be analysed as an interaction between the child's characteristics and features of the environment, including parenting.
There have been several temperament theorists who have taken this position.
One of the most extensive research studies with this goodness of fit orientation is that of Lerner and colleagues:
The ‘goodness of fit’ concept emphasizes the need to consider both the characteristics of individuality of the person and the demands of the social environment, as indexed for instance by expectations or attitudes of key significant others with whom the person interacts (e.g. parents, peers or teachers). If a person’s characteristics of individuality match, or fit, the demands of a particular social context then positive interactions and adjustment are expected. In contrast, negative adjustment is expected to occur when there is a poor fit between the demands of a particular social context and the person’s characteristics of individuality. (Lerner, et al., 1989, p. 510)
As an illustration of this notion of the goodness of fit between the child’s temperament and parental behaviour Lerner et al. (1989) discussed some of the evidence concerning temperament and maternal employment outside the home. Of course a wide variety of social and economic pressures will be influencing the decision to work outside the home. However, in addition they suggest that there could be two plausible routes whereby difficult temperament could influence mothers’ decisions on whether to work outside the home. The first could be that mothers find the problems of rearing the child with difficult temperament too aversive and therefore opt to go out to work to avoid the hassles of daily child care.
The second route could be that the difficult child is so unpredictable in its eating and sleeping habits and protests intensely when left with unfamiliar people that the mother feels constrained not to go out to work because the child cannot fit in with the externally required constraints of the mother attending the work place at fixed times for fixed periods.
The goodness of fit approach suggests that which of these processes operates will depend on the fit between the child's temperament and the mother's tolerance. It will not be possible to predict the consequences of difficult temperament on the mother's decision to return to work without knowledge of her attitudes towards child rearing and towards time keeping at work.
Lerner and Galambos (1985) found that mothers of children with difficult temperament tended to have more restricted work histories than mothers of other children.
One problem with this finding is that mothers’ reports on their infants’ ‘difficulty’ may be biased by factors that also affect work performance, such as depression. Hyde et al. (2004) examined this possibility in a study which found that the consensus infant temperament judgements of fathers and mothers were still a good predictor of mothers’ work outcomes. This study also found evidence that a mediating factor between infant temperament and maternal work outcome is maternal mood: difficult infants are likely to make mothers more depressed and diminish their sense of competence, thus affecting their work performance. The Lerner and Galambos (1985) study also found that it seemed to be harder for parents to make satisfactory day-care arrangements for difficult infants.
Indirect effect via susceptibility to psychosocial adversity
Temperament may also be related to differences in vulnerability to stress. Not all children are adversely affected by the experience of specific stresses, such as admission to hospital. Pre-school children repeatedly hospitalized are at risk for later educational and behavioural difficulties but only if they come from socially disadvantaged backgrounds (Quinton and Rutter, 1976).
It has proved more difficult to establish whether temperament does influence susceptibility to adverse experiences. Dunn and Kendrick (1982) have shown that an older child’s response to the arrival of a new sibling is systematically related to their temperament as measured whilst their mother was pregnant.
Most children respond to this event with some upsurge of behavioural disturbance, such as an increase in demands for parental attention or in crying. Which behavioural response is shown is related to prior temperament. Unfortunately their data do not suggest any clear pattern of any one aspect of temperament being more significant than any other. However, there were indications that increases in fears, worries and ‘ritual’ behaviours were associated with a high degree of temperamental Intensity and Negative Mood measured before the arrival of the second child.
Indirect effect on range of experiences
An important aspect of the transactional model of development is that as children become older they increasingly come to influence the range of environments they encounter and the experiences these create. During infancy, children with different temperament styles evoke different responses from the people they encounter, for example, active, smiling infants are more likely to be smiled at and played with than passive unresponsive infants. As children become more mobile and more independent they are able to select for themselves between alternative experiences, for example, a shy, behaviourally-inhibited child may avoid social encounters. This may accentuate temperamental characteristics: the avoidance of meeting other people prevents the child from becoming socially skilled and therefore more reluctant to engage in social behaviour in the future. This may have a wider impact on their development. For example, Rutter (1982) has demonstrated the way impulsive, active children are more likely to experience accidents, presumably as a result of their selecting more risky environments to play in.
These alternative mechanisms for the impact of temperament on the environments the child experiences can be classified into three types of gene-environment correlation. Scarr and McCartney (1983) have suggested that children’s genetic make-up comes to influence the environments they experience through three routes. These can be illustrated for temperament. One is passive gene-environment correlations which are produced when the child is being cared for by parents who share similar temperaments to the child. A child with a high intensity of reaction is more likely than other children to be cared for by a parent who has a similarly high intensity of reaction. Such parent–child pairs are likely to be creating experiences for the child which will be eliciting much aversive stimulation for the child. Evocative gene environment correlations are created when the child’s behaviour evokes specific types of responses from carers. This was illustrated in the earlier example of sociable children evoking more social stimulation from carers. The third type is active gene-environment correlation which arises from the child actively seeking environments that suit its behavioural predispositions. Children with a low threshold of responsiveness are likely to seek less extreme and more predictable environments.
An important feature of the Scarr and McCartney theory is that they propose that as the child becomes older the mix of these correlations will change. Initially the passive and evocative correlations will dominate. The evocative effects will remain fairly constant. The significance of passive effects decline in importance as the child encounters a wider range of people than just primarily the parents. Clearly active gene-environment effects are likely to become dominant as the child has greater and greater freedom to select its own activities.
Attachment and temperament
An important aspect of children’s early development is the quality of their attachment to their caregiver. A widely-used, standardised way of assessing this is a laboratory procedure called the ‘Strange Situation Test’ (SST; Ainsworth et al., 1978), consisting of a series of separations and reunions of child, caregiver and a stranger. Depending on how children behave during these episodes, their attachment is classified as either ‘secure’ or ‘insecure’. Insecure classifications are further subdivided into ‘avoidant’ or ‘ambivalent’ categories.
These different attachment styles are seen as important because they are associated with variations in children’s subsequent development; secure attachment is generally associated with more positive outcomes. Since the formation of attachment is bound up with how an infant behaves towards the caregiver during the first year of life it would seem likely that infant temperament is a significant element. It is surprising, then, that although some research has found that infant irritability and negative emotionality are linked with the avoidant type of insecure attachment, numerous studies have found no evidence that infant temperamental differences are associated directly with secure versus insecure attachment classifications in typical development (Goldsmith and Alansky, 1987).
One feature of caregiver behaviour during child’s first year that has been widely found to influence attachment quality is ‘sensitivity’ (De Wolff and Van IJzendoorn, 1997), namely, the extent to which the caregiver is attentive to the infant’s state, behaviour and communication, and responds appropriately. It might be expected that caregiver personality differences would thus be found to be associated with infant attachment security, but here again few direct effects have been found (Egeland and Farber, 1984).
What has been found, however, is that the combination of child and caregiver individual characteristics does predict attachment security (Belsky and Isabella, 1988; Notaro and Volling, 1999) lending support to a transactional model of the process.
Mangelsdorf et al. (2000) studied 102 mother-infant dyads in Michigan, U.S.A., to examine the contributions of maternal and infant characteristics to infant attachment. When the infants were 8 months old, their temperaments were assessed in a laboratory-based set of tasks, their mothers completed personality questionnaires (MPQ) for themselves and IBQ questionnaires on their infants, and then completed a brief teaching task with their infants. At twelve months of age, each infant’s attachment security was assessed with the SST.
Neither mothers’ nor infants’ characteristics, taken alone, were good predictors of infants’ attachment classification. However, when the joint effects of both mother and infant factors were examined, it was found that infants were classed as securely attached if they showed more positive emotions and fewer fearful reactions in the temperament assessments, but only if their mothers also showed more positive emotionality. These secure infants were also rated low in the IBQ on activity level and the amount of distress they showed to novelty but, again, only if their mothers also rated high on Constraint (self-control, conventionality) in the MPQ.
The researchers comment on these findings that:
The results of this investigation suggest that any individual characteristic of either child or mother may be less important than the relationship context within which that characteristic occurs. Mangelsdorf et al. (2000) p. 188.
- Temperament can directly influence other aspects of development, for example, attentional variation has an impact on cognitive development.
- Temperamental variation influences the parent’s response to the child.
- The goodness of fit between a child’s temperament and parental style can have an impact on the child’s attachment and long-term social adjustment.
- Temperament can influence a child’s vulnerability to the adverse effects of life events.
- Temperament can have a marked effect on the type and range of experiences to which the child is exposed.
Ainsworth, M. D. S., Blehar, M. C., Waters, E. and Wall, S. (1978) Patterns of Attachment, Hillsdale, N.J., Lawrence Erlbaum.
Bell, R. Q. (1968) ‘A reinterpretation of the direction of effects in studies of socialisation’, Psychology Review, vol. 75, pp. 81–95.
Belsky, J. and Isabella, R. (1988) ‘Maternal, infant and social-contextual determinants of attachment security’ in Belsky, J. and Nezworski, T. (eds) Clinical Implications of Attachment, pp. 253–99, New York, Lawrence Erlbaum.
De Wolff, M. S. and Van IJzendoorn, M. H. (1997) ‘Sensitivity and attachment: a meta-analysis on parental antecedents of infant attachment’, Child Development, vol. 68, pp. 571–591.
Dunn, J. and Kendrick, C. (1982) ‘Temperamental differences, family relationships and young children’s response to change within the family’ in Porter, R. and Collins, G. (eds) Temperamental Differences in Infants and Young Children, pp. 1–19, CIBA Foundation Symposium No. 89, London, Pitman.
Egeland, B., & Farber, E. A. (1984) ‘Infant-mother attachment: factors related to its development and changes over time’, Child Development, vol. 55, pp. 753–771.
Goldsmith, H. H. and Alansky, J. A. (1987) ‘Maternal and infant temperamental predictors of attachment: a meta-analytic review’, Journal of Consulting and Clinical Psychology, vol. 55, pp. 805–16.
Keogh, B. K. (1982) ‘Children’s temperament and teachers’ decisions’ in Porter, R. and Collins, G. (eds) Temperamental Differences in Infants and Young Children, pp. 269–85, CIBA Foundation Symposium No. 89, London, Pitman.
Lerner, J. V. and Galambos, N. L. (1985) ‘Maternal role satisfaction, mother–infant interaction and child temperament’, Developmental Psychology, vol. 21, pp. 1157–64.
Lerner, J. V., Nitz, K., Talwar, R. and Lerner, R. M. (1989) ‘On the functional significance of temperamental individuality: a developmental contextural view of the concept of goodness of fit’ in Kohnstamm, G. A., Bates, J. E. and Rothbart, M. K. (eds) Temperament in Childhood, pp. 509–22, Chichester, John Wiley.
Mangelsdorf, S.C., McHale, J.L., Diener, M., Goldstein, L.H. and Lehn, L. (2000) ‘Infant attachment; contributions of infant temperament and maternal characteristics’, Infant Behaviour and Development, vol. 23, pp. 175–196.
Notaro, P. C., & Volling, B. L. (1999) ‘Parental responsiveness and infant-parent attachment: A replication study with fathers and mothers’, Infant Behavior and Development, vol. 22, pp. 345-352.
Quinton, D. and Rutter, M. (1976) ‘Early hospital admissions and later disturbances of behaviour: an attempted replication of Douglas’s findings’, Developmental Medicine and Child Neurology, vol. 18, pp. 447–59.
Rutter, M. (1982) ‘Temperament: concepts, issues and problems’ in Porter, R. and Collins, G. (eds) Temperamental Differences in Infants and Young Children, pp. 1–19, CIBA Foundation Symposium No. 89, London, Pitman.
Sameroff, A. J. and Chandler, M. J. (1975) ‘Reproductive risk and the continuum of caretaking casualty’ in Harrowitz, F. D., Scarr-Salapatek, S. and Siegel, G. (eds) Review of Child Development Research, pp. 187–24, Vol. 4, Chicago, University of Chicago Press.
Sameroff, A. and Fiese, B.H. (1990) ‘Transactional regulation and early intervention’ in S. J. Meisels and J. P. Shonkoff (eds) Handbook of Early Childhood Intervention, pp. 119–149, New York, Cambridge University Press.
Scarr, S. and McCartney, K. (1983) ‘How people make their own environments: a theory of genotype-environment effects’, Child Development, vol. 54, pp. 424–35.
Tizard, B. and Hughes, M. (1984) Young Children Learning: talking and thinking at home and at school, London, Fontana.
This extract from course ED209 is © Open University 2005
|
A passive heat sink solar greenhouse system is a natural heating process that has been utilized for centuries. Passive means that there are no active parts of the solar heating system, and if built correctly, this system should work on its own forever. A heat sink is a storage space that one fills with heat-absorbent material such as pumice stone, concrete blocks, water or fire bricks. There are also materials called phase change materials designed to store and release heat energy the same way these less expensive materials do. The absorbent materials are placed within or used to build a main wall of the greenhouse that gets the most sunlight. As the sunlight hits the wall, it warms the absorbent material within. At night, when the greenhouse becomes cool, the material will still give off residual heat, keeping the greenhouse warm. This system works both ways, also keeping the greenhouse cool in times of extreme summer heat.
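To get a feel for how much heat such a thermal mass can hold, the following sketch (an illustration, not part of the original article) applies the basic sensible-heat relation Q = m·c·ΔT to an assumed volume of water and of concrete; the volume, temperature swing, and material properties are assumptions chosen only to show the comparison.

```python
# Rough comparison of heat stored in two common heat-sink materials.
# Assumptions: 1 cubic meter of material, a 10 °C day/night temperature swing,
# and typical handbook values for density and specific heat capacity.

MATERIALS = {
    # name: (density kg/m^3, specific heat J/(kg*K))
    "water":    (1000, 4186),
    "concrete": (2300, 880),
}

VOLUME_M3 = 1.0        # assumed volume of storage material
DELTA_T_C = 10.0       # assumed temperature swing between day and night

for name, (density, c_p) in MATERIALS.items():
    mass = density * VOLUME_M3
    stored_joules = mass * c_p * DELTA_T_C          # Q = m * c * dT
    stored_kwh = stored_joules / 3.6e6              # convert J to kWh
    print(f"{name:8s}: {stored_kwh:.1f} kWh released per {DELTA_T_C:.0f} °C swing")
```

Under these assumptions a cubic meter of water gives back roughly twice as much heat per degree of cooling as the same volume of concrete, which is why water barrels are a popular heat-sink choice despite being harder to build into a wall.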
All greenhouses are built to absorb solar energy. The glazed plastic or glass allows sunlight to pass through, heating the air inside the space and allowing for longer growing seasons. The problem with that simple design is that it works only in the daytime, when sunlight is hitting the glass or plastic. To keep your crops warm overnight or in cold weather, you must find a way to store the heat energy and use it later.
An active system is more effective, but it uses electricity and therefore can become too expensive for use in a small home greenhouse. These systems are more appropriate in large greenhouses, where a significant number of crops are being raised. Similar to the passive system, active heat sinks also utilize absorbent materials such as water or brick. A large hole is dug out beneath the greenhouse and filled with the absorbent material. A fan, possibly powered with a rooftop solar panel, is used to pump air through the heat sink, warming the material. The heat is stored in the chamber for nighttime and cold weather usage. Because this system allows the user to store the heat in chambers rather than against the wall, much more heat can be stored for later use. This type of system can also be accomplished with water in the place of air. In this type of system, water is pumped through outdoor pipes and allowed to collect solar heat energy. Then it is pumped through the chamber, giving heat to the absorbent material. At night, the water is again pumped through the heated chamber to gather heat and then through pipes in the greenhouse, distributing the heat energy.
Solar space heaters can raise the temperature in a greenhouse significantly. A simple, efficient heater can be made from aluminum cans, Plexiglas and a wooden frame. These heaters work only in direct sunlight, but they are especially useful on cold, sunny winter days. The cans are painted black to absorb the maximum amount of heat and glued into the wooden frame so that as much of their surface as possible faces direct sunlight. Plexiglas, or another material that lets light and heat pass but not air, is glued onto the frame so no air escapes. Air enters the solar heater at the bottom of the frame; as it passes through and around the sun-heated cans, it warms much as it would in a convection oven. The warmed air exits through the top of the frame, either directly into the greenhouse or through a short tube that directs it there. If air movement is a problem, a rooftop solar panel can run a small fan to push the air through the heater or circulate it around the greenhouse.
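A rough way to size a pop-can heater like this is to multiply the panel area by the strength of the sunlight hitting it and by the fraction of that light that actually ends up warming the air stream. The sketch below does that and then estimates how much hotter the air leaves the top than when it entered; the panel size, sun intensity, efficiency and airflow figures are assumptions for illustration, not measurements.

```python
# Quick sizing estimate for a DIY pop-can solar air heater:
# thermal output ~= panel area * incident sunlight * collection efficiency,
# and the outlet temperature rise depends on how fast air moves through it.
# Panel size, sun intensity, efficiency and airflow are illustrative assumptions.

AIR_DENSITY = 1.2         # kg/m^3
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def collector_output_watts(area_m2: float, sun_w_per_m2: float, efficiency: float) -> float:
    """Heat delivered to the air stream by the collector."""
    return area_m2 * sun_w_per_m2 * efficiency

def outlet_temp_rise_c(power_w: float, flow_m3_per_s: float) -> float:
    """Temperature rise of the air passing through the heater."""
    return power_w / (flow_m3_per_s * AIR_DENSITY * AIR_SPECIFIC_HEAT)

if __name__ == "__main__":
    # Example: a 0.5 m^2 frame of blackened cans, bright winter sun (~800 W/m^2),
    # and roughly half of that light ending up as heat in the air stream.
    power = collector_output_watts(area_m2=0.5, sun_w_per_m2=800, efficiency=0.5)
    rise = outlet_temp_rise_c(power, flow_m3_per_s=0.01)  # gentle convective flow
    print(f"Heater output: ~{power:.0f} W")
    print(f"Air leaving the top runs ~{rise:.0f} C warmer than the air entering")
```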
|
Publication Date: December 19, 2008
Series: Language and Literacy Series
Helping students master a broad range of individual words is a vital part of effective vocabulary instruction. Building on his bestselling resource The Vocabulary Book, Michael Graves’s new book describes a practical program for teaching individual words in the K–8 classroom. Designed to foster effective, efficient, and engaging differentiated instruction, Teaching Individual Words combines the latest research with vivid illustrations from real classrooms. Get ready to bridge the vocabulary gap with this user-friendly teaching tool!
Michael F. Graves is Professor Emeritus of Literacy Education at the University of Minnesota and a member of the Reading Hall of Fame. He is the author of the bestselling book, The Vocabulary Book: Learning and Instruction.
"Michael Graves shows once again why he is one of our leading lights in vocabulary instruction.”
—Claude Goldenberg, Stanford University, School of Education
“Does the world need another book on vocabulary instruction? Yes, it needs this one, for no other text available addresses in detail how teachers can select, teach, and assess the meanings of specific words.”
— From the Foreword by James F. Baumann, University of Wyoming
“This comprehensive and practical resource dives into the heart of word learning and demonstrates how to maximize the benefits of teaching individual words. Dr. Graves skillfully highlights concrete examples of vocabulary instruction as rich and powerful tools that can be easily incorporated into a variety of educational settings. I highly recommend it.”
—Kari D. Ross, Curriculum Facilitator and Literacy Specialist, Centennial Schools, Minnesota
“Teaching Individual Words is a must-have book for any educator's bookshelf. In addition to clear models for instruction, it makes the often neglected point that teachers must also use sophisticated language themselves in order to encourage youngsters to do the same. Dr. Graves's readable style and practical knowledge make this book easy to use and sure to have an impact.”
—Linda Diamond, CEO, Consortium on Reading Excellence
|
USING COMMUNICATION TOOLS TO FOSTER CROSS-CULTURAL UNDERSTANDING
Today, when people are more and more likely to interact and work with members of other cultures, a new educational priority is fast emerging: the need for educators to provide students with the skills and knowledge that will enable them to communicate effectively across different cultures. Language teachers are in an excellent position to play a large role in this endeavor, since they teach both language and culture. However, more often than not, culture is relegated to the background of language classes, while the development of linguistic competence occupies front stage. The main thrust of the project described here is to reverse this equation and make culture the core of a language class, focusing on the development of students’ in-depth understanding of a foreign culture. The Cultura project, started in 1997 at MIT, has been designed to develop cross-cultural understanding between French and American cultures, but since it is essentially a methodology it is applicable to the exploration of any two cultures, and versions in other languages have been developed elsewhere. This paper will (1) set up the background and the context of Cultura; (2) define its goals and approach; (3) show how web-based resources and electronic communication tools connect and intersect in order to meet these objectives; (4) show how the use of these tools is bound to change the way culture is taught in the classroom.
The general focus throughout this article will be on the process that enables students to gradually and collaboratively construct and refine their understanding of the other culture both in and outside of class. Specific examples will be given of how students, with the help of their peers and the teacher, gradually develop into what Byram (1996) calls the “intercultural learner.”

CULTURA: ITS GOALS, TOOLS AND APPROACH
There are many ways to define culture. The goal of Cultura is to develop understanding of what Hall (1959, 1966) calls “the silent language” and “the hidden dimension,” namely, concepts, attitudes, values, ways of interacting with and looking at the world (Furstenberg, 2001). Teaching these aspects of culture is a huge challenge: how can we teach something that is essentially elusive, inaccessible, and invisible? It is all the more difficult as our own culture tends to be very opaque to ourselves, which means that we are faced with a double invisibility, so to speak. The question is how to make what is doubly invisible more apparent. What we need is an approach and some tools.
Approach and Tools
Very simply, the tools are a combination of the World Wide Web and its related communication tools. The approach is a comparative, cross-cultural one, whereby American students who are taking an intermediate French class at MIT and French students who are taking an English class at a French university, or Grande Ecole, under the direction of our partner in France, work together during the larger part of a semester. Sharing a common calendar and a common website, students, at first individually and then collaboratively, analyze a variety of similar materials originating from both cultures that are presented in juxtaposition on the Web, and subsequently enter into cross-cultural dialogues about these materials via on-line discussion forums.
Figure 1 below shows the home page of the website (http://mit.edu/french/21f.303/spring2004) dedicated to Cultura. The website provides a virtual space where the Brooklyn Bridge in New York and the Pont Neuf in Paris connect and merge. The accompanying phrase by Marcel Proust (1923) fully epitomizes what the project strives for -- “The only true exploration, the only real fountain of youth would not be to visit foreign lands but to possess other eyes and look at the universe through the eyes of others” (pp. 257-259).
The curve in the shape of a C is visibly reminiscent of the first letter of Cultura, but it also represents an itinerary with several modules that represent stages on the students’ road to exploration and discovery.
Figure 1. Home page of the Cultura Project
The overall methodology is a constructivist one whereby students, actively engaged in a process of discovery, constantly perceive and create new connections and gradually construct, with their peers and the help of the teacher, their own understanding of the subject matter (Brooks & Brooks, 1993).
Within Cultura, the French and American students engage in the following activities:
1. They compare a variety of similar French and American materials that are presented in juxtaposition on the Web. These materials include answers to a series of questionnaires the American and French students fill out at the beginning of the semester and which they will subsequently analyze. The students’ field of investigation broadens as they compare national polls allowing them to put their findings into a much larger socio-cultural framework; discuss a French film and its American remake, adding a visual dimension to their explorations; and read French and American press, looking at the way the same international event is covered, for instance, in Le Monde and the New York Times. A new module has been added recently, allowing students to exchange images of their respective cultural realities around topics selected by them. Finally, the journey ends with students having access to a library of historical, literary, anthropological, and philosophical texts by French and American authors writing about each others’ cultures, as well as primary fundamental documents such as the Bill of Rights and the Déclaration des Droits de l’Homme, allowing them to have direct access to “expert” texts and to find validation of their own findings.
2. At the core of the project are on-line discussion forums that allow students to exchange in writing their viewpoints and perspectives on all the subjects at hand. Through these cross-cultural exchanges, written in their own native language, students share observations, send queries, and answer their counterparts’ questions with the goal of deepening their understanding of their transatlantic partners’ perspectives, in a continuous, reciprocal and collaborative process of construction of each other’s culture.
Stage 1: Filling out the questionnaires
Understanding a foreign culture is a process, often a lifelong one. And like any process, journey or exploration, it needs to start somewhere. In the Cultura project, it starts with students anonymously answering a series of three questionnaires that are written in English and in French and are exact mirrors of each other. The MIT students respond in English, while the French students respond in French.
1. Word Association Questionnaire asks students to make associations with such words as school, police, money, elite, responsibility, individualism, freedom, work, success, power, and so forth.
2. Sentence Completion Questionnaire asks students to complete such sentences as “A good parent is someone who...”, “A good citizen is someone who …”, “A well-behaved child is someone who…”, “A good boss is someone who…”, “A good job…”, and so forth.
3. Hypothetical Situations Questionnaire asks students how they would react in hypothetical situations such as: “You are walking down the street in a big city. A stranger of the opposite sex approaches you with a big smile.” “You see a mother in a supermarket slap her child.” “You have been waiting in line for ten minutes and someone steps right in front of you.”
Stage 2: Analyzing the questionnaires
A few days later, student answers appear on the website, in a side-by-side format, allowing differences to immediately emerge and become visible. Below are some examples taken from different semesters.
The juxtaposition of the words banlieue/suburbs (each of which is the only possible translation of the other) clearly highlights how impossible it is to interchange the realities behind them. American and French students’ associations to these words are available at http://web.mit.edu/french/culturaNEH/spring2004_sample_site/answers/banlieue_w.htm. Whereas American students associate the word suburbs with words such as white, clean, houses, families, backyard, and white picket fence, the French associate banlieue with ghettos, chômage (“unemployment”), violence, and danger, a description much more often associated with an American inner city. Even though the words suburbs/banlieue cannot be translated in any other way, their representations cannot be transposed. The very process of juxtaposition allows their opposite underlying socio-economic realities to immediately emerge and become visible.
To support the claim that such cross-cultural encounters can indeed be powerful allies in the understanding of a foreign culture, a quote from the Russian philosopher Mikhail Bakhtin (1986) seems appropriate. He wrote, “A meaning only reveals its depths once it has encountered and come into contact with another foreign meaning” (pp. 6-7). This, we will see, applies to many other words, concepts, and situations.
In comparing and analyzing responses to the words individualism/individualisme, students discover much to their own dismay, that, even though the words are the same, the French and the Americans have radically different views of the concept. Whereas for the Americans, the connotations are in general extremely positive (free, freedom, independence, strength, pride), the French responses are replete with very negative words such as ego, egotism, egocentrism, indifference, and loneliness (http://web.mit.edu/french/culturaNEH/spring2004_sample_site/answers/individualisme_w.htm). A comparison of responses to what the French and MIT students consider to be a good parent reveals that for the French un bon parent is someone who educates in the French sense of the word, i.e., who guides, helps, and instills values, whereas a good parent for Americans is someone who is loving and caring (http://web.mit.edu/french/culturaNEH/spring2004_sample_site/archives/2001f/answers/bonparent.htm).
A detailed analysis of French and American responses to a situation at the bank, where “An employee reads your name on the check and addresses you with your first name,” reveals to the students a wealth of cultural information and insights. The French answers, as opposed to the American ones, highlight the high value the French place on respect for social norms and conventions when dealing with strangers in a professional context. While there are responses on both sides indicating that this would not pose a problem, the number of French students who react negatively to first-name use is much higher. Their reactions range from mild disapproval, such as “It’s a little bit too familiar, a little too friendly,” to a more assertive stance, often tinged with a touch of irony or sarcasm, where they feel the violator of social norms needs to be taught a lesson: “I would ask him whether we know each other,” “I would tell [emphasis mine] him we don’t know each other,” or “We did not raise pigs together.” Some French students even express outrage, writing “He owes me respect and has to call me by my family name” and “His position does not authorize him to do such a thing.” All of these examples make it abundantly clear to the American students that in France one does not/must not interact on a first-name basis if one does not know someone personally, highlighting along the way the significance of social hierarchy and giving a very clear signal that there are strict boundaries that are not to be crossed (http://web.mit.edu/french/culturaNEH/spring2004_sample_site/archives/2004s/answers/bank_r.html).
It is very important to emphasize, from a methodological point of view, that these observations are made by the students on their own, as they analyze the responses very much in the same way as cultural anthropologists analyze raw data. They are the ones to discover in a very concrete way the high value the French place on formality, as opposed to teachers telling them about it in the abstract and in a cultural vacuum. This constructivist approach is radically different from the traditional learning environment in which the teacher is the one telling students how the French view the notion of individualism. It is also important to note that students do not necessarily see everything. They often take a perfunctory look and miss very important points. And this is where the teacher has the crucial task of providing students with guidelines on how to compare documents, such as counting the number of occurrences of the same word, creating categories, looking for cross-language equivalents, and noticing responses that appear on one side only.
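To make the counting guideline concrete, the same kind of tally can also be automated. The short Python sketch below counts word occurrences in two sets of responses and prints the most frequent words in juxtaposition, much as the answers appear side by side on the site. The sample responses in it are invented placeholders, not actual Cultura data, and in the course itself this analysis is deliberately done by the students themselves.

```python
# Illustrative sketch of the "count occurrences of the same word" guideline:
# tally response words from each side of the Atlantic and print them in
# juxtaposition. The sample responses are invented placeholders, not real data.

from collections import Counter

def tally(responses):
    """Count individual words across a list of free-text responses."""
    words = []
    for response in responses:
        words.extend(w.strip(".,!?").lower() for w in response.split())
    return Counter(words)

american_suburbs = ["white picket fence", "clean houses", "families", "quiet"]
french_banlieue = ["violence", "chomage", "ghettos", "danger", "violence"]

us_counts, fr_counts = tally(american_suburbs), tally(french_banlieue)

# Show the most frequent words from each side next to one another,
# the way answers appear side by side on the Cultura site.
for (us_word, us_n), (fr_word, fr_n) in zip(us_counts.most_common(3),
                                            fr_counts.most_common(3)):
    print(f"{us_word:<15}{us_n:>3}   |   {fr_word:<15}{fr_n:>3}")
```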
These discoveries, of course, will lead to very interesting conversations in the forums about issues of formality, hierarchy, of the public vs. the private sphere, of where and how one draws the line between formality and informality, and the circumstances surrounding the use of tu vs. vous.
Stage 3: Making connections among answers to the questionnaires
Students share their discoveries in the classroom. They are encouraged to make connections among the questionnaires and to uncover patterns across them. Working in groups, students exchange their individual observations with each other, and then summarize their findings on white boards. For an illustration of this process, click on http://web.mit.edu/french/culturaNEH/classroom/pages/class14.htm and http://web.mit.edu/french/culturaNEH/classroom/pages/class12.htm.
White boards play a vital role by making the observations visible to the whole class and allowing students to look across several sentences. Students then go from board to board, looking for commonalities in the responses (http://web.mit.edu/french/culturaNEH/classroom/pages/class19.htm). They draw arrows on the board, literally connecting different phrases (http://web.mit.edu/french/culturaNEH/classroom/pages/u18.htm) to see patterns emerge (http://web.mit.edu/french/culturaNEH/classroom/pages/class11.htm).
For instance, in the process of looking across responses to several questionnaires that asked for definitions of a good parent, a good doctor, or a good teacher, students discover that Americans tend to inject an affective slant into the relationships, whereas the French tend to look at them from a much more rational or distant point of view. Students noticed that the French tended to give responses pertaining to the role or function of that person. Un bon médecin and un bon prof are, above all, professionally competent, and a French good parent is someone who instills values in his/her children. Americans, on the other hand, seem to place a higher value on affective qualities. For them, a good parent loves unconditionally, a good doctor is caring, a good teacher is someone who can teach and [emphasis mine] care, who cares about his/her students, who deeply cares about the learning process, expressions that are actually very difficult to translate into French.
By analyzing and contrasting French and American attitudes towards work and school, for instance, students discover that Americans tend to put more emphasis on tangible rewards, results and achievements. For them, a good student is someone who gets good grades; a good job is one that pays well and gives promotions. French students, on the other hand, focus more on non-tangible rewards such as épanouissement “personal fulfillment”.
Like cultural archaeologists, students bring patterns to light and make initial connections that they will later attempt to confirm or revise in the light of new materials they will analyze.
Forums occupy a central position on the Cultura Web site, symbolizing how crucial they are in this cultural exploration (http://mit.edu/french/21f.303/spring2004). This is where students ask their counterparts for help in deciphering the meanings of some words or concepts, where they present their own hypotheses and points of view, ask for help in verifying their assumptions and hypotheses, raise issues, and respond to their partners’ queries.
As we designed Cultura, we made three important decisions concerning the forums: (1) they would be written in the students’ native language, (2) they would be asynchronous, and (3) the teachers should never interfere. The fact that students write in L1 is a frequently misunderstood aspect of Cultura, but it was a deliberate, carefully thought-out decision that seemed to us the only appropriate way to truly reach our stated objectives, i.e., the development of in-depth understanding of the other culture. Its benefits are threefold: (1) it puts all students on an equal linguistic footing; (2) students, not being bound by limited linguistic abilities, can express their views fully and in detail, formulate questions and hypotheses clearly, and provide complex, nuanced information; (3) student-generated texts provide the foreign partners with rich sources of authentic reading and, in turn, become new objects of linguistic and cultural analysis, highlighting the different ways in which cultures can be expressed. It is also important to note that these postings are done outside of class and do not take anything away from students’ contact time with the L2. On the contrary, the richness of language and ideas coming from L2 partners more than offsets what can at first be perceived as a disadvantage.
It also seemed to us that asynchronous forums would better serve our purposes as they allow for more deliberate and thoughtful reflection on the part of the students. Finally, we felt it was important that forums be led entirely by the students who would be free to take them in any direction. As teachers, we did not want to be seen as interfering or even acting as mediators. The direct link between the students was very important to us.
Sample forum entries
Each word, sentence, or situation offered for analysis led to a forum that provided yet another crucial resource for helping students read between the cultural lines.
The forum about the bank employee calling the client by the first name elicited a high level of interest on the part of the MIT students (some of them of Asian origin), who shared their personal experiences about formality vs. informality and who were very curious to know why the French students so strongly disapproved of the bank teller addressing them by their first name. A full transcript of this forum can be seen at http://web.mit.edu/french/culturaNEH/spring2004_sample_site/archives/2004s/forums/bank_r.html. Below are some sample entries.
Here is what Susanna (an Asian student) wrote:
Talking about formality vs. informality, Alicia, another MIT student, then shared her own experience and asked a series of very pointed questions:
Below are some other questions posed by MIT students:
Here is how a French student tries to clarify things. The following is a translation from the French:
Besides making the MIT students aware of the complex interplay between tu and vous and of the number of conditions that need to be met until one can start using the tu form, such remarks make that issue concrete and real for them.
The conversations in the forums bring rich and complex information, providing students with a valuable insider’s view that sheds welcome light on issues, such as the form of address, that often remain quite opaque. It is during these conversations that the students’ diverse ethnic backgrounds and experiences often emerge, providing yet another voice.
So far, students have been working solely with answers to the three initial questionnaires provided by their transatlantic partners. Even though they learned a lot from their partners, we felt it was important to broaden their horizons and give them access to a range of other materials.
National opinion polls
The next module, entitled Data/Chiffres, allows students to place the observations they have made up to this point within a national socio-cultural framework by giving them access to several American and French polling institutes (http://web.mit.edu/french/21f.303/spring2004/index_data.htm). Students are asked to explore statistics and opinion polls that will either confirm or contradict their earlier observations.
In their search, students might come across a poll that questioned the French about their notion of happiness (http://www.tns-sofres.com/etudes/pol/281004_bonheur_r.htm). In reading the answers to the question “Among the following items, which contribute most to your happiness?” students might observe that what counts most to the French is their family (52% put it first), while their professional life plays a much smaller role (only 21% believe that a fulfilling professional life contributes to their happiness). This particular poll might reinforce an observation American students had made in the analysis of the answers to the questionnaires: that family is extremely important to the French, whereas work is much less so. Looking at such polls is important because it validates the students’ own findings. This is further reinforced by forum discussions in which students query each other about the importance of work in their lives. Contradictions between the national opinion polls and the answers to the questionnaires generate even more interesting questions and discussions, with students trying to figure out what is real and what is not.
This module allows students to compare French films and their American remakes. It provides yet another entry into the world of cultural differences by adding a key visual dimension. We have worked for a very long time with the 1985 French film Trois Hommes et un Couffin and its 1987 American remake Three Men and a Baby. Students on both sides of the Atlantic watch the movies during the same week, after which they embark upon on-line discussions. The following example illustrates how students attempt to put together the cultural jigsaw puzzle by looking for confirmation and contradictions.
Both the French and the American students noticed that the police were treated very differently in the two films. In the French movie, the police are made fun of, and the main characters choose to assist the drug traffickers rather than the police, even helping them to escape. In the American movie, the very opposite happens with the characters helping the police capture the drug traffickers.
The comments below illustrate how both sets of students attempt to explain the reasons for the difference. The forum discussion can be viewed at http://mit.edu/french/culturaNEH/cultura2001/archives/int/forums/menandpolice.html.
Stéphane (comment 1) and Sebastien (comment 4) are wondering whether the French reaction has anything to do with the perceived and even legendary aversion of the French toward authority. To which, Allison (comment 4.1) reacts by writing:
Fabrice, a French student, attempts to explain it in comment 13, given in translation below:
This last remark by Fabrice, of course, throws yet another wrench into the works, as it implicitly warns American students that they might need to read between the cultural lines.
The Library/Bibliothèque module (http://web.mit.edu/french/culturaNEH/spring2004_sample_site/index_libr.htm) contains a variety of historical, literary, philosophical, and anthropological excerpts from primary and secondary texts by American authors about France, e.g., Edith Wharton and Polly Platt, or by French authors writing about the U.S., e.g., Simone de Beauvoir and Jean Baudrillard. This helps bring a multiplicity of intersecting perspectives on both cultures.
Read at the end rather than at the beginning of the students’ cultural journey, these excerpts take on a lot more significance. The texts help them assess how far they have come in the process of deciphering French culture. As a matter of fact, students are often surprised to discover in these texts points of view and insights that they themselves had developed, to the point where their perceptions, unbeknownst to them, match the findings of cross-cultural experts.
They may, for instance, come across an excerpt from de Tocqueville (1961) (http://web.mit.edu/french/21f.303/spring2004/index_libr.htm):
These words are an eerie echo of an earlier comment by Matthew, an MIT student, in the forum on individualism:
What extraordinary validation for this student! Unbeknownst to him, he had put his finger on the reasons why the French place so much emphasis on equal rights, as opposed to individual rights. This type of insight, if not commonplace, is not unusual either. It illustrates how deep and insightful some of the students’ comments are and how proficient they had become at identifying cultural features and making relevant connections, to the point where their perceptions match those of cross-cultural experts.
These historical and literary texts also serve to bring out the historical and philosophical roots of the cultural phenomena the students themselves have discovered. For instance when comparing the American Bill of Rights and the French Déclaration des Droits de l’Homme, American students discover the all-important Article 4 in the latter that reads:
This single phrase suddenly illuminates for the Americans the roots of French attitudes toward the notion that freedom is limited – a view that American students had first discovered at the beginning of the semester when they analyzed responses to the words freedom/liberté.
The reading of Déclaration des Droits de l’Homme often brings the discussion back to the earlier topic of freedom, but this time seen through a different lens. After reading it, Kamal, an MIT student, wrote:
This provoked the following response from R.H., a French student:
This response clearly illustrates yet another aspect of French culture, namely the importance of setting limits that cannot be transgressed, whether it has to do with raising a child, with saying tu at inappropriate times, or with government policies.
In the last two semesters, we added a feature that enables students to exchange not only opinions through written on-line forums, but also visual representations of their respective worlds, thus making the cultural reality they live in even more truly visible to their transatlantic peers. Not only do the students’ photos add a very concrete and essential dimension, but they themselves also become, like the forums, objects of analysis and form the basis for new discoveries and insights by providing yet another opportunity for making additional connections.
Students on each side of the Atlantic decide together which topics they want to illustrate, then upload their images to the Cultura website. Each image becomes the topic for subsequent on-line discussions. Thus far, a variety of topics have been illustrated, including a French banlieue and an American suburb, the subway in Boston and Métro in Paris, an American party and a French fête.
Below is an example of an exchange about the photos. One of the topics selected by students in the Spring of 2004 was the daily life of a student at MIT and one at the University of Paris II. One MIT student posted a picture of a food truck and in the accompanying message explained that it is a very popular option because the food is quite good and cheaper than anywhere else on campus. Caroline, a French student, reacted in this way. Below is a translation of her response:
At which point, Gaëtane, another French student, sent a picture of an array of very appetizing food at the “bon boulanger du coin” (corner bakery), explaining in great detail what she usually eats for lunch, along with how much it costs. Below is a translation of her response:
When comparing the photos documenting their daily lives taken by the MIT students with those taken by the French students, the Americans noticed, for instance, that the French tended to take and post more official photos of their class as a group and more exterior shots showing the outside of things, whereas the Americans showed the inside of a dorm room and individuals in very personal poses, such as sleeping. The Americans related this observation to the tendency, which they had noticed earlier, of the French to keep a certain distance between their public and private lives. They also related it to another observation they had made earlier about the French avoidance of the first-person pronouns je and moi.
It was evident that the American students reflected on the images the French were projecting about themselves and not simply on the content of these images, revealing their capacity to look beyond the surface, to make connections and to see the larger picture, thus becoming what Michael Byram (1997) defines as a true intercultural speaker with the term speaker being taken here in the broader sense of the word:
It is now obvious that the use of electronic media generates a new pedagogy that totally changes the roles of student and teacher. Students now take center stage and are involved in the very process of learning by exploring, analyzing and constructing, individually as well as collaboratively, their understanding of the foreign culture. They themselves research, question and revisit issues, make connections, and note contradictions, constantly expanding and refining their knowledge and understanding in the light of new materials.
This constitutes a major shift in educational practice. As teachers, we often want to make sure that students understand what we want them to understand, see what we want them to see, and learn what we want them to learn. We want to be in control of the end result. A constructivist approach, such as Cultura, reverses this notion by not expecting students to tell us what we want them to know, but to tell us what they have found, seen, observed and learned. But for that to happen, we need to make the process transparent. We also need to design appropriate interactive and collaborative classroom activities, as well as evaluation tools that focus on the process, such as log books and papers where students are asked to record their observations and regularly synthesize their findings, making sure along the way that they probe deeper and present more cogent arguments.
What interactive technologies do best is to focus the students’ attention on the process of learning, of acquiring knowledge and of building that knowledge. We believe that process to be central. Let us make sure that in the classroom our students do observe, analyze, create links, raise questions, develop new insights, arrive at new interpretations, synthesize and constantly refine their understanding of complex matters in a cooperative pursuit of knowledge. If that happens, the classroom will then become a place where teachers and learners work side by side, a place where teaching and learning truly come together, for the benefit of both. This new reality is made possible by technology, certainly. But it is also the result of the ways in which technology can expand and give new meaning to what it means to teach.
The Cultura Project was started in 1997, funded first by a grant from the Consortium for Foreign Language Teaching and Learning based at Yale University and then by a subsequent three-year grant from the National Endowment for the Humanities. It was developed by Sabine Levet (now at Brandeis), Shoggy Waryn (who now teaches at Brown), and myself. Cultura has allowed us to amass a large corpus of data and create archives that are accessible to everyone at http://web.mit.edu/french/culturaNEH/spring2004_sample_site/index_arch.htm.
Other French Cultura projects have been developed or are being developed at Smith College and the University of Chicago.
Versions of Cultura have also been developed in German at the University of California at Berkeley and at Santa Barbara, in Italian at the University of Pennsylvania, in Russian and Spanish at Brown University and Barnard University. A Japanese version is in the process of being developed at the University of Washington.
ABOUT THE AUTHOR
Gilberte Furstenberg is a Senior Lecturer in French at MIT. She was born in France and educated in France at the Faculté des Lettres of Lille where she received her Agrégation. She is the main author of two award-winning multimedia programs: A la Rencontre de Philippe and Dans un Quartier de Paris. She has been teaching French language and culture courses at MIT for the last 25 years.
Bakhtin, M. (1986). Response to a question from the Novy Mir editorial staff. In M. Bakhtin, Speech genres & other late essays. Austin, Texas: University of Texas Press.
Brooks, J. & Brooks, M. (1999). In search of understanding: The case for constructivist classrooms. The Association for Supervision & Curriculum Development.
Byram, M. (1997). Teaching and assessing intercultural communicative competence. Clevedon, England: Multilingual Matters.
De Tocqueville, A. (1961). De la Démocratie en Amérique. Paris: Gallimard.
Furstenberg, G., Levet, S., English, K., & Maillet, K. (2001). Giving a voice to the silent language of culture: The Cultura Project. Language Learning & Technology, 5(1), 55-102. Retrieved September 15, 2005 from http://llt.msu.edu/vol5num1/furstenberg/default.html.
Hall, E. (1959). The silent language. Garden City, NY: Doubleday.
Hall, E. (1966). The hidden dimension. Garden City, NY: Doubleday.
Proust, M. (1923). La prisonnière. Paris: Gallimard.
Copyright © 2005 National Foreign Language Resource Center
Articles are copyrighted by their respective authors.
|
A carcinogen is any substance or agent that can cause cancer. A carcinogen can be a chemical, radiation, radionuclide (an atom with an unstable nucleus), virus, hormone, or other agent that is directly involved in the promotion of cancer or in the facilitation of its propagation. This may be due to genomic instability or to the disruption of cellular metabolic processes. The process of induction of cancer is called carcinogenesis (Bender and Bender 2005).
Common examples of carcinogens are tobacco smoke, inhaled asbestos, benzene, hepatitis B, and human papilloma virus. Ultraviolet light from the sun is tied to skin cancer. Several radioactive substances are considered carcinogens, but their carcinogenic activity is attributed to the radiation, for example gamma rays or alpha particles, that they emit.
The human body is a masterpiece of harmoniously interrelated cells, tissues, organs, and systems, all working together in coordination. Cancer represents a severing of this intricate coordination. Reducing exposure to carcinogens touches upon personal and social responsibility. There is a personal responsibility not to expose oneself unnecessarily to known carcinogenic agents, such as smoking tobacco. There also is a responsibility on behalf of society to identify cancer-causing agents, doing assessments for them, implementing laws to remove potential carcinogens, and providing educational programs to warn the public, despite the high costs of such efforts.
Cancer is a disease characterized by a population of cells that grow and divide without respect to normal limits, invade and destroy adjacent tissues, and may spread to distant anatomic sites through a process called metastasis. These malignant properties of cancers differentiate them from benign tumors, which are self-limited in their growth and do not invade or metastasize (although some benign tumor types are capable of becoming malignant).
Nearly all cancers are caused by abnormalities in the genetic material of the transformed cells. These abnormalities may be due to the effects of carcinogens, such as tobacco smoke, radiation, chemicals, or infectious agents. Other cancer-promoting genetic abnormalities may be randomly acquired through errors in DNA replication, or are inherited, and thus present in all cells from birth.
Carcinogens may increase the risk of cancer by altering cellular metabolism or by damaging DNA directly in cells, which interferes with biological processes and ultimately induces uncontrolled, malignant cell division. Usually, DNA damage that is too severe to repair leads to programmed cell death, but if the programmed cell death pathway is damaged, the cell cannot prevent itself from becoming a cancer cell.
Genetic abnormalities found in cancer typically affect two general classes of genes: Oncogenes and tumor suppressor genes. When these genes are mutated by carcinogens they contribute to malignant tumor formation (Narins 2005).
Oncogenes ("onco-" means tumor) are altered versions of normal genes, called proto-oncogenes, that encode proteins involved in functions such as regulating normal cell growth and division (Narins 2005). When a proto-oncogene is mutated into an oncogene by exposure to a carcinogen, the resultant protein may lose the ability to govern cell growth and division, resulting in unrestrained and rapid cell proliferation (Narins 2005). In addition to driving hyperactive growth and division, activated cancer-promoting oncogenes can give cells new properties such as protection against programmed cell death, loss of respect for normal tissue boundaries, and the ability to become established in diverse tissue environments. Numerous cancers are associated with mutation of one particular proto-oncogene, ras, which encodes a protein that acts to regulate cell growth (Narins 2005).
Tumor suppressor genes encode proteins that commonly tend to repress cancer formation. When they are inactivated by carcinogens, this results in loss of normal functions in those cells, such as accurate DNA replication, control over the cell cycle, orientation and adhesion within tissues, and interaction with protective cells of the immune system.
Carcinogens can be classified as genotoxic or nongenotoxic.
Genotoxic means the carcinogens interact physically with the DNA to damage or change its structure (Breslow 2002). Genotoxins cause irreversible genetic damage or mutations by binding to the DNA. Genotoxins include chemical agents like N-Nitroso-N-Methylurea (MNU) or non-chemical agents such as ultraviolet light and ionizing radiation. Certain viruses can also act as carcinogens by interacting with DNA.
Nongenotoxic carcinogens change how DNA expresses its information without directly altering its structure, or they may create a situation in which the cell or tissue is more susceptible to DNA damage from another source. Nongenotoxins do not directly affect DNA but act in other ways to promote growth. These include hormones and some organic compounds (Longe 2005). Examples of nongenotoxic carcinogens or promoters are arsenic and estrogen (Breslow 2002).
Some carcinogens also may interfere with cell division, by changing the structure or number of chromosomes in new cells after cell division (Breslow 2002). An example of this is nickel.
The following is the classification of carcinogens according to the International Agency for Research on Cancer (IARC):
- Group 1: carcinogenic to humans
- Group 2A: probably carcinogenic to humans
- Group 2B: possibly carcinogenic to humans
- Group 3: not classifiable as to its carcinogenicity to humans
- Group 4: probably not carcinogenic to humans
Further details can be found in the IARC Monographs.
Carcinogens essentially produce cancer by changing the information cells receive from their DNA, resulting in accumulation of immature cells in the body, rather than the cells differentiating into normal, functioning cells.
There are many natural carcinogens. Aflatoxin B1, which is produced by the fungus Aspergillus flavus growing on stored grains, nuts, and peanut butter, is an example of a potent, naturally-occurring microbial carcinogen. Certain viruses such as hepatitis B and human papilloma viruses have been found to cause cancer in humans. The first one shown to cause cancer in animals was Rous sarcoma virus, discovered in 1910 by Peyton Rous.
Benzene, kepone, EDB, asbestos, and the waste rock of oil shale mining have all been classified as carcinogenic. As far back as the 1930s, industrial and tobacco smoke were identified as sources of dozens of carcinogens, including benzopyrene, tobacco-specific nitrosamines such as nitrosonornicotine, and reactive aldehydes such as formaldehyde—which is also a hazard in embalming and making plastics. Vinyl chloride, from which PVC is manufactured, is a carcinogen and thus a hazard in PVC production.
DNA is nucleophilic; therefore, soluble carbon electrophiles are carcinogenic, because DNA attacks them. For example, some alkenes are converted by human enzymes into an electrophilic epoxide. DNA attacks the epoxide and becomes permanently bound to it. This is the mechanism behind the carcinogenicity of benzopyrene in tobacco smoke, other aromatics, aflatoxin, and mustard gas.
After the carcinogen enters the body, the body makes an attempt to eliminate it through a process called biotransformation. The purpose of these reactions is to make the carcinogen more water-soluble so that it can be removed from the body. But these reactions can also convert a less toxic carcinogen into a more toxic one.
Co-carcinogens are chemicals which do not separately cause cancer, but do so in specific combinations.
CERCLA (the Comprehensive Environmental Response, Compensation, and Liability Act, the environmental law enacted by the United States Congress in 1980) identifies all radionuclides as carcinogens, although the nature of the emitted radiation (alpha, beta, or gamma, and the energy), its consequent capacity to cause ionization in tissues, and the magnitude of radiation exposure determine the potential hazard. For example, Thorotrast, an (incidentally radioactive) suspension previously used as a contrast medium in x-ray diagnostics, is thought by some to be the most potent human carcinogen known because of its retention within various organs and persistent emission of alpha particles. Both Wilhelm Röntgen and Marie Curie died of cancer caused by radiation exposure during their experiments.
Not all types of electromagnetic radiation are carcinogenic. Low-energy waves on the electromagnetic spectrum are generally not, including radio waves, microwave radiation, infrared radiation, and visible light. Higher-energy radiation, including ultraviolet radiation (present in sunlight), x-rays, and gamma radiation, generally is carcinogenic, if received in sufficient doses.
Cooking food at high temperatures, for example broiling or barbecuing meats, can lead to the formation of minute quantities of many potent carcinogens that are comparable to those found in cigarette smoke (i.e., benzopyrene) (Zheng et al. 1998). Charring of food resembles coking and tobacco pyrolysis and produces similar carcinogens. There are several carcinogenic pyrolysis products, such as polynuclear aromatic hydrocarbons, which are converted by human enzymes into epoxides, which attach permanently to DNA. Pre-cooking meats in a microwave oven for 2-3 minutes before broiling shortens the time on the hot pan, which can help minimize the formation of these carcinogens.
Recent reports have found that the known animal carcinogen acrylamide is generated in fried or overheated carbohydrate foods (such as french fries and potato chips). Studies are underway at the U.S. Food and Drug Administration (FDA) and European regulatory agencies to assess its potential risk to humans. The charred residue on barbecued meats has been identified as a carcinogen, along with many other tars.
Nevertheless, the fact that the food contains minute quantities does not necessarily mean that there is a significant hazard. The gastrointestinal tract sheds its outer layer continuously to protect itself from carcinomas, and has a high activity of detoxifying enzymes. The lungs are not protected in this manner, therefore smoking is much more hazardous.
Saccharin, a popular calorie-free sweetener, was found to be a carcinogen in rats, resulting in bladder cancer (Breslow 2002). However, being carcinogenic in laboratory animals does not necessarily mean a substance is carcinogenic in people, because of differences in how substances are metabolized and how they produce cancer (Breslow 2002).
|
The /w/ sound is considered a glide or a semivowel sound by speech-language pathologists. In other words, /w/ sounds a lot like a vowel and sometimes even acts like one, even though it is technically a consonant. To make a /w/ sound, form a tight circle with puckered lips brought out and away from your face. With your lips in this position, produce a sound with your vocal cords while holding the back of your tongue towards the roof of your mouth, near the back.
It sounds complicated, but the /w/ sound is actually one of the earlier sounds that kids typically begin to master. Normally a child will start using /w/ around age 2 and should have a solid grasp of it by age 3. If your child is still unable to produce the sound or has trouble using it in simple words by age 4, it is highly recommended that you seek the help of a licensed speech-language pathologist who can get your child back on track. Remember: the sooner you identify a problem, the easier it is to correct and the less likely it is to affect your child’s ability to produce other sounds. That said, it is common and natural for children to substitute the /w/ sound for the /r/ sound, such as saying “wabbit” or “wight” for “rabbit” or “right,” up through ages 6-7.
Here are some fun ways to help engage your child while practicing the /w/ sound:
- Verbal cues
When you practice with your little one, it is important to demonstrate the sound clearly and correctly so that your child understands the sound and has an accurate source to imitate. Slowly make the /w/ sound for your child, exaggerating the movement of your mouth. Repeat this until your child begins to imitate you. Once she has mastered the individual sound, try combining it with vowels to form simple syllables, like “we, we, we” and “ew, ew, ew”.
- Visual Cues
When your mouth makes the /w/ sound, it happens to look a lot like you are about to kiss someone. Begin by practicing kisses with your little one and focusing on helping her to bring her lips together in a tight ‘O’. Blow kisses, kiss the air, kiss each other. Then point to your lips and make a new sound – the /w/ sound.
- Tactile Cues
The /w/ sound is also a voiced sound. This means that it vibrates your vocal cords when you say it. Put your hand on your throat as you make the /w/ sound to feel this, and let your little one put their hand on your throat too. Then encourage her to place her hand on her own throat as she says the sound.
After your child masters the sound and syllables with the /w/ sound, try practicing some words with your child. Find objects around your house or while walking through the grocery store that begin with /w/. For even more help, use this bright worksheet of simple /w/ words created by the popular and acclaimed blog www.mommyspeechtherapy.com.
|
Bio-bots – miniaturized walking biological machines
Researchers from the University of Illinois are making tracks in synthetic biology by designing non-electronic biological machines. They used a 3D printer to combine hydrogel and heart cells to create bio-robots. These functional machines are biocompatible and soft, and they are able to move by themselves. Once perfected, these bio-bots could be altered and specialized for various applications in medicine, energy and environment technologies.
“The idea is that, by being able to design with biological structures, we can harness the power of cells and nature to address challenges facing society”, said Rashid Bashir, an Abel Bliss Professor of Engineering. “As engineers, we’ve always built things with hard materials, materials that are very predictable. Yet there are a lot of applications where nature solves a problem in such an elegant way.”
A 3D printing method similar to one used in rapid prototyping is used to make the main body of the robot from hydrogel – a soft gelatin-like polymer. This approach enabled researchers to come up with a design with maximum speed and different configurations suitable for different applications, since they were able to quickly alter and test various versions of the design.
Bio-bots used to demonstrate their ability to walk are about 7 millimeters (a quarter of an inch) long, and the key to their locomotion is in asymmetry of their legs. Each bio-bot has one long and thin leg which rests on a stout supporting leg. The thin leg is covered with rat cardiac cells which pulse. When the heart cells beat, the long leg pulses and it moves the bio-bot forward.
“Our goal is to see if we can get this thing to move toward chemical gradients, so we could eventually design something that can look for a specific toxin and then try to neutralize it”, said Bashir, who also is a professor of electrical and computer engineering, and of bioengineering. “Now you can think about a sensor that’s moving and constantly sampling and doing something useful, in medicine and the environment. The applications could be many, depending on what cell types we use and where we want to go with it.”
University of Illinois researchers plan to enhance control and function, such as integrating neurons to direct motion or cells that respond to light, as well as discovering other variations of these bio-bots with different shapes, different numbers of legs, and abilities.
Once perfected, these robots could be used in medicine as medical sensors, for drug screening or chemical analysis, as well for toxin identification and cleanup. This can be achieved by a combination of this technology with cells which respond to certain stimuli in order to trigger detection or drug delivery.
For more information, you can read the paper published in the journal Scientific Reports: “Development of Miniaturized Walking Biological Machines” [2.7MB PDF].
|
For the first time, researchers have discovered supersonic plasma jets in Earth's upper atmosphere, and they're responsible for some pretty extreme conditions, including temperatures near 10,000°C (18,032°F).
These jets not only appear to be changing the chemical composition of Earth's ionosphere - they're actually pushing this atmospheric layer so far up, some of the planet's atmospheric materials are being leaked out into space.
More than a century ago, Norwegian scientist Kristian Birkeland proposed that vast electric currents, powered by the solar wind, were travelling through Earth's ionosphere along the planet's magnetic field lines.
The ionosphere is an atmospheric layer spanning 75 to 1,000 km (46 to 621 miles) above Earth's surface, and once scientists finally figured out how to get satellites up there in the 1970s, the existence of these electric currents was confirmed.
Known as Birkeland currents, they carry up to 1 TW of electric power to the upper atmosphere - about a third of the total power consumption of the US in a year.
They're also responsible for the aurora borealis and aurora australis that light up the poles of the Northern and Southern Hemispheres.
More recently, scientists from the European Space Agency (ESA) have sent a trio of Swarm satellites into the space between Earth's ionosphere and magnetosphere to investigate the Birkeland currents.
Initially, these satellites detected incredibly large electric fields, which are generated in the ionosphere where upward and downward Birkeland currents interact above the planet.
Now the satellite trio has discovered what these electrical fields are driving - extreme supersonic plasma jets that have been dubbed 'Birkeland current boundary flows'.
"Using data from the Swarm satellites' electric field instruments, we discovered that these strong electric fields drive supersonic plasma jets," says one of the team, Bill Archer from the University of Calgary.
"They can drive the ionosphere to temperatures approaching 10,000°C and change its chemical composition. They also cause the ionosphere to flow upwards to higher altitudes, where additional energisation can lead to loss of atmospheric material to space."
Weirdly enough, thanks to some other recent observations from the Swarm satellites, we now know that similar systems are at play both in Earth's upper atmosphere and deep inside its liquid outer core.
Back in December, the ESA team announced that their Swarm satellites had detected an accelerating river of molten iron some 3,000 km (1,864 miles) below the surface of Earth, under Alaska and Siberia.
They found that this 420-km-wide (260-mile) jet stream had tripled in speed in less than two decades, and is currently headed towards Europe.
Like the supersonic plasma jets that are zooming through our upper atmosphere, this fast-moving jet of molten iron is directly related to Earth's magnetic fields.
Differences in temperature, pressure, and composition within the outer core create movements and whirlpools in the liquid metal, and together with Earth's spin, they generate electric currents, which in turn produce magnetic fields.
Having now discovered Earth's outer core and upper atmosphere jets, researchers will be better equipped to predict what our magnetic field is going to do next, and that's important, because it looks like the North Pole is actually in the process of shifting as we speak.
As we explained last year, since Earth's magnetic field seems to have been weakening at a rate of about 5 percent per century, the magnetic field is expected to flip, at which point the magnetic north and south poles will trade places.
"Further surprises are likely," ESA's Swarm mission manager, Rune Floberghagen, said at the time. "The magnetic field is forever changing, and this could even make the jet stream switch direction."
The most recent Swarm findings have been presented at the 4th Swarm Science Meeting and Geodetic Mission Workshop in Canada this week, and a peer-reviewed study on the results is expected in the coming months.
|
Posted: Nov 07, 2013
Solar Energy: Improved Data Mining Tool for Hourly Solar Radiation
(Nanowerk News) Solar energy is free, clean, and usually available in abundance. However, solar radiation is also less predictable than the energy supplied by fossil fuels. Researchers at the Institute of Networked and Embedded Systems have developed a model that allows a more accurate prediction of hourly solar radiation.
“The harnessing and use of solar energy will continue to gain relevance, particularly when viewed against the background of the elevated cost of fossil fuels and their negative impact upon the environment”, Tamer Khatib (Institute of Networked and Embedded Systems) explains. Together with his colleague Wilfried Elmenreich he has developed a new approach for improved data mining for hourly solar radiation.
Elmenreich goes on to say: “Solar radiation data provide information on how much of the sun’s energy strikes the Earth’s surface at a specific location during a defined time period”. These data are needed for effective research into solar energy utilization. Due to the cost and difficulty involved in obtaining solar energy measurements, these data are not readily available; therefore, researchers have explored alternative ways of generating these data.
Khatib elucidates further: “On the one hand, there are regions in the world, for which there are no solar radiation measurement data. Therefore, a tool is required, in order to assess the potential for solar energy. On the other hand, there are regions with daily averages of solar radiation data. These are less suitable for the evaluation of solar energy systems than solar radiation data that is generated on an hourly basis.” Researchers have been working on the development of smart prediction techniques, which can extrapolate an hourly average value from the daily data.
The “Smart Grid Lab” in Klagenfurt has now successfully developed such a model. Supplied with a total of six different inputs - mean daily solar radiation, hour angle, sunset hour angle, date, latitude, and longitude - the model calculates the mean hourly solar radiation. Wilfried Elmenreich is pleased with the results: “The results prove that the model can predict the hourly solar radiation very well, and with an accuracy of prediction exceeding that of the empirical and statistic models used so far.”
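The article does not spell out the model's internals, so the sketch below is illustrative only: it shows the kind of empirical daily-to-hourly conversion (the classic Collares-Pereira and Rabl ratio) that data-mining models of this sort are typically benchmarked against. It uses two of the six inputs named above - the hour angle and the sunset hour angle - and should not be read as the Khatib/Elmenreich model itself.

```python
import math

def hourly_from_daily(h_daily, omega_deg, omega_s_deg):
    """Estimate the mean global radiation in one hour from the daily total.

    A minimal sketch of the Collares-Pereira & Rabl empirical ratio
    r_t = I_hour / H_daily, driven by the hour angle (omega) and the
    sunset hour angle (omega_s). Illustrative baseline only, not the
    data-mining model described in the article.
    """
    omega = math.radians(omega_deg)
    omega_s = math.radians(omega_s_deg)
    a = 0.409 + 0.5016 * math.sin(omega_s - math.radians(60))
    b = 0.6609 - 0.4767 * math.sin(omega_s - math.radians(60))
    r_t = (math.pi / 24) * (a + b * math.cos(omega)) * \
          (math.cos(omega) - math.cos(omega_s)) / \
          (math.sin(omega_s) - omega_s * math.cos(omega_s))
    return max(r_t, 0.0) * h_daily

# Example: distribute a 5 kWh/m^2 day over the noon hour (omega = 0 deg)
# at a site with a 90 deg sunset hour angle (12 hours of daylight).
print(round(hourly_from_daily(5.0, 0.0, 90.0), 2), "kWh/m^2 around solar noon")
```

A learned model such as the one described in the article would replace this fixed formula with parameters fitted to measured data, while working from the same kind of inputs.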
Source: Alpen-Adria-Universität Klagenfurt
|
1. Energy is necessary for daily survival. Future development crucially depends on its long-term availability in increasing quantities from sources that are dependable, safe, and environmentally sound. At present, no single source or mix of sources is at hand to meet this future need.
2. Concern about a dependable future for energy is only natural since energy provides 'essential services' for human life - heat for warmth, cooking, and manufacturing, or power for transport and mechanical work. At present, the energy to provide these services comes from fuels - oil, gas, coal, nuclear, wood, and other primary sources (solar, wind, or water power) - that are all useless until they are converted into the energy services needed, by machines or other kinds of end-use equipment, such as stoves, turbines, or motors. In many countries worldwide, a lot of primary energy is wasted because of the inefficient design or running of the equipment used to convert it into the services required; though there is an encouraging growth in awareness of energy conservation and efficiency.
3. Today's primary sources of energy are mainly non-renewable: natural gas, oil, coal, peat, and conventional nuclear power. There are also renewable sources, including wood, plants, dung, falling water, geothermal sources, solar, tidal, wind, and wave energy, as well as human and animal muscle-power. Nuclear reactors that produce their own fuel ('breeders') and eventually fusion reactors are also in this category. In theory, all the various energy sources can contribute to the future energy mix worldwide. But each has its own economic, health, and environmental costs, benefits, and risks - factors that interact strongly with other governmental and global priorities. Choices must be made, but in the certain knowledge that choosing an energy strategy inevitably means choosing an environmental strategy.
4. Patterns and changes of energy use today are already dictating patterns well into the next century. We approach this question from the standpoint of sustainability. The key elements of sustainability that have to be reconciled are:
5. The period ahead must be regarded as transitional from an era in which energy has been used in an unsustainable manner. A generally acceptable pathway to a safe and sustainable energy future has not yet been found. We do not believe that these dilemmas have yet been addressed by the international community with a sufficient sense of urgency and in a global perspective.
6. The growth of energy demand in response to industrialization, urbanization, and societal affluence has led to an extremely uneven global distribution of primary energy consumption./1 The consumption of energy per person in industrial market economies, for example, is more than 80 times greater than in sub-Saharan Africa. (See Table 7-1.) And about a quarter of the world's population consumes three-quarters of the world's primary energy.
7. In 1980, global energy consumption stood at around 10TW./2 (See Box 7-1.) If per capita use remained at the same levels as today, by 2025 a global population of 6.2 billion/3 would need about 14TW (over 4TW in developing and over 9TW in industrial countries) - an increase of 40 per cent over 1980. But if energy consumption per head became uniform worldwide at current industrial country levels, by 2025 that same global population would require about 55TW.
Box 7-1 Energy Units
A variety of units are used to measure energy production and use in physical terms. This chapter uses the kilowatt (kW); the gigawatt (GW), which is equal to 1 million kW; and the terawatt (TW), which is equal to 1 billion kilowatts. One kilowatt - a thousand watts of energy - if emitted continuously for a year is 1 kW year. Consuming 1 kW year/year is equivalent to the energy liberated by burning 1,050 kilogrammes - approximately 1 ton - of coal annually. Thus a TW year is equal to approximately 1 billion tons of coal. Throughout the chapter, TW years/year is written as TW.
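As a quick arithmetic check of the box's conversion, the sketch below works out how much coal corresponds to one kW year. The coal heating value of roughly 29.3 GJ per tonne (the common "standard coal equivalent") is an assumption; the box itself does not state one.

```python
# Rough check of Box 7-1: 1 kW year of continuous use is roughly 1 ton of coal.
# The coal heating value (~29.3 GJ per tonne, the usual "standard coal
# equivalent") is an assumed figure not given in the box.
HOURS_PER_YEAR = 8760
kwh_per_kw_year = 1 * HOURS_PER_YEAR                 # 8,760 kWh
gj_per_kw_year = kwh_per_kw_year * 3.6e6 / 1e9       # about 31.5 GJ
COAL_GJ_PER_TONNE = 29.3                             # assumed heating value
tonnes_coal = gj_per_kw_year / COAL_GJ_PER_TONNE     # about 1.1 tonnes
print(f"1 kW year = {gj_per_kw_year:.1f} GJ, roughly {tonnes_coal:.2f} tonnes of coal")
# Scaling by a factor of 1 billion: 1 TW year is roughly 1 billion tons of coal,
# which matches the conversion quoted in the box.
```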
8. Neither the 'low' nor the 'high' figure is likely to prove realistic, but they give a rough idea of the range within which energy futures could move, at least hypothetically. Many other scenarios can be generated in-between, some of which assume an improved energy base for the developing world. For instance, if the average energy consumption in the low- and middle-income economies trebled and doubled, respectively, and if consumption in the high-income oil-exporting and industrial market and non-market countries remained the same as today, then the two groups would be consuming about the same amounts of energy. The low- and middle-income categories would need 10.5TW and the three 'high' categories would use 9.3TW - totalling 20TW globally, assuming that primary energy is used at the same levels of efficiency as today.
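The per-capita levels behind these global totals can be backed out directly from the figures in paragraphs 7 and 8; a minimal sketch follows, in which the 1980 world population of roughly 4.4 billion is an assumed figure not given in the text.

```python
# Back-of-the-envelope check of the figures in paragraphs 7 and 8.
# The 1980 world population (~4.4 billion) is an assumption; all other
# numbers come from the text.
POP_1980 = 4.4e9           # assumed
POP_2025 = 6.2e9           # from the text
GLOBAL_1980_TW = 10.0      # from the text

per_capita_kw_1980 = GLOBAL_1980_TW * 1e9 / POP_1980          # ~2.3 kW/person
constant_use_2025_tw = per_capita_kw_1980 * POP_2025 / 1e9    # ~14 TW
implied_industrial_kw = 55.0 * 1e9 / POP_2025                 # ~8.9 kW/person

print(f"1980 average: {per_capita_kw_1980:.1f} kW per person")
print(f"2025 demand at constant per-capita use: about {constant_use_2025_tw:.0f} TW")
print(f"A 55 TW world in 2025 implies about {implied_industrial_kw:.1f} kW per person")
```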
9. How practical are any of these scenarios? Energy analysts have conducted many studies of global energy futures to the years 2020-2030./4 Such studies do not provide forecasts of future energy needs, but they explore how various technical, economic, and environmental factors may interact with supply and demand. Two of these are reviewed in Box 7-2, though a much wider range of scenarios - from 5TW up to 63TW - are available. In general, the lower scenarios (14.4TW by 2030,/5 11.2TW by 2020,/6 and 5.2TW by 2030/7) require an energy efficiency revolution. The higher scenarios (18.8TW by 2025,/8 24.7TW by 2020,/9 and 35.2TW by 2030/10) aggravate the environmental pollution problems that we have experienced since the Second World War.
Box 7-2 Two Indicative Energy Scenarios
Case A: High Scenario
By the year 2030, a 35TW future would involve producing 1.6 times as much oil, 3.4 times as much natural gas, and nearly 5 times as much coal as in 1980. This increase in fossil fuel use implies bringing the equivalent of a new Alaska pipeline into production every one to two years. Nuclear capacity would have to be increased 30 times over 1980 levels - equivalent to installing a new nuclear power station generating 1 gigawatt of electricity every two to four days. This 35TW scenario is still well below the 55TW future that assumes today's levels of energy consumption per capita in industrial countries are achieved in all countries.
Case B: Low Scenario
Taking the 11.2TW scenario as a highly optimistic example of a strong conservation strategy, 2020 energy demand in developing and industrial countries is quoted as 7.3TW and 3.9TW respectively, as compared with 3.3TW and 7.0TW in 1980. This would mean a saving of 3.1TW in industrial countries by 2020 and an additional requirement of 4.0TW in developing countries. Even if developing countries were able to acquire the liberated primary resource, they would still be left with a shortfall of 0.9TW in primary supply. Such a deficit is likely to be much greater (possibly two to three times), given the extreme level of efficiency required for this scenario, which is unlikely to be realized by most governments. In 1980, the following breakdown of primary supply was quoted: oil, 4.2TW; coal, 2.4; gas, 1.7; renewables, 1.7; and nuclear, 0.2. The question is: where will the shortfall in primary energy supply come from? This rough calculation serves to illustrate that the postulated average growth of around 30 per cent per capita in primary consumption in developing countries will still require considerable amounts of primary supply even under extremely efficient energy usage regimes.
Sources: The 35TW scenario originated with the Energy Systems Group of the International Institute for Applied Systems Analysis, Energy in a Finite World - A Global Systems Analysis (Cambridge, Mass.: Ballinger, 1981); all other calculations are from J. Goldemberg et al., 'An End-Use Oriented Global Energy Strategy', Annual Review of Energy, Vol. 10, 1985.
10. The economic implications of a high energy future are disturbing. A recent World Bank study indicates that for the period 1980-95, a 4.1 per cent annual growth in energy consumption, approximately comparable to Case A in Box 7-2, would require an average annual investment of some $130 billion (in 1982 dollars) in developing countries alone. This would involve doubling the share of energy investment in terms of aggregate gross domestic product./11 About half of this would have to come from foreign exchange and the rest from internal spending on energy in developing countries.
11. The environmental risks and uncertainties of a high energy future are also disturbing and give rise to several reservations. Four stand out:
Along with these, a major problem arises from the growing scarcity of fuelwood in developing countries. If trends continue, by the year 2000 around 2.4 billion people may be living in areas where wood is extremely scarce./15
12. These reservations apply at even lower levels of energy use. A study that proposed energy consumption at only half the levels of Case A (Box 7-2) drew special attention to the risks of global warming from CO2./16 The study indicated that a realistic fuel mix - a virtual quadrupling of coal and a doubling of gas use, along with 1.4 times as much oil - could cause significant global warming by the 2020s. No technology currently exists to remove CO2 emissions from fossil fuel combustion. The high coal use would also increase emissions of oxides of sulphur and nitrogen, much of which turns to acids in the atmosphere. Technologies to remove these latter emissions are now required in some countries in all new and even some old facilities, but they can increase investment costs by 15-25 per cent./17 If countries are not prepared to incur these expenses, this path becomes even more infeasible, a limitation that applies much more to the higher energy futures that rely to a greater extent on fossil fuels. A near doubling of global primary energy consumption will be difficult without encountering severe economic, social, and environmental constraints.
Energy is, put most simply, the fundamental unit of the physical world. As such, we cannot conceive of development without changes in the extent or the nature of energy flows. And because it is so fundamental, every one of those changes of flows has environmental implications. The implications of this are profound. It means that there is no such thing as a simple energy choice. They are all complex. And they all involve trade-offs. However, some of the choices and some of the trade-offs appear to be unequivocally better than others, in the sense that they offer more development and less environmental damage.
13. This raises the desirability of a lower energy future, where GDP growth is not constrained but where investment effort is switched away from building more primary supply sources and put into the development and supply of highly efficient fuel-saving end-use equipment. In this way, the energy services needed by society could be supplied at much reduced levels of primary energy production. Case B in Box 7-2 allows for a 50 per cent fall in per capita primary energy consumption in industrial countries and a 30 per cent increase in developing countries./18 By using the most energy-efficient technologies and processes now available in all sectors of the economy, annual global per capita GDP growth rates of around 3 per cent can be achieved. This growth is at least as great as that regarded in this report as a minimum for reasonable development. But this path would require huge structural changes to allow market penetration of efficient technologies, and it seems unlikely to be fully realizable by most governments during the next 40 years.
14. The crucial point about these lower, energy-efficient futures is not whether they are perfectly realisable in their proposed time frames. Fundamental political and institutional shifts are required to restructure investment potential in order to move along these lower, more energy-efficient paths.
15. The Commission believes that there is no other realistic option open to the world for the 21st century. The ideas behind these lower scenarios are not fanciful. Energy efficiency has already shown cost-effective results. In many industrial countries, the primary energy required to produce a unit of GDP has fallen by as much as a quarter or even a third over the last 13 years, much of it from implementing energy efficiency measures./19 Properly managed, efficiency measures could allow industrial nations to stabilize their primary energy consumption by the turn of the century. They would also enable developing countries to achieve higher levels of growth with much reduced levels of investment, foreign debt, and environmental damage. But by the early decades of the 21st century they will not alleviate the ultimate need for substantial new energy supplies globally.
16. Many forecasts of recoverable oil reserves and resources suggest that oil production will level off by the early decades of the next century and then gradually fall during a period of reduced supplies and higher prices. Gas supplies should last over 200 years and coal about 3,000 years at present rates of use. These estimates persuade many analysts that the world should immediately embark on a vigorous oil conservation policy.
17. In terms of pollution risks, gas is by far the cleanest fuel, with oil next and coal a poor third. But they all pose three interrelated atmospheric pollution problems: global warming,/20 urban industrial air pollution,/21 and acidification of the environment./22 Some of the wealthier industrial countries may possess the economic capacity to cope with such threats. Most developing countries do not.
18. These problems are becoming more widespread particularly in tropical and subtropical regions, but their economic, social, and political repercussions are as yet not fully appreciated by society. With the exception of CO2, air pollutants can be removed from fossil fuel combustion processes at costs usually below the costs of damage caused by pollution./23 However, the risks of global warming make heavy future reliance upon fossil fuels problematic.
19. The burning of fossil fuels and, to a lesser extent, the loss of vegetative cover, particularly forests, through urban-industrial growth increase the accumulation of CO2 in the atmosphere. The pre-industrial concentration was about 280 parts of carbon dioxide per million parts of air by volume. This concentration reached 340 in 1980 and is expected to double to 560 between the middle and the end of the next century./24 Other gases also play an important role in this 'greenhouse effect', whereby solar radiation is trapped near the ground, warming the globe and changing the climate.
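The growth rate implied by that doubling can be worked out from the three concentrations quoted in the paragraph; the sketch below assumes, purely for illustration, that the concentration grows at a constant annual percentage.

```python
# What constant exponential growth rate takes atmospheric CO2 from 340 ppm
# (the 1980 figure in the text) to 560 ppm (double the pre-industrial 280 ppm)?
# Treating the growth as a fixed annual percentage is an illustrative assumption.
C_1980, C_DOUBLE = 340.0, 560.0
for target_year in (2050, 2100):
    years = target_year - 1980
    rate = (C_DOUBLE / C_1980) ** (1.0 / years) - 1.0
    print(f"Doubling by {target_year} implies about {100 * rate:.2f}% growth per year")
# Doubling by 2050 needs roughly 0.7% per year and by 2100 only about 0.4% per
# year, which is why the text places the doubling "between the middle and the
# end of the next century" depending on how fast emissions grow.
```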
20. After reviewing the latest evidence on the greenhouse effect in October 1985 at a meeting in Villach, Austria, organized by the WMO, UNEP, and ICSU, scientists from 29 industrialized and developing countries concluded that climate change must be considered a 'plausible and serious probability'. They further concluded that: 'Many important economic and social decisions are being made today on ... major water resource management activities such as irrigation and hydropower; drought relief; agricultural land use; structural designs and coastal engineering projects; and energy planning - all based on the assumption that past climatic data, without modification, are a reliable guide to the future. This is no longer a good assumption'./25
21. They estimated that if present trends continue, the combined concentration of CO2 and other greenhouse gases in the atmosphere would be equivalent to a doubling of CO2 from pre-industrial levels, possibly as early as the 2030s, and could lead to a rise in global mean temperatures 'greater than any in man's history'./26 Current modelling studies and 'experiments' show a rise in globally averaged surface temperatures, for an effective CO2 doubling, of somewhere between 1.5°C and 4.5°C, with the warming becoming more pronounced at higher latitudes during winter than at the equator.
22. An important concern is that a global temperature rise of 1.5-4.5°C, with perhaps a two to three times greater warming at the poles, would lead to a sea level rise of 25-140 centimetres./27 A rise in the upper part of this range would inundate low-lying coastal cities and agricultural areas, and many countries could expect their economic, social, and political structures to be severely disrupted. It would also slow the 'atmospheric heat-engine', which is driven by the differences between equatorial and polar temperatures, thus influencing rainfall regimes./28 Experts believe that crop and forest boundaries will move to higher latitudes; the effects of warmer oceans on marine ecosystems, fisheries, and food chains are also virtually unknown.
23. There is no way to prove that any of this will happen until it actually occurs. The key question is: How much certainty should governments require before agreeing to take action? If they wait until significant climate change is demonstrated, it may be too late for any countermeasures to be effective against the inertia by then stored in this massive global system. The very long time lags involved in negotiating international agreement on complex issues involving all nations have led some experts to conclude that it is already late./29 Given the complexities and uncertainties surrounding the issue, it is urgent that the process start now. A four track strategy is needed, combining:
24. No nation has either the political mandate or the economic power to combat climatic change alone. However, the Villach statement recommended such a four track strategy for climate change, to be promoted by governments and the scientific community through WMO, UNEP, and ICSU - backed by a global convention if necessary./30
It is difficult to imagine an issue with more global impacts on human societies and the natural environment than the greenhouse effect. The signal is unclear but we may already be witnessing examples, if not actual greenhouse effects, in Africa.
The ultimate potential impacts of a greenhouse warming could be catastrophic. It is our considered judgement that it is already very late to start the process of policy consideration. The process of heightening public awareness, of building support for national policies, and finally for developing multilateral efforts to slow the rate of emissions growth will take time to implement.
The greenhouse issue is an opportunity as well as a challenge; not surprisingly, it provides another important reason to implement sustainable development strategies.
25. While these strategies are being developed, more immediate policy measures can and should be adopted. The most urgent are those required to increase and extend the recent steady gains in energy efficiency and to shift the energy mix more towards renewables. Carbon dioxide output globally could be significantly reduced by energy efficiency measures without any reduction of the tempo of GDP growth./31 These measures would also serve to abate other emissions and thus reduce acidification and urban-industrial air pollution. Gaseous fuels produce less carbon dioxide per unit of energy output than oil or coal and should be promoted, especially for cooking and other domestic uses.
26. Gases other than carbon dioxide are thought to be responsible for about one-third of present global warming, and it is estimated that they will cause about half the problem around 2030./32 Some of these, notably chlorofluorocarbons used as aerosols, refrigeration chemicals, and in the manufacture of plastics, may be more easily controlled than CO2. These, although not strictly energy-related, will have a decisive influence on policies for managing carbon dioxide emissions.
27. Apart from their climatic effect, chlorofluorocarbons are responsible to a large extent for damage to the earth's stratospheric ozone./33 The chemical industry should make every effort to find replacements, and governments should require the use of such replacements when found (as some nations have outlawed the use of these chemicals as aerosols). Governments should ratify the existing ozone convention and develop protocols for the limitation of chlorofluorocarbon emissions, and systematically monitor and report implementation.
28. A lot of policy development work is needed. This should proceed hand in hand with accelerated research to reduce remaining scientific uncertainties. Nations urgently need to formulate and agree upon management policies for all environmentally reactive chemicals released into the atmosphere by human activities, particularly those that can influence the radiation balance on earth. Governments should initiate discussions leading to a convention on this matter.
29. If a convention on chemical containment policies cannot be implemented rapidly, governments should develop contingency strategies and plans for adaptation to climatic change. In either case, WMO, UNEP, WHO, ICSU, and other relevant international and national bodies should be encouraged to coordinate and accelerate their programmes to develop a carefully integrated strategy of research, monitoring, and assessment of the likely impacts on climate, health, and environment of all environmentally reactive chemicals released into the atmosphere in significant quantities.
30. The past three decades of generally rapid growth worldwide have seen dramatic increases in fuel consumption for heating and cooling, automobile transport, industrial activities, and electricity generation. Concern over the effects of increasing air pollution in the late 1960s resulted in the development of curative measures, including air-quality criteria, standards, and add-on control technologies that can remove pollutants cost-effectively. All these greatly reduced emissions of some of the principal pollutants and cleaned air over many cities. Despite this, air pollution has today reached serious levels in the cities of several industrial and newly industrialized countries as well as in those of most developing countries, which in some cases are by now the world's most polluted urban areas.
31. The fossil fuel emissions of principal concern in terms of urban pollution, whether from stationary or mobile sources, include sulphur dioxide, nitrogen oxides, carbon monoxide, various volatile organic compounds, fly ash, and other suspended particles. They can injure human health and the environment, bringing increased respiratory complaints, some potentially fatal. But these pollutants can be contained so as to protect human health and the environment and all governments should take steps to achieve acceptable levels of air quality.
32. Governments can establish and monitor air quality goals and objectives, allowable atmospheric loadings, and related emission criteria or standards, as some successfully do already. Regional organizations can support this effort. Multilateral and bilateral development assistance agencies and development banks should encourage governments to require that the most energy-efficient technology be used when industries and energy utilities plan to build new or extend existing facilities.
33. Measures taken by many industrialized countries in the 1970s to control urban and industrial air pollution (high chimney stacks, for example) greatly improved the quality of the air in the cities concerned. However, it quite unintentionally sent increasing amounts of pollution across national boundaries in Europe and North America, contributing to the acidification of distant environments and creating new pollution problems. This was manifest in growing damage to lakes, soils, and communities of plants and animals./34 Failure to control automobile pollution in some regions has seriously contributed to the problem.
34. Thus atmospheric pollution, once perceived only as a local urban-industrial problem involving people's health, is now also seen as a much more complex issue encompassing buildings, ecosystems, and maybe even public health over vast regions. During transport in the atmosphere, emissions of sulphur and nitrogen oxides and volatile hydrocarbons are transformed into sulphuric and nitric acids, ammonium salts, and ozone. They fall to the ground, sometimes many hundreds or thousands of kilometres from their origins, as dry particles or in rain, snow, frost, fog, and dew. Few studies of their socio-economic costs are available, but these demonstrate that they are quite large and suggest that they are growing rapidly./35 They damage vegetation, contribute to land and water pollution, and corrode buildings, metallic structures, and vehicles, causing billions of dollars in damage annually.
35. Damage first became evident in Scandinavia in the 1960s. Several thousand lakes in Europe, particularly in southern Scandinavia/36, and several hundreds in North America/37 have registered a steady increase in acidity levels to the point where their natural fish populations have declined or died out. The same acids enter the soil and groundwater, increasing corrosion of drinking water piping in Scandinavia./38
36. The circumstantial evidence indicating the need for action on the sources of acid precipitation is mounting with a speed that gives scientists and governments little time to assess it scientifically. Some of the greatest observed damage has been reported in Central Europe, which is currently receiving more than one gramme of sulphur on every square metre of ground each year, at least five times greater than natural background./39 There was little evidence of tree damage in Europe in 1970. In 1982, the Federal Republic of Germany reported visible leaf damage in its forest plot samples nationwide, amounting in 1983 to 34 per cent, and rising in 1985 to 50 per cent./40 Sweden reported light to moderate damage in 30 per cent of its forests, and various reports from other countries in Eastern and Western Europe are extremely disquieting. So far an estimated 14 per cent of all European forestland is affected./41
37. The evidence is not all in, but many reports show soils in parts of Europe becoming acid throughout the tree rooting layers,/42 particularly nutrient-poor soils such as those of Southern Sweden./43 The precise damage mechanisms are not known, but all theories include an air pollution component. Root damage/44 and leaf damage appear to interact - affecting the ability of the trees both to take up water from the soil and to retain it in the foliage, so that they become particularly vulnerable to dry spells and other stresses. Europe may be experiencing an immense change to irreversible acidification, the remedial costs of which could be beyond economic reach./45 (See Box 7-3.) Although there are many options for reducing sulphur, nitrogen, and hydrocarbon emissions, no single pollutant control strategy is likely to be effective in dealing with forest decline. It will require a total integrated mix of strategies and technologies to improve air quality, tailored for each region.
A forest is an ecosystem that exists under certain environmental conditions, and if you change the conditions, the system is going to change. It is a very difficult task for ecologists to foresee what the changes are going to be, because the systems are so enormously complex.
The direct causes behind an individual tree dying can be far removed from the primary pressure that brought the whole system into equilibrium. One time it might be ozone, another time it may be SO2, a third time it may be aluminium poisoning.
I can express myself by an analogy: If there is famine, there are relatively few people who die directly from starvation: they die from dysentery or various infectious diseases. And in such a situation, it is not of very much help to send medicine instead of food. That means that in this situation, it is necessary to address the primary pressures against the ecosystem.
38. Evidence of local air pollution and acidification in Japan and also in the newly industrialized countries of Asia, Africa, and Latin America is beginning to emerge. China and the Republic of Korea seem particularly vulnerable, as do Brazil, Colombia, Ecuador, and Venezuela. So little is known about the likely environmental loading of sulphur and nitrogen in these regions and about the acid-neutralizing capacity of tropical lakes and forest soils that a comprehensive programme of investigation should be formulated without delay./46
39. Where actual or potential threats from acidification exist, governments should map sensitive areas, assess forest damage annually and soil impoverishment every five years according to regionally agreed protocols, and publish the findings. They should support transboundary monitoring of pollution being carried out by agencies in their region and, where there is no such agency, create one or give the job to any suitable regional body. Governments in many regions could gain significantly from early agreement to prevent transboundary air pollution and the enormous damage to their economic base now being experienced in Europe and North America. Even though the exact causes of the damage are hard to prove, reduction strategies are certainly within reach and economic. They could be viewed as a cheap insurance policy compared with the vast amount of potential damage these strategies avoid.
Box 7-3 The Damage and Control Costs of Air Pollution
It is very difficult to quantify damage control costs, not least because cost figures are highly dependent on the control strategy assumed. However, in the eastern United States, it has been estimated that halving the remaining sulphur dioxide emissions from existing sources would cost $5 billion a year, increasing present electricity rates by 2-3 per cent. If nitrogen oxides are figured in, the additional costs might be as high as $6 billion a year. Materials corrosion damage alone is estimated to cost $7 billion annually in 17 states in the eastern United States.
Estimates of the annual costs of securing a 55 to 65 per cent reduction in the remaining sulphur emissions in the countries of the European Economic Community between 1980 and 2000 range from $4.6 billion to $6.7 billion (1982 dollars) per year. Controls on stationary boilers to reduce nitrogen levels by only 10 per cent annually by the year 2000 range between $100,000 and $400,000 (1982 dollars). These figures translate into a one-time increase of about 6 per cent in the price of electrical power to the consumer. Studies place damage costs due to material and fish losses alone at $3 billion a year, while damage to crops, forests, and health are estimated to exceed $10 billion per year. Technologies for drastically reducing oxides of nitrogen and hydrocarbons from automobile exhaust gases are readily available and routinely used in North America and Japan, but not in Europe.
Japanese laboratory studies indicate that air pollution and acid rain can reduce some wheat and rice crop production, perhaps by as much as 30 per cent.
Sources: U.S. Congress, Office of Technology Assessment, Acid Rain and Transported Air Pollutants: Implications for Public Policy (Washington, DC: U.S. Government Printing Office, 1985); U.S. Environmental Protection Agency, Acid Deposition Assessment (Washington, DC: 1985); I.M. Torrens, 'Acid Rain and Air Pollution: A Problem of Industrialization', prepared for WCED, 1985; P. Mandelbaum, Acid Rain - Economic Assessment (New York: Plenum Press, 1985); M. Hashimoto, 'National Air quality Management Policy of Japan', prepared for WCED, 1985; OECD, The State of the Environment (Paris: 1985).
40. In the years following the Second World War, the nuclear knowledge that under military control had led to the production of atomic weapons was redeployed for peaceful 'energy' purposes by civilian technologists. Several benefits were obvious at the time.
41. It was also realized that no energy source would ever be risk-free. There was the danger of nuclear war, the spread of atomic weapons, and nuclear terrorism. But intensive international cooperation and a number of negotiated agreements suggested that these dangers could be avoided. For instance, the Nonproliferation Treaty (NPT), drafted in its final form in 1969, included a promise by signatory governments possessing nuclear weapons and expertise to pursue and undertake nuclear disarmament and also to assist the non-nuclear signatories in developing nuclear power, but strictly for peaceful purposes only. Other problems, such as radiation risks, reactor safety, and nuclear waste disposal, were all acknowledged as very important but, with the right amount of effort, containable.
42. And now, after almost four decades of immense technological effort to support nuclear development, nuclear energy has become widely used. Some 30 governments produce from nuclear generators a total of about 15 per cent of all the electricity used globally. Yet it has not met earlier expectations that it would be the key to ensuring an unlimited supply of low-cost energy. However, during this period of practical experience with building and running nuclear reactors, the nature of the costs, risks, and benefits has become much more evident and, as such, the subject of sharp controversy.
43. The potential for the spread of nuclear weapons is one of the most serious threats to world peace. It is in the interest of all nations to prevent proliferation of nuclear weapons. All nations therefore should contribute to the development of a viable non-proliferation regime. The nuclear weapon states must deliver on their promise to reduce the number and ultimately eliminate nuclear weapons in their arsenals and the role those weapons play in their strategies. And the non-nuclear-weapon states must cooperate in providing credible assurances that they are not moving towards a nuclear weapon capability.
The health risks for the development of peaceful uses of nuclear technology, including nuclear electricity, are very small when compared with the benefits from the use of nuclear radiation for medical diagnosis and treatment.
The safe application of nuclear radiation technology promises many benefits in environmental clean-up and in increasing world food supplies by eliminating spoilage.
With a recent and very notable exception, the international cooperation that has marked the development of nuclear power technology provides an excellent model by which to address common environmental and ethical problems posed by the development of other technologies.
44. Most schemes for non-proliferation mandate an institutional separation between military and civilian uses of nuclear energy. But for countries with full access to the complete nuclear fuel cycle, no technical separation really exists. Not all states operate the necessary clear-cut administrative separation of civilian and military access. Cooperation is needed also among suppliers and buyers of civilian nuclear facilities and materials and the International Atomic Energy Agency, in order to provide credible safeguards against the diversion of civilian reactor programmes to military purposes, especially in countries that do not open all their nuclear programmes to IAEA inspection. Thus, there still remains a danger of the proliferation of nuclear weapons.
45. The costs of construction and the relative economics of electricity generating stations - whether powered by nuclear energy, coal, oil or gas - are conditioned by the following factors throughout the service life of a plant:
46. All these factors vary widely depending on differing institutional, legal, and financial arrangements in different countries. Cost generalizations and comparisons are therefore unhelpful or misleading. However, costs associated with several of these factors have increased more rapidly for nuclear stations during the last 5-10 years, so that the earlier clear cost advantage of nuclear over the service life of the plant has been reduced or lost altogether./47 Nations should therefore look very closely at cost comparisons to obtain the best value when choosing an energy path.
47. Very strict codes of safety practice are implemented in nuclear plants so that under officially approved operating conditions, the danger from radiation to reactor personnel and especially to the general public is negligible. However, an accident occurring in a reactor may in certain very rare cases be serious enough to cause an external release of radioactive substances. Depending upon the level of exposure, people are at risk of becoming ill from various forms of cancer or from alteration of genetic material, which may result in hereditary defects.
48. Since 1928, the International Commission on Radiological Protection (ICRP) has issued recommendations on radiation dosage levels above which exposure is unacceptable. These have been developed for occupationally exposed workers and for the general public. The Nuclear Safety Standards (NUSS) codes of IAEA were developed in 1975 to reduce safety differences among member states. Neither system is in any way binding on governments. If an accident occurs, individual governments have the responsibility of deciding at what level of radioactive contamination pasture land, drinking water, milk, meat, eggs, vegetables, and fish are to be banned for consumption by livestock or humans.
49. Different countries - even different local government authorities within a country - have different criteria. Some have none at all, ICRP and NUSS notwithstanding. States with more rigorous standards may destroy large amounts of food or may ban food imports from neighbouring states with more permissive criteria. This causes great hardship to farmers who may not receive any compensation for their losses. It may also cause trade problems and political tension between states. Both of these difficulties occurred following the Chernobyl disaster, when the need to develop at least regionally conformable contamination criteria and compensation arrangements was overwhelmingly demonstrated.
50. Nuclear safety returned to the newspaper headlines following the Three Mile Island (Harrisburg, United States) and the Chernobyl (USSR) accidents. Probabilistic estimates of the risks of component failure, leading to a radioactive release in Western-style light water reactors, were made in 1975 by the U.S. Nuclear Regulatory Commission./48 The most serious category of release through containment failure was placed at around 1 in 1,000,000 years of reactor operation. Post-accident analyses of both Harrisburg and Chernobyl - a completely different type of reactor - have shown that in both cases, human operator error was the main cause. They occurred after about 2,000 and 4,000 reactor-years respectively./49 The frequencies of such occurrences are well nigh impossible to estimate probabilistically. However, available analyses indicate that although the risk of a radioactive release accident is small, it is by no means negligible for reactor operations at the present time.
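To make the gap between the 1975 estimate and the observed record concrete, the sketch below applies a simple Poisson calculation to the figures quoted in this paragraph; it is purely illustrative and not part of the Commission's analysis.

```python
import math

# If the chance of a serious release through containment failure were really
# about 1 in 1,000,000 per reactor-year (the 1975 estimate quoted above), how
# likely would at least one such release be in the roughly 2,000-4,000
# reactor-years accumulated before Harrisburg and Chernobyl?  A simple Poisson
# sketch, for illustration only.
RATE = 1.0 / 1_000_000  # estimated releases per reactor-year
for reactor_years in (2_000, 4_000):
    p_at_least_one = 1.0 - math.exp(-RATE * reactor_years)
    print(f"{reactor_years} reactor-years: P(at least one release) = {p_at_least_one:.2%}")
# The answer is only 0.2-0.4 per cent, which illustrates why two serious
# accidents so early make such frequencies "well nigh impossible to estimate
# probabilistically", as the paragraph puts it.
```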
51. The regional health and environment effects of an accident are largely predictable from radioactive fall-out studies following early atomic weapons testing in the atmosphere and have been confirmed in practice following the Chernobyl accident. What could not be confidently predicted before Chernobyl were the local effects of such an accident. A much clearer picture is now emerging as a result of the experiences there when a reactor exploded, following a series of infringements of the official safety regulations, on 26 April 1986, causing the worst reactor accident ever experienced. As a result, the whole district had to be managed on something like a 'war footing' and efforts resembling a large military operation were needed to contain the damage.
52. Civil nuclear energy programmes worldwide have already generated many thousands of tons of spent fuel and high-level waste. Many governments have embarked on large-scale programmes to develop ways of isolating these from the biosphere for the many hundreds of thousands of years that they will remain hazardously radioactive.
53. But the problem of nuclear waste disposal remains unsolved. Nuclear waste technology has reached an advanced level of sophistication./50 This technology has not however been fully tested or utilized and problems remain about disposal. There is particular concern about future recourse to ocean dumping and the disposal of contaminated waste in the territories of small or poor states that lack the capacity to impose strict safeguards. There should be a clear presumption that all countries that generate nuclear waste dispose of it within their own territories or under strictly monitored agreements between states.
54. During the last 25 years, a growing awareness of the difficulties outlined above has resulted in a wide range of reactions from technical experts, the public, and governments. Many experts still feel that much can be learned from the problems experienced up to now. They argue that if the public climate allows them to solve the nuclear waste disposal and decommissioning issues, and the cost of borrowing money remains reasonably below its 1980-82 peak, then in the absence of viable new supply alternatives there is no reason why nuclear energy should not emerge as a strong runner in the 1990s. At the other extreme, many experts take the view that there are so many unsolved problems and too many risks for society to continue with a nuclear future. Public reactions also vary. Some countries have exhibited little public reaction; in others there appears to be a high level of anxiety that expresses itself in anti-nuclear results in public opinion polls or large anti-nuclear campaigns.
Today the assessment of practical consequences can be based on practical experience. The consequences of Chernobyl have made Soviet specialists once again pose a question: Is not the development of nuclear energy on an industrial scale premature? Will it not be fatal to our civilization, to the ecosystem of our planet? On our planet so rich in all sorts of energy sources, this question can be discussed quite calmly. We have a real choice in this, both on a state and a governmental level, and also on the level of individuals and professionals.
We must put all our efforts to improve the technology itself, to develop and elaborate strict standards and norms of quality, of safety of a technology. We must work for the creation of anti-accident centres and centres devoting themselves to compensating for the losses to the environment. The upgrading of the industrial level of safety and the solution of the problem of the relations between man and machine would be a lot more natural thing to do than concentrating the efforts on only one element of the energy structure in the world. This would benefit the whole of humanity.
V. A. Legasov
55. And so, whilst some states still remain nuclear-free, today nuclear reactors supply about 15 per cent of all the electricity generated. Total electricity production worldwide is in turn equivalent to around 15 per cent of global primary energy supply. Roughly one-quarter of all countries worldwide have reactors. In 1986, there were 366 working and a further 140 planned,/51 with 10 governments possessing about 90 per cent of all installed capacity (more than 5 GW(e)). Of these, there are 8 with a total capacity of more than 9 GW(e),/52 which provided the following percentages of electric power in 1985: France, 65; Sweden, 42; Federal Republic of Germany, 31; Japan, 23; United Kingdom, 19; United States, 16; Canada, 13; and USSR, 10. According to IAEA, in 1985 there were 55 research reactors worldwide, 33 of them in developing countries./53
56. Nevertheless, there is little doubt that the difficulties referred to above have in one way or another contributed to a scaling back of future nuclear plans - in some countries, to a de facto nuclear pause. In Western Europe and North America, which today have almost 75 per cent of current world capacity, nuclear provides about one-third of the energy that was forecast for it 10 years ago. Apart from France, Japan, the USSR, and several other East European countries that have decided to continue with their nuclear programmes, ordering, construction, and licensing prospects for new reactors in many other countries look poor. In fact, between 1972 and 1986, earlier global projections of estimated capacity for the year 2000 have been revised downwards by a factor of nearly seven. Despite this, the growth of nuclear at around 15 per cent a year over the last 20 years is still impressive./54
57. Following Chernobyl, there were significant changes in the nuclear stance of certain governments. Several - notably China, the Federal Republic of Germany, France, Japan, Poland, United Kingdom, United States, and the USSR - have maintained or reaffirmed their pro-nuclear policy. Others with a 'no nuclear' or a 'phase-out' policy (Australia, Austria, Denmark, Luxembourg, New Zealand, Norway, Sweden - and Ireland with an unofficial anti-nuclear position) have been joined by Greece and the Philippines. Meanwhile, Finland, Italy, the Netherlands, Switzerland, and Yugoslavia are re-investigating nuclear safety and/or the anti-nuclear arguments, or have introduced legislation tying any further growth of nuclear energy and export/import of nuclear reactor technology to a satisfactory solution of the problem of disposal of radioactive wastes. Several countries have been concerned enough to conduct referenda to test public opinion regarding nuclear power.
58. These national reactions indicate that as they continue to review and update all the available evidence, governments tend to take up three possible positions:
The discussion in the Commission also reflected these tendencies, views, and positions.
59. But whichever policy is adopted, it is important that the vigorous promotion of energy-efficient practices in all energy sectors and large-scale programmes of research, development, and demonstration for the safe and environmentally benign use of all promising energy sources, especially renewables, be given the highest priority.
60. Because of potential transboundary effects, it is essential that governments cooperate to develop internationally agreed codes of practice covering technical, economic, social (including health and environment aspects), and political components of nuclear energy. In particular, international agreement must be reached on the following specific items:
61. For many reasons, especially including the failure of the nuclear weapons states to agree on disarmament, the Nonproliferation Treaty has not proved to be a sufficient instrument to prevent the proliferation of nuclear weapons, which still remains a serious danger to world peace. We therefore recommend in the strongest terms the construction of an effective international regime covering all dimensions of the problem. Both nuclear weapons states and non-nuclear weapons states should undertake to accept safeguards in accordance with the statutes of IAEA.
62. Additionally, an international regulatory function is required, including inspection of reactors internationally. This should be quite separate from the role of IAEA in promoting nuclear energy.
63. The generation of nuclear power is only justifiable if there are solid solutions to the presently unsolved problems to which it gives rise. The highest priority must be accorded to research and development on environmentally sound and economically viable alternatives, as well as on means of increasing the safety of nuclear energy.
64. Seventy per cent of the people in developing countries use wood and, depending on availability, burn anywhere from an absolute minimum of about 350 kilogrammes to 2,900 kilogrammes of dry wood annually, with the average being around 700 kilogrammes per person./55 Rural woodfuel supplies appear to be steadily collapsing in many developing countries, especially in Sub-Saharan Africa./56 At the same time, the rapid growth of agriculture, the pace of migration to cities, and the growing numbers of people entering the money economy are placing unprecedented pressures on the biomass base/57 and increasing the demand for commercial fuels: from wood and charcoal to kerosene, liquid propane, gas, and electricity. To cope with this, many developing country governments have no option but to immediately organize their agriculture to produce large quantities of wood and other plant fuels.
65. Wood is being collected faster than it can regrow in many developing countries that still rely predominantly on biomass - wood, charcoal, dung, and crop residues - for cooking, for heating their dwellings, and even for lighting. FAO estimates suggest that in 1980, around 1.3 billion people lived in wood-deficit areas./58 If this population-driven overharvesting continues at present rates, by the year 2000 some 2.4 billion people may be living in areas where wood is 'acutely scarce or has to be obtained elsewhere'. These figures reveal great human hardship. Precise data on supplies are unavailable because much of the wood is not commercially traded but collected by the users, principally women and children, but there is no doubt that millions are hard put to find substitute fuels, and their numbers are growing.
66. The fuelwood crisis and deforestation - although related - are not the same problems. Wood fuels destined for urban and industrial consumers do tend to come from forests. But only a small proportion of that used by the rural poor comes from forests. Even in these cases, villagers rarely chop down trees; most collect dead branches or cut them from trees./59
67. When fuelwood is in short supply, people normally economize; when it is no longer available, rural people are forced to burn such fuels as cow dung, crop stems and husks, and weeds. Often this does no harm, since waste products such as cotton stalks are used. But the burning of dung and certain crop residues may in some cases rob the soil of needed nutrients. Eventually extreme fuel shortages can reduce the number of cooked meals and shorten the cooking time, which increases malnourishment.
68. Many urban people rely on wood, and most of this is purchased. Recently, as the price of wood fuels has been rising, poor families have been obliged to spend increasing proportions of their income on wood. In Addis Ababa and Maputo, families may spend a third to half of their incomes this way./60 Much work has been done over the past 10 years to develop fuel-efficient stoves, and some of these new models use 30-50 per cent less fuel. These, as well as aluminium cooking pots and pressure cookers that also use much less fuel, should be made more widely available in urban areas.
Fuelwood and charcoal are, and will remain, the major sources of energy for the great majority of rural people in developing countries. The removal of trees in both semiarid and humid land in African countries is a result to a large extent of increasing
73. Renewable energy sources could in theory provide 10-13TW annually - equal to current global energy consumption./63 Today they provide about 2TW annually, about 21 per cent of the energy consumed worldwide, of which 15 per cent is biomass and 6 per cent hydropower. However, most of the biomass is in the form of fuelwood and agricultural and animal wastes. As noted above, fuelwood can no longer be thought of as a 'renewable' resource in many areas, because consumption rates have overtaken sustainable yields.
74. Although worldwide reliance on all these sources has been growing by more than 10 per cent a year since the late 1970s, it will be some time before they make up a substantial portion of the world's energy budget. Renewable energy systems are still in a relatively primitive stage of development. But they offer the world potentially huge primary energy sources, sustainable in perpetuity and available in one form or another to every nation on Earth. A substantial and sustained commitment to further research and development will be required, however, if their potential is to be realized.
75. Wood as a renewable energy source is usually thought of as naturally occurring trees and shrubs harvested for local domestic use. Wood, however, is becoming an important feedstock, specially grown for advanced energy conversion processes in developing as well as industrial countries for the production of process heat, electricity, and potentially other fuels such as combustible gases and liquids.
76. Hydropower, second to wood among the renewables, has been expanding at nearly 4 per cent annually. Although hundreds of thousands of megawatts of hydropower have been harnessed throughout the world, the remaining potential is huge./64 Interstate cooperation in hydropower development among neighbouring developing countries could revolutionize supply potential, especially in Africa.
In the choice of resources to be utilized we should not stare at renewable resources of energy blindly, we should not blow it out of proportion, we should not promote it for the sake of the environment per se. Instead we should develop and utilize all resources available, renewable sources of energy included, as a long-term endeavour requiring a continuous and sustained effort that will not be subject to short-term economic fluctuations, in order that we, in Indonesia, will achieve a successful and orderly transition to a more diversified and balanced structure of energy supply and environmentally sound energy supply system, which is the ultimate goal of our policy.
Speaker from the floor
77. Solar energy use is small globally, but it is beginning to assume an important place in the energy consumption patterns of some countries. Solar water and household heating is widespread in many parts of Australia, Greece, and the Middle East. A number of East European and developing countries have active solar energy programmes, and the United States and Japan support solar sales of several hundred million dollars a year. With constantly improving solar thermal and solar electric technologies, it is likely that their contribution will increase substantially. The cost of photovoltaic equipment has fallen from around $500-600 per peak watt to $5 and is approaching the $1-2 level where it can compete with conventional electricity production./65 But even at $5 per peak watt, it still provides electricity to remote places more cheaply than building power lines.
78. Wind power has been used for centuries - mainly for pumping water. Recently its use has been growing rapidly in regions such as California and Scandinavia. In these cases the wind turbines are used to generate electricity for the local electricity grid. The costs of wind-generated electricity, which benefited initially from substantial tax incentives, have fallen dramatically in California in the last five years and may possibly be competitive with other power generated there within a decade./66 Many countries have successful but small wind programmes, but the untapped potential is still high.
79. The fuel alcohol programme in Brazil produced about 10 billion litres of ethanol from sugar-cane in 1984 and replaced about 60 per cent of the gasoline that would have been required./67 The cost has been estimated at $50-60 per barrel of gasoline replaced. When subsidies are removed, and a true exchange rate is used, this is competitive at 1981 oil prices. With present lower oil prices, the programme has become uneconomical. But it saves the nation hard currency, and it provides the additional benefits of rural development, employment generation, increased self-reliance, and reduced vulnerability to crises in the world oil markets.
80. The use of geothermal energy, from natural underground heat sources, has been increasing at more than 15 per cent per year in both industrial and developing countries. The experience gained during the past decades could provide the basis for a major expansion of geothermal capacity./68 By contrast, technologies for low-grade heat via heat pumps or from solar ponds and ocean thermal gradients are promising but still mostly at the research and development stage.
81. These energy sources are not without their health and environment risks. Although these range from the rather trivial to the very serious, public reactions to them are not necessarily in proportion to the damage sustained. For instance, some of the commonest difficulties with solar energy are, somewhat surprisingly, the injuries from roof falls during solar thermal maintenance and the nuisance of sun-glare off collector glass surfaces. Likewise, a modern wind turbine can be a significant noise nuisance to people living nearby. Yet these apparently small problems often arouse very strong public reactions.
82. But these are still minor issues compared with the ecosystem destruction at hydropower sites or the uprooting of homesteads in the areas to be flooded, as well as the health risks from toxic gases generated by rotting submerged vegetation and soils, or from waterborne diseases such as schistosomiasis (snail fever). Hydrodams also act as an important barrier to fish migration and frequently to the movement of land animals. Perhaps the worst problem they pose is the danger of catastrophic rupture of the dam-wall and the sweeping away or flooding of human settlements downstream - about once a year somewhere in the world. This risk is small but not insignificant.
83. One of the most widespread chronic problems is the eye and lung irritation caused by woodsmoke in developing countries. When agricultural wastes are burned, pesticide residues inhaled from the dusts or smoke of the crop material can be a health problem. Modern biofuel liquids have their own special hazards. Apart from competing with food crops for good agricultural land, their production generates large quantities of organic waste effluent, which if not used as a fertilizer can cause serious water pollution. Such fuels, particularly methanol, may produce irritant or toxic combustion products. All these and many other problems, both large and small, will increase as renewable energy systems are developed.
84. Most renewable energy systems operate best at small to medium scales, ideally suited for rural and suburban applications. They are also generally labour-intensive, which should be an added benefit where there is surplus labour. They are less susceptible than fossil fuels to wild price fluctuations and foreign exchange costs. Most countries have some renewable resources, and their use can help nations move towards self-reliance.
85. The need for a steady transition to a broader and more sustainable mix of energy sources is beginning to become accepted. Renewable energy sources could contribute substantially to this, particularly with new and improved technologies, but their development will depend in the short run on the reduction or removal of certain economic and institutional constraints to their use. These are formidable in many countries. The high level of hidden subsidies for conventional fuels built into the legislative and energy programmes of most countries distorts choices against renewables in research and development, depletion allowances, tax write-offs, and direct support of consumer prices. Countries should undertake a full examination of all subsidies and other forms of support to various sources of energy and remove those that are not clearly justified.
86. Although the situation is changing rapidly in some jurisdictions, electrical utilities in most have a supply monopoly on generation that allows them to arrange pricing policies that discriminate against other, usually small, suppliers./69 In some countries a relaxation of this control, requiring utilities to accept power generated by industry, small systems, and individuals, has created opportunities for the development of renewables. Beyond that, requiring utilities to adopt an end-use approach in planning, financing, developing, and marketing energy can open the door to a wide range of energy-saving measures as well as renewables.
87. Renewable energy sources require a much higher priority in national energy programmes. Research, development, and demonstration projects should command funding necessary to ensure their rapid development and demonstration. With a potential of 10TW or so, even if 3-4TW were realized, it would make a crucial difference to future primary supply, especially in developing countries, where the background conditions exist for the success of renewables. The technological challenges of renewables are minor compared with the challenge of creating the social and institutional frameworks that will ease these sources into energy supply systems.
88. The Commission believes that every effort should be made to develop the potential for renewable energy, which should form the foundation of the global energy structure during the 21st Century. A much more concerted effort must be mounted if this potential is to be realized. But a major programme of renewable energy development will involve large costs and high risks, particularly massive-scale solar and biomass industries. Developing countries lack the resources to finance all but a small fraction of this cost although they will be important users and possibly even exporters. Large-scale financial and technical assistance will therefore be required.
89. Given the above analysis, the Commission believes that energy efficiency should be the cutting edge of national energy policies for sustainable development. Impressive gains in energy efficiency have been made since the first oil price shock in the 1970s. During the past 13 years, many industrial countries saw the energy content of growth fall significantly as a result of increases in energy efficiency averaging 1.7 per cent annually between 1973 and 1983./70 And this energy efficiency solution costs less than the extra primary supplies that would otherwise be required to run traditional equipment.
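A quick back-of-the-envelope check, not part of the report itself, of what a 1.7 per cent annual efficiency gain compounds to over the decade cited:

```python
# Cumulative effect of the 1.7 per cent annual efficiency gain quoted above (1973-1983).
annual_gain = 0.017
years = 1983 - 1973

cumulative = (1 + annual_gain) ** years - 1
print(f"Cumulative improvement over {years} years: ~{cumulative:.0%}")  # roughly 18%
```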
90. The cost-effectiveness of 'efficiency' as the most environmentally benign 'source' of energy is well established. The energy consumption per unit of output from the most efficient processes and technologies is one-third to less than one-half that of typically available equipment./71
91. This is true of appliances for cooking, lighting and refrigeration, and space cooling and heating - needs that are growing rapidly in most countries and putting severe pressures on the available supply systems. It is also true of agricultural cultivation and irrigation systems, of the automobile, and of many industrial processes and equipment.
92. Given the large disproportion in per capita energy consumption between developed and developing countries in general, it is clear that the scope and need for energy saving is potentially much higher in industrial than in developing countries. Nonetheless, energy efficiency is important everywhere. The cement factory, automobile, or irrigation pump in a poor country is fundamentally no different from its equivalent in the rich world. In both, there is roughly the same scope for reducing the energy consumption or peak power demand of these devices without loss of output or welfare. But poor countries will gain much more from such reductions.
93. The woman who cooks in an earthen pot over an open fire uses perhaps eight times more energy than an affluent neighbour with a gas stove and aluminium pans. The poor who light their homes with a wick dipped in a jar of kerosene get one-fiftieth of the illumination of a 100-watt electric bulb, but use just as much energy. These examples illustrate the tragic paradox of poverty. For the poor, the shortage of money is a greater limitation than the shortage of energy. They are forced to use 'free' fuels and inefficient equipment because they do not have the cash or savings to purchase energy-efficient fuels and end-use devices. Consequently, collectively they pay much more for a unit of delivered energy services.
94. In most cases, investments in improved end-use technologies save money over time through lowered energy-supply needs. The cost of improving end-use equipment is frequently much less than the cost of building more primary supply capacity. In Brazil, for example, it has been shown that for a discounted total investment of $4 billion in more efficient end-use technologies (such as more efficient refrigerators, street-lighting, or motors) it would be feasible to defer construction of 21 gigawatts of new electrical supply capacity, corresponding to a discounted capital savings for new supplies of $19 billion in the period 1986 to 2000./72
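To make the Brazilian comparison concrete, the arithmetic implied by the figures above can be sketched as follows; this is an illustration only, using the numbers quoted in the text:

```python
# Figures quoted in the text for Brazil, 1986-2000 (discounted values).
efficiency_investment = 4e9       # US$ invested in more efficient end-use technologies
deferred_capacity_kw = 21e6       # 21 gigawatts of new supply capacity deferred
supply_capital_saved = 19e9       # US$ capital cost of the deferred supply

net_saving = supply_capital_saved - efficiency_investment
print(f"Net discounted saving: ${net_saving / 1e9:.0f} billion")                      # $15 billion
print(f"Efficiency route: ~${efficiency_investment / deferred_capacity_kw:.0f}/kW")   # ~$190 per kW
print(f"New supply route: ~${supply_capital_saved / deferred_capacity_kw:.0f}/kW")    # ~$905 per kW
```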
We must change our attitude towards consumption goods in developed countries and we must create technological advances that will allow us to carry on economic development using less energy. We must ask ourselves: can we solve the problems of underdevelopment without using or increasing the tremendous amount of energy used by these countries?
The idea that developing countries use very little energy is an incorrect idea. We find that the poorest countries of all have a different problem; their problem is inefficient use of energy. Medium countries such as Brazil use more efficient and modern sources of fuel. The great hope for these countries is that the future will be built not based on technologies of the past, but using advanced technology. This will allow them to leap forward in relation to countries that are already developed.
95. There are many examples of successful energy efficiency programmes in industrial countries. The many methods used successfully to increase awareness include information campaigns in the media, technical press, and schools; demonstrations of successful practices and technologies; free energy audits; energy 'labelling' of appliances; and training in energy-saving techniques. These should be quickly and widely extended. Industrialized countries account for such a large proportion of global energy consumption that even small gains in efficiency can have a substantial impact on conserving reserves and reducing the pollution load on the biosphere. It is particularly important that consumers, especially large commercial and industrial agencies, obtain professional audits of their energy use. This kind of energy 'book-keeping' will readily identify those places in their consumption patterns where significant savings can be made.
96. Energy pricing policies play a critical role in stimulating efficiency. At present, they sometimes include subsidies and seldom reflect the real costs of producing or importing the energy, particularly when exchange rates are undervalued. Very rarely do they reflect the external damage costs to health, property, and the environment. Countries should evaluate all hidden and overt subsidies to see how far real energy costs can be passed on to the consumer. The true economic pricing of energy - with safeguards for the very poor - needs to be extended in all countries. Large numbers of countries both industrial and developing are already adopting such policies.
97. Developing countries face particular constraints in saving energy. Foreign exchange difficulties can make it hard to purchase efficient but costly energy conversion and end-use devices. Energy can often be saved cost-effectively by fine-tuning already functioning systems./73 But governments and aid agencies may find it less attractive to fund such measures than to invest in new large-scale energy supply hardware that is perceived as a more tangible symbol of progress.
98. The manufacture, import, or sale of equipment conforming to mandatory minimal energy consumption or efficiency standards is one of the most powerful and effective tools in promoting energy efficiency and producing predictable savings. International cooperation may be required when such equipment is traded from nation to nation. Countries and appropriate regional organizations should introduce and extend increasingly strict efficiency standards for equipment and mandatory labelling of appliances.
99. Many energy efficiency measures cost nothing to implement. But where investments are needed, they are frequently a barrier to poor households and small-scale consumers, even when pay-back times are short. In these latter cases, special small loan or hire-purchase arrangements are helpful. Where investment costs are not insurmountable, there are many possible mechanisms for reducing or spreading the initial investment, such as loans with favourable repayment periods and 'invisible' measures such as loans repaid by topping up the new, reduced energy bills to the pre-conservation levels.
100. Transport has a particularly important place in national energy and development planning. It is a major consumer of oil, accounting for 50-60 per cent of total petroleum use in most developing countries./74 It is often a major source of local air pollution and regional acidification of the environment in industrial and developing countries. Vehicle markets will grow much more rapidly in developing countries, adding greatly to urban air pollution, which in many cities already exceeds international norms. Unless strong action is taken, air pollution could become a major factor limiting industrial development in many Third World cities.
101. In the absence of higher fuel prices, mandatory standards providing for a steady increase in fuel economy may be necessary. Either way, the potential for substantial future gains in fuel economy is enormous. If momentum can be maintained, the current average fuel consumption of approximately 10 litres per 100 kilometres in the fleet of vehicles in use in industrial countries could be cut in half by the turn of the century./75
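As a rough illustration of what halving that figure would mean for a single vehicle, consider the sketch below; the annual driving distance is an assumed value, not from the report:

```python
# Fuel saved per vehicle if fleet-average consumption falls from 10 to 5 litres per 100 km.
current_l_per_100km = 10.0
target_l_per_100km = 5.0
annual_distance_km = 15_000      # assumed annual distance driven per vehicle

def litres_per_year(l_per_100km: float, km: float) -> float:
    return l_per_100km * km / 100.0

saving = litres_per_year(current_l_per_100km, annual_distance_km) - \
         litres_per_year(target_l_per_100km, annual_distance_km)
print(f"Fuel saved: {saving:.0f} litres per vehicle per year")   # 750 litres
```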
102. A key issue is how developing countries can rapidly improve the fuel economy of their vehicles when these are, on average, used for twice as long as those in industrial countries, cutting rates of renewal and improvement in half. Licensing and import agreements should be reviewed to ensure access to the best available fuel-efficient designs and production processes. Another important fuel-saving strategy, especially in the growing cities of developing countries, is the organization of carefully planned public transport systems.
103. Industry accounts for 40-60 per cent of all energy consumed in industrial countries and 10-40 per cent in developing countries. (See Chapter 6.) There has been significant improvement in the energy efficiency of production equipment, processes, and products. In developing countries, energy savings of as much as 20-30 per cent could be achieved through skilful management of industrial development.
104. Agriculture worldwide is only a modest energy consumer, accounting for about 3.5 per cent of commercial energy use in the industrial countries and 4.5 per cent in developing countries as a whole./76 A strategy to double food production in the Third World through increases in fertilizers, irrigation, and mechanization would add 140 million tons of oil equivalent to their agricultural energy use. This is only some 5 per cent of present world energy consumption and almost certainly a small part of the energy that could be saved in other sectors in the developing world through appropriate efficiency measures./77
105. Buildings offer enormous scope for energy savings, and perhaps the most widely understood ways of increasing energy efficiency are in the home and workplace. Buildings in the tropics are now commonly designed to avoid as much direct solar heating as possible by having very narrow east- and west-facing walls, but with long sides facing north and south and protected from the overhead sun by recessed windows or wide sills.
106. An important method of heating buildings is by hot water produced during electricity production and piped around whole districts, providing both heat and hot water. This extremely efficient use of fossil fuels demands a coordination of energy supply with local physical planning, which few countries are institutionally equipped to handle./78 Where it has been successful, there has usually been local authority involvement in or control of regional energy-services boards, such as in Scandinavia and the USSR. Given the development of these or similar institutional arrangements, the cogeneration of heat and electricity could revolutionize the energy efficiency of buildings worldwide.
107. There is general agreement that the efficiency gains achieved by some industrialized countries over the past 13 years were driven largely by higher energy prices, triggered by higher oil prices. Prior to the recent fall in oil prices, energy efficiency was growing at a rate of 2.0 per cent annually in some countries, having increased gradually year by year./79
108. It is doubtful whether such steady improvements can be maintained and extended if energy prices are held below the level needed to encourage the design and adoption of more energy-efficient homes, industrial processes, and transportation vehicles. The level required will vary greatly within and between countries, depending on a wide range of factors. But whatever it is, it should be maintained. In volatile energy markets, the question is how.
109. Nations intervene in the 'market price' of energy in a variety of ways. Domestic taxes (or subsidies) on electrical power rates, oil, gas and other fuels are most common. They vary greatly between and even within countries where different states, provinces, and sometimes even municipalities have the right to add their own tax. Although taxes on energy have seldom been levied to encourage the design and adoption of efficiency measures, they can have that result if they cause energy prices to rise beyond a certain level - a level that varies greatly among jurisdictions.
110. Some nations also maintain higher than market prices on energy through duties on imported electricity, fuel, and fuel products. Others have negotiated bilateral pricing arrangements with oil and gas producers in which they stabilize prices for a period of time.
111. In most countries, the price of oil eventually determines the price of alternative fuels. Extreme fluctuations in oil prices, such as the world has experienced recently, endanger programmes to encourage conservation. Many positive energy developments worldwide that made sense with oil above $25 per barrel are harder to justify at lower prices. Investments in renewables, energy-efficient industrial processes, transport vehicles, and energy services may be reduced. Most are needed to ease the transition to a safer and more sustainable energy future beyond this century. This goal requires a long, uninterrupted effort to succeed.
112. Given the importance of oil prices on international energy policy, the Commission recommends that new mechanisms for encouraging dialogue between consumers and producers be explored.
113. If the recent momentum behind annual gains in energy efficiency is to be maintained and extended, governments need to make it an explicit goal of their policies for energy pricing to consumers. Prices needed to encourage the adoption of energy-saving measures may be achieved by any of the above means or by other means. Although the Commission expresses no preference, conservation pricing requires that governments take a long-term view in weighing the costs and benefits of the various measures. They need to operate over extended periods, dampening wild fluctuations in the price of primary energy, which can impair progress towards energy conservation.
114. It is clear that a low energy path is the best way towards a sustainable future. But given efficient and productive uses of primary energy, this need not mean a shortage of essential energy services. Within the next 50 years, nations have the opportunity to produce the same levels of energy services with as little as half the primary supply currently consumed. This requires profound structural changes in socio-economic and institutional arrangements and is an important challenge to global society.
115. More importantly, it will buy the time needed to mount major programmes on sustainable forms of renewable energy, and so begin the transition to a safer, more sustainable energy era. The development of renewable sources will depend in part on a rational approach to energy pricing to secure a stable matrix for such progress. Both the routine practice of efficient energy use and the development of renewables will help take pressure off traditional fuels, which are most needed to enable developing countries to realize their growth potential worldwide.
116. Energy is not so much a single product as a mix of products and services, a mix upon which the welfare of individuals, the sustainable development of nations, and the life-supporting capabilities of the global ecosystem depend. In the past, this mix has been allowed to flow together haphazardly, the proportions dictated by short-term pressures on and short-term goals of governments, institutions, and companies. Energy is too important for its development to continue in such a random manner. A safe, environmentally sound, and economically viable energy pathway that will sustain human progress into the distant future is clearly imperative. It is also possible. But it will require new dimensions of political will and institutional cooperation to achieve it.
1/ World Bank, World Development Report 1986 (New York: Oxford University Press, 1986).
2/ British Petroleum Company, BP Statistical Review of World Energy (London: 1986).
3/ Medium variant in Department of International Economic and Social Affairs, World Population Prospects as Assessed in 1980, Population Studies No. 78 (Annex), and Long Range Population Projections of the World and Major Regions 2025-2150, Five Variants as Assessed in 1980 (New York: UN, 1981).
4/ For a useful comparison of various scenarios, see J. Goldemberg et al., 'An End-Use Oriented Global Energy Strategy', Annual Review of Energy, Vol. 10, 1985; and W. Keepin et al., 'Emissions of CO2 into the Atmosphere', in B. Bolin et al. (eds.), The Greenhouse Effect, Climate Change and Ecosystems (Chichester, UK: John Wiley & Sons, 1986).
5/ U. Colombo and O. Bernadini, 'A Low Energy Growth Scenario and the Perspectives for Western Europe', Report for the Commission of the European Communities Panel on Low Energy Growth, 1979.
6/ Goldemberg et al., 'Global Energy Strategy', op. cit.
7/ A.B. Lovins et al., 'Energy Strategy for Low Climatic Risk', Report for the German Federal Environment Agency, 1981.
8/ J.A. Edmonds et al., 'An Analysis of Possible Future Atmospheric Retention of Fossil Fuel CO2', Report for U.S. Department of Energy, DOE/OR/21400 1, Washington, DC, 1984.
9/ J-R Frisch (ed.), Energy 2000-2020: World Prospects and Regional Stresses, World Energy Conference (London: Graham and Trotman, 1983).
10/ Energy Systems Group of the International Institute for Applied Systems Analysis, Energy in a Finite World - A Global Systems Analysis (Cambridge, Mass.: Ballinger, 1981).
11/ World Bank, The Energy Transition in Developing Countries (Washington, DC: 1983).
12/ World Meteorological Organization, A Report of the International Conference on the Assessment of the Role of Carbon Dioxide and of Other Greenhouse Gases in Climate Variations and Associated Impacts, Villach, Austria, 9-15 October 1985, WMO No. 661 (Geneva: WMO/ICSU/UNEP, 1986).
13/ B.N. Lohani, 'Evaluation of Air Pollution Control Programmes and Strategies in Seven Asian Capital Cities', prepared for WCED, 1986; H. Weidner, 'Air Pollution Control Strategies and Policies in the Federal Republic of Germany', prepared for WCED, 1986; M. Hashimoto, 'National Air Quality Management Policy of Japan', prepared for WCED, 1985; CETESB, 'Air Pollution Control Programme and Strategies in Brazil - Sao Paulo and Cubatao Areas, 1985', prepared for WCED, 1985.
14/ National Research Council, Acid Deposition: Long Term Trends (Washington, DC: National Academy Press, 1985); L.P. Muniz and H. Leiverstad, 'Acidification Effects on Freshwater Fish', in D. Drablos and A. Tollan (eds.), Ecological Impact of Acid Precipitation (Oslo: SNSF, 1980); L. Hallbacken and C.O. Tamm, 'Changes in Soil Acidity from 1927 to 1982-84 in a Forest Area of South-West Sweden', Scandinavian Journal of Forest Research, No. 1, pp. 219-32, 1986.
15/ FAO, Fuelwood Supplies in the Developing Countries, Forestry Paper No. 42 (Rome: 1983); Z. Mikdashi, 'Towards a New Petroleum Order', Natural Resources Forum, October 1986.
16/ Edmonds et al., op. cit.
17/ I.M. Torrens, 'Acid Rain and Air Pollution, A Problem of Industrialization', prepared for WCED, 1985.
18/ Goldemberg et al., 'Global Energy Strategy', op. cit.
19/ British Petroleum Company, op. cit.
20/ WMO, Report of International Conference, op. cit.; I. Mintzer, 'Societal Responses to Global Warming', submitted to WCED Public Hearings, Oslo, 1985; F.K. Hare, 'The Relevance of Climate', submitted to WCED Public Hearings, Ottawa, 1986.
21/ Lohani, op. cit.; Weidner, op. cit.; Hashimoto, op. cit.; CETESB, op. cit.
22/ Torrens, op. cit.; F. Lixun and D. Zhao, 'Acid Rain in China', prepared for WCED, 1985; H. Rodhe, 'Acidification in Tropical Countries', prepared for WCED, 1985; G.T. Goodman, 'Acidification of the Environment, A Policy Ideas Paper', prepared for WCED, 1986.
23/ Torrens, op. cit.
24/ Bolin et al., op. cit.
25/ WMO, Report of International Conference, op. cit.
28/ Goldemberg et al., 'Global Energy Strategy', op. cit.
29/ Mintzer, op. cit.
30/ WMO, Report of International Conference, op. cit.
31/ D.J. Rose et al., Global Energy Futures and CO2 - Induced Climate Change, MITEL Report 83-015 (Cambridge, Mass.: Massachusetts Institute of Technology, 1983); A.M. Perry et al., 'Energy Supply and Demand Implication of CO2', Energy, Vol. 7, pp. 991-1004, 1982.
32/ Bolin et al., op. cit.
33/ G. Brasseur, 'The Endangered Ozone Layer: New Theories on Ozone Depletion', Environment, Vol. 29, No. 1, 1987.
34/ National Research Council, op. cit.; Muniz and Leiverstad, op. cit.
35/ OECD, The State of the Environment (Paris: 1985).
36/ Muniz and Leiverstad, op. cit.
37/ National Research Council, op. cit.
38/ National Swedish Environmental Protection Board, Air Pollution and Acidification (Solna, Sweden, 1986).
39/ J. Lehmhaus et al., 'Calculated and Observed Data for 1980 Compared at EMEP Measurement Stations', Norwegian Meteorological Institute, EMEP/MSC-W Report 1/86, 1986; C.B. Epstein and M. Oppenheimer, 'Empirical Relation Between Sulphur Dioxide Emissions and Acid Deposition Derived from Monthly Data', Nature, No. 323, pp. 245-47, 1985.
40/ 'Neuartige Waldschaden in der Bundesrepublik Deutschland', Das Bundesministerium fur Ernahrung, Landwirtschaft und Forsten, 1983; 'Waldschaden Sernebungen', Das Bundesministerium fur Ernahrung, Landwirtschaft und Forsten, 1985; S. Nilsson, 'Activities of Teams of Specialists: Implications of Air Pollution Damage to Forests for Roundwood Supply and Forest Products Markets: Study on Extent of Damage', TIM/R 124 Add.1 (Restricted), 1986.
41/ S. Postel, 'Stabilizing Chemical Cycles' (after Allgemeine Forst Zeitschrift, Nos. 46 (1985) and 41 (1986)); in L.R. Brown et al., State of the World 1987 (London: W.W. Norton, 1987).
42/ T. Paces, 'Weathering Rates of Gneiss and Depletion of Exchangeable Cations in Soils Under Environmental Acidification', Journal of the Geological Society, No. 143, pp. 673-77, 1986; T. Paces, 'Sources of Acidification in Central Europe Estimated from Elemental Budgets in Small Basins', Nature, No. 315, pp. 31-36, 1985.
43/ Hallbacken and Tamm, op. cit.
44/ G. Tyler et al., 'Metaller i Skogsmark - Deposition och omsattning', SNV PM 1692, Solna, Sweden, 1983.
45/ 'Neuartige Waldschaden', 1983, op. cit; Paces, 'Weathering Rates', op. cit.
46/ Rodhe, op. cit.
47/ R. Eden et al., Energy Economics (New York: Cambridge University Press, 1981); Nuclear Energy Agency, Projected Costs of Generating Electricity from Nuclear and Coal-Fired Power Stations for Commissioning in 1995 (Paris: OECD, 1986).
48/ Nuclear Regulatory Commission, Physical Processes in Reactor Meltdown Accidents, Appendix VIII to Reactor Safety Study (WASH-1400) (Washington, DC: U.S. Government Printing Office, 1975).
49/ S. Islam and K. Lindgren, 'How many reactor accidents will there be?', Nature, No. 322, pp. 691-92, 1986; A.W.K. Edwards, 'How many reactor accidents?', Nature, No. 324, pp. 417-18, 1986.
50/ F.L. Parker et al., The Disposal of High Level Radioactive Waste - 1984, Vols. 1 & 2 (Stockholm: The Beijer Institute, 1984); F.L. Parker and R.E. Kasperson, International Radwaste Policies (Stockholm: The Beijer Institute, in press).
51/ International Atomic Energy Agency, Nuclear Power: Status and Trends, 1986 Edition (Vienna: 1986).
52/ 'World List of Nuclear Power Plants', Nuclear News, August 1986.
53/ IAEA Bulletin, Summer 1986.
54/ C. Flavin, 'Reassessing Nuclear Power', in Brown et al., op. cit.; British Petroleum Company, op. cit.
55/ G. Foley, 'Wood Fuel and Conventional Fuel Demands in the Developing World', Ambio, Vol. 14 No. 5, 1985.
56/ FAO, Fuelwood Supplies, op. cit.; FAO/UNEP, Tropical Forest Resources, Forestry Paper No. 30 (Rome: 1982).
57/ The Beijer Institute, Energy, Environment and Development in Africa, Vols. 1-10 (Uppsala, Sweden: Scandinavian Institute of African Studies, 1984-87); 'Energy Needs in Developing Countries', Ambio, Vol. 14, 1985; E.N. Chidumayo, 'Fuelwood and Social Forestry', prepared for WCED, 1985; G.T. Goodman, 'Forest-Energy in Developing Countries: Problems and Challenges', International Union of Forest Research Organizations, Proceedings, Ljubljana, Yugoslavia, 1986.
58/ FAO, Fuelwood Supplies, op. cit.
59/ Beijer Institute, op. cit.; J. Bandyopadhyay, 'Rehabilitation of Upland Watersheds', prepared for WCED, 1986.
60/ Beijer Institute, op. cit.
61/ R. Overend, 'Bioenergy Conversion Process: A Brief State of the Art and Discussion of Environmental Implications', International Union of Forestry Research Organization, Proceedings, Ljubljana, Yugoslavia, 1986.
62/ W. Fernandes and S. Kulkarni (eds.), Towards a New Forest Policy: People's Rights and Environmental Needs (New Delhi, India: Indian Social Institute, 1983); P.N. Bradley et al., 'Development Research and Energy Planning in Kenya', Ambio, Vol. 14, No. 4, 1985; R. Hosier, 'Household Energy Consumption in Rural Kenya', Ambio, Vol. 14, No. 4, 1985; R. Engelhard et al., 'The Paradox of Abundant On-Farm Woody Biomass, Yet Critical Fuelwood Shortage: A Case Study of Kakamega District (Kenya)', International Union of Forest Research Organization, Proceedings, Ljubljana, Yugoslavia, 1986.
63/ D. Deudney and C. Flavin, Renewable Energy: The Power to Choose (London: W.W. Norton, 1983).
64/ World Resources Institute/International Institute for Environment and Development, World Resources 1987 (New York: Basic Books, in press).
67/ Goldemberg et al., 'Global Energy Strategy', op. cit.; J. Goldemberg et al., 'Ethanol Fuel: A Use of Biomass Energy in Brazil', Ambio, Vol. 14, pp. 293-98, 1985; J. Goldemberg et al., 'Basic Needs and Much More, With One Kilowatt Per Capita', Ambio, Vol. 14, pp. 190-201, 1985.
68/ WRI/IIED, op. cit.
69/ N.J.D. Lucas, 'The Influence of Existing Institutions on the European Transition from Oil', The European, pp. 173-89, 1981.
70/ OECD, op. cit.
71/ E. Hirst et al., 'Recent Changes in U.S. Energy Consumption, What Happened and Why?' in D.J. Rose (ed.), Learning About Energy (New York: Plenum Press, 1986).
72/ H.S. Geller, 'The Potential for Electricity Conservation in Brazil', Companhia Energetica de Sao Paulo, Sao Paulo, Brazil, 1985.
73/ World Bank, Energy Transition in Developing Countries, op. cit.
74/ G. Leach et al., Energy and Growth: A Comparison of Thirteen Industrialized and Developing Countries (London: Butterworth, 1986).
75/ MIT International Automobile Program, The Future of the Automobile (London: George Allen & Unwin, 1984).
76/ FAO, Agriculture: Towards 2000 (Rome: 1981).
78/ Lucas, op. cit.
79/ OECD, op. cit.
|
September 30, 2010 (ENS) - Solar electric cells built at nano-scale have the potential to generate huge amounts of electricity compared to existing solar cells, say Stanford engineers.
Ultra-thin solar cells can absorb sunlight more efficiently than the thicker, more expensive silicon cells used today, because light behaves differently at scales around a nanometer, which measures just one billionth of a meter, the scientists said.
In research published online this week by the journal "Proceedings of the National Academy of Sciences," the scientists show that much more electricity can be generated from sunlight with nano-thin solar cells than with the most efficient silicon solar cells.
The scientists calculate that an organic polymer thin film can absorb as much as 10 times more energy from sunlight than previously thought possible if it is sandwiched between several thin layers of films of carefully calibrated thicknesses that hold the light using a technique called "light trapping."
Diagram of a thin film organic solar cell shows the top layer, a patterned, roughened scattering layer, in green. The organic thin film layer, in red, is where light is trapped and electrical current is generated. (Diagram courtesy Proceedings of the National Academy of Sciences)
"The longer a photon of light is in the solar cell, the better chance the photon can get absorbed," said Shanhui Fan, Stanford associate professor of electrical engineering and senior author of the paper.
The key lies in keeping sunlight in the solar cell long enough to get the maximum amount of energy from it, say Fan and his colleagues.
Light trapping has been used for several decades with silicon solar cells. It is done by roughening the surface of the silicon to cause incoming light to bounce around inside the cell after it penetrates, rather than reflecting right back out as it does off a mirror.
But over the years, no matter how much researchers tweaked the technique, the efficiency of conventional silicon solar cells never rose beyond a certain amount - a physical limit related to the speed at which light travels within a given material.
But light has a dual nature, sometimes behaving as a solid particle, called a photon, and other times as a wave of energy.
Fan and postdoctoral researcher Zongfu Yu decided to explore whether the conventional limit on light trapping held true in a nanoscale setting.
"We all used to think of light as going in a straight line," Fan said. "For example, a ray of light hits a mirror, it bounces and you see another light ray. That is the typical way we think about light in the macroscopic world."
"But if you go down to the nanoscales that we are interested in, hundreds of millionths of a millimeter in scale, it turns out the wave characteristic really becomes important," he said.
Visible light has wavelengths around 400 to 700 nanometers, but even at that small scale, Fan said, many of the structures that Yu analyzed had a theoretical limit close to the conventional limit that had been established by experiment.
"One of the surprises with this work was discovering just how robust the conventional limit is," Fan said.
It was only when Yu began investigating the behavior of light inside a material of deep subwavelength-scale - much smaller than the wavelength of the light - that it became evident to him that light could be confined for a longer time, increasing energy absorption beyond the conventional limit at the macroscale.
"The amount of benefit of nanoscale confinement we have shown here really is surprising," said Yu, lead author of the paper. "Overcoming the conventional limit opens a new door to designing highly efficient solar cells."
Yu found success when he sandwiched the organic thin film between two layers of material that acted as confining layers once the light passed through the upper one into the thin film.
On top of the upper layer, he placed a patterned rough-surfaced layer designed to send the incoming light off in different directions as it entered the thin film.
By varying the characteristics of the different layers, Yu was able to achieve a 12-fold increase in the absorption of light within the thin film, compared to the conventional limit.
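The article does not state the form of the "conventional limit," but in ray optics it is commonly taken to be a 4n² enhancement of the absorption path (the Yablonovitch limit), where n is the material's refractive index. The sketch below simply evaluates that classical limit for two assumed, typical refractive indices; the values are illustrative and not taken from the paper.

```python
# Classical ray-optics light-trapping limit, often quoted as 4 * n^2.
# Refractive indices below are typical assumed values, not figures from the article.
def conventional_limit(n: float) -> float:
    """Maximum absorption-path enhancement under the classical ray-optics limit."""
    return 4.0 * n ** 2

for material, n in [("crystalline silicon", 3.5), ("organic polymer film", 1.8)]:
    print(f"{material:20s} n = {n:.1f} -> ~{conventional_limit(n):.0f}x enhancement limit")
```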
Their method of generating electricity is cost-effective, the Stanford scientists say. Nanoscale solar cells would cost less to manufacture than silicon cells as the organic polymer thin films and other materials used are less expensive than silicon and, being nanoscale, the quantities of material required for the cells are small.
The organic materials also have the advantage of being manufactured in chemical reactions in solution. They don't need high-temperature or vacuum processing, as is required for silicon manufacture.
"Most of the research these days is looking into many different kinds of materials for solar cells," Fan said. "Where this will have a larger impact is in some of the emerging technologies; for example, in organic cells. If you do it right, there is enormous potential associated with it."
Aaswath Raman, a graduate student in applied physics, also worked on the research and is a coauthor of the paper.
The project was supported by funding from the King Abdullah University of Science and Technology, which supports the Center for Advanced Molecular Photovoltaics at Stanford, and by the U.S. Department of Energy.
Copyright Environment News Service (ENS) 2010. All rights reserved.
|
Carbon dioxide can be released in many ways, including transport, land clearance, and the production and consumption of food, fuels, manufactured goods, materials, wood, roads, buildings, and services.
Once the size of a carbon footprint is known, a plan can be formulated to reduce it using methods such as technological developments, better process and product management, consumption strategies, and alternative projects, such as carbon offsetting, which includes solar or wind energy or reforestation.
Carbon footprints are affected by the size of a population and its economic output. Individuals and businesses look at these main factors when trying to decrease their carbon footprint. Researchers advise that the most useful ways to reduce a carbon footprint are to cut the amount of energy required for production or to decrease dependence on carbon-producing fuels.
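In practice this bookkeeping reduces to summing each activity multiplied by an emission factor. The sketch below illustrates the idea; the activity names and factors are hypothetical placeholders, not data from this article.

```python
# footprint = sum(activity amount * emission factor), in kg CO2e.
# All amounts and factors below are made-up illustrative values.
activities = {
    "car_km":          (12_000, 0.20),   # (annual amount, kg CO2e per unit)
    "electricity_kwh": (3_500, 0.45),
    "flights_km":      (4_000, 0.15),
}

footprint_kg = sum(amount * factor for amount, factor in activities.values())
print(f"Estimated footprint: {footprint_kg / 1000:.1f} tonnes CO2e per year")

# A reduction plan can then be tested by lowering an amount (consumption strategies)
# or a factor (e.g. switching to lower-carbon energy), as described above.
```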
|
Facts about Ebola:
- Ebola, also known as EHF (Ebola hemorrhagic fever), is a disease caused by the Ebola virus.
- The current outbreak of Ebola is severe in West Africa, and travel warnings for the affected areas have been issued.
- The Ebola virus infects wild animals such as fruit bats, monkeys, gorillas, and chimpanzees. These animals are hosts of the Ebola virus.
- A main symptom of Ebola is bleeding from the nose and eyes, and even blood in vomit, which leads doctors to give fluids and oxygen to maintain the patient's blood pressure.
- There is no vaccine to prevent Ebola.
- Ebola is often a fatal disease.
Guinea – 1350 cases, 778 deaths
Liberia – 4076 cases, 2316 deaths
Nigeria – 20 cases, 8 deaths
Senegal – 1 case, 0 deaths (infection originated in Guinea)
Sierra Leone – 2950 cases, 930 deaths
What is Ebola Virus?
Ebola is a disease caused by the Ebola virus, which was first identified near the Ebola River in the Democratic Republic of the Congo, and it may spread throughout Africa and to other countries. According to the WHO, the current outbreak in West Africa is the largest and most complex outbreak to date, and it has caused many deaths.
How is Ebola transmitted?
Fruit bats are considered the natural host of the Ebola virus. Ebola enters the human population through close contact with the body fluids of infected animals such as chimpanzees, monkeys, and gorillas. After that, it is transmitted from human to human in the same way, through blood or other secretions coming into contact with broken skin. Ebola is not transmitted through air or water.
Symptoms of Ebola:
Ebola weakens the immune system, and its symptoms feel like a flu that usually shows up within 2-3 weeks. The breakdown of the immune system brings high fever, weakness, and bleeding from the nose, ears, and eyes as well as internally. It also causes rashes, diarrhea, vomiting, cough, headache, body aches, and lack of appetite.
Diagnosis of Ebola:
A blood test can tell whether or not a person is infected with Ebola. If Ebola is found, the patient should be separated from the public to prevent the spread.
Treatment of Ebola:
There is not yet a cure for Ebola, although many doctors and researchers are trying to find one. To fight the symptoms of Ebola, doctors give supportive treatment that includes fluids, along with management of high fever and blood pressure.
Prevention of Ebola:
The best prevention method is to avoid going to areas where the Ebola virus is found. Since there is no vaccine available for Ebola, you can reduce the risk of infection by wearing a mask and goggles when you move through an affected area.
Healthcare is our motive. Stay healthy. Your comments and suggestions are most welcome.
|
Gas chromatography is a chromatographic technique that can be used to separate volatile organic compounds. A gas chromatograph consists of a flowing mobile phase, an injection port, a separation column containing the stationary phase, and a detector. The organic compounds are separated due to differences in their partitioning behavior between the mobile gas phase and the stationary phase in the column.
Mobile phases are generally inert gases such as helium, argon, or nitrogen. The injection port consists of a rubber septum through which a syringe needle is inserted to inject the sample. The injection port is maintained at a higher temperature than the boiling point of the least volatile component in the sample mixture. Since the partitioning behavior is dependent on temperature, the separation column is usually contained in a thermostat-controlled oven. Separating components with a wide range of boiling points is accomplished by starting at a low oven temperature and increasing the temperature over time to elute the high-boiling-point components. Most columns contain a liquid stationary phase on a solid support. Separation of low-molecular-weight gases is accomplished with solid adsorbents. Separate documents describe some specific GC Columns and GC Detectors.
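As a toy illustration of how differences in partitioning translate into separation, the standard retention relation t_R = t_M(1 + k) can be used, where t_M is the time an unretained compound spends in the column and k is the retention factor. The numbers below are made up for illustration and are not from this document.

```python
# Toy model: a compound that partitions more strongly into the stationary phase (larger k)
# spends longer in the column and elutes later. t_R = t_M * (1 + k).
dead_time_s = 60.0   # assumed time for an unretained compound to traverse the column

retention_factors = {          # k = amount in stationary phase / amount in mobile phase
    "light, volatile compound": 0.5,
    "heavy, less volatile compound": 4.0,
}

for name, k in retention_factors.items():
    t_r = dead_time_s * (1 + k)
    print(f"{name:30s} k = {k:.1f} -> retention time ~{t_r:.0f} s")
```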
Schematic of a gas chromatograph
Copyright © 1996 by Brian M. Tissue
|
For those just learning the English language, there are many words that sound alike and may be confused by ESL students. These words often sound the same, and are spelled similarly, but are just different enough to cause confusion, even among native English speakers.
Words that have similar sounds are called homonyms. Within the category of homonyms are two commonly confused concepts: homographs and homophones.
While homographs can confuse those new to the English language, homophones typically pose more challenges. Consider some of the most commonly confused terms by English and non-English speakers alike, and how to try to differentiate these terms:
The term "effect" is a noun, and it means the result or product of something. The term "affect" is a verb, and it means to influence something. Effect and affect are perhaps two of the most common words that sound alike and may be confused by ESL students.
To help differentiate affect and effect, remember that affect is a verb that shows an action – the "a" in action and the "a" in affect should help you remember that affect is an action, a verb.
Another set of words that sound alike and may be confused by ESL students is "their," "there," and "they're." Their is a possessive, referring to something that belongs to them. There is where something is located, as in "the book is over there." The term "they're" is a contraction of two other words, "they" and "are." It takes a lot of practice to keep these three words straight. One sentence that can help keep the three homophones straight is "Their fishing poles, which they're using on the camping trip, are over there."
Another commonly confused set of words is "here" and "hear." Here, like there, refers to a location, as in "Your parents are coming over here." Hear, on the other hand, refers to a sense, where we use our ears to listen. The simplest way not to confuse these two words is to remember that hear has the word "ear" in it, and we use our ears to hear.
The words "two," "to," and "too" are also very commonly confused when learning to write in English, even among those who grew up speaking the language. Two is the number 2 spelled out. To is a preposition, as in coming to a place. Too is an adverb, which means in addition to, or also.
The words "accept" and "except" are also homophones. Accept is a verb that means to receive and consent to something. Except is a preposition that means something is to be excluded.
There are several additional examples of homophones, words that sound alike and may be confused by ESL students. Students are encouraged to practice reading, writing, and using these words in various contexts to assist in their mastery.
|
A view of Earth's Eastern Hemisphere
Our own planet, Earth, is the largest of the four inner, or terrestrial, planets. It is the only world where liquid water is known to exist. About 71% of its surface is taken up by oceans. Water is also present as droplets or ice particles that make up the clouds, as vapour in the atmosphere and as ice in polar areas or on high mountains. Liquid water, found on no other world in the Solar System, is essential for the existence of life on Earth. Its distance from the Sun, neither too close nor too far, produces exactly the right temperature range.
Earth in space
Earth speeds along at about 30 kilometres (18.5 miles) per second, taking 365.26 days (a year) to complete one orbit. As it goes, it spins on its axis like a top once every 24 hours. This makes the Sun appear to rise at dawn, pass across the sky and set at dusk, giving us day and night. Earth is itself orbited by the Moon, which takes 27.3 days to go round it.
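These two figures are consistent with each other, which can be checked with a short calculation; the mean Earth-Sun distance of about 149.6 million km used below is an assumed value, not stated in the text.

```python
import math

# Check that a 365.26-day orbit of radius ~1 AU implies a speed of about 30 km/s.
orbital_radius_km = 149.6e6            # assumed mean Earth-Sun distance (1 AU)
period_s = 365.26 * 24 * 3600          # orbital period from the text

speed_km_s = 2 * math.pi * orbital_radius_km / period_s
print(f"Mean orbital speed: {speed_km_s:.1f} km/s")   # ~29.8 km/s, i.e. about 30 km/s
```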
Earth orbits the Sun at a distance where temperatures are just right to maintain liquid water on its surface. This is called the "Goldilocks zone": neither too hot nor too cold, but just right.
|
How to Make a classic science experiment volcano project
Make Your Own Erupting Volcano!
This is a classic experiment and it is very easy to do at home. So after you watch it - TRY IT! All you need is some kind of volcano that you can make, and then a little vinegar and some baking soda from the supermarket. This demonstration shows an acid base reaction. In this kind of reaction, the acid (vinegar) chemically reacts with the base (the baking soda) and the two release carbon dioxide gas which bubbles out. The liquid soap helps to make foamy lava that flows.
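For reference, the reaction described above can be written as a balanced chemical equation (baking soda is sodium bicarbonate and vinegar is dilute acetic acid):

```latex
\mathrm{NaHCO_3 + CH_3COOH \longrightarrow CH_3COONa + H_2O + CO_2\uparrow}
```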
YOU WILL NEED:
- A volcano - Talk to an art teacher about making a volcano out of paper mache or plaster. You can also use clay or if you're in a hurry to make your volcano, use a mound of dirt outside.
- A container that 35mm film comes in or similar size container.
- Red and yellow food coloring (optional)
- Liquid dish washing soap
- Baking soda
- Vinegar
WHAT TO DO:
1. Go outside or prepare for some clean-up inside
2. Put the container into the volcano at the top
3. Add two spoonfuls of baking soda
4. Add about a spoonful of dish soap
5. Add about 5 drops each of the red and yellow food coloring
Now for the eruption!:
6. Add about an ounce of the vinegar into the container and watch your volcano come alive.
|
The Haitian Revolution (1791–1804) was a conflict in the French colony of Saint-Domingue that led to the end of slavery there and to Haiti becoming the first modern republic ruled by Africans. The main leaders were the former slaves Toussaint Louverture and Jean-Jacques Dessalines. The Haitian Revolution produced the second nation in the Americas (after the United States) to be formed from a European colony. Toussaint Louverture helped bring about the abolition of slavery in Haiti by the following means:
1) He assembled 20,000 fighting men, provided training, ammunition and discipline.
2) Trade with the USA, which allowed him to export commodities and obtain ammunition.
3) Military alliances: he made alliances with France, Spain and free people of colour to obtain trade and ammunition.
4) Tactics; Toussaint burnt towns, threw corpses into wells and engaged opponents in the wet season.
5) Ideology; he inspired black people to pursue liberty at all costs.
|