Challenging what we thought was true, a controversial new Nature study has shown that nerve cell regeneration in a part of our brains called the dentate gyrus seems to halt when we hit the age of 13, suggesting that once we lose these cells to things like disease and aging, they do not come back.
The dentate gyrus is part of the hippocampus and is important in the formation of memories. Meanwhile, the hippocampus is an area of the brain central to emotion, memory, and the autonomic nervous system, which controls unconscious bodily functions like digestion.
Some previous studies have found that hippocampal cell regeneration does decline with age, while others have suggested that the cells regenerate well into adulthood, with one study even claiming that the human hippocampus gains 700 new neurons each day. However, the team behind the new study point out various methodological issues in previous research, such as the type of marker proteins used, that likely led to misreported results.
What’s more, many previous studies have focused on non-human animals, particularly rodents, which, although mammals, are quite different from us.
To avoid the problem of using potentially misrepresentative non-human animals, as well as the ethical implications of studying live human brains, the researchers used a total of 59 human brain samples that had either been removed post-mortem or during surgery. These samples ranged in age from a 14-week-old fetus to a 77-year-old man.
To investigate whether new cells were being formed in the dentate gyrus, the researchers looked for both young neurons and progenitor neurons. Progenitor cells are cells that can differentiate into a specific type of cell, similar to stem cells but with a more specific predetermined ending. Antibody markers were used to highlight the presence of immature neurons and progenitor cells.
Contrary to recent research, the team found that the number of developing neurons in the dentate gyrus reduces with age, coming to a complete halt around the age of 13. Brain samples from people aged 18 to 77 showed no signs of immature neurons in this area of the brain.
The researchers also found similar results in the brains of macaque monkeys, suggesting that a lack of nerve cell regeneration, or neurogenesis, within the hippocampus could be a feature of the primate brain.
The researchers also note that this phenomenon has been proposed for aquatic mammals, such as whales, dolphins, and porpoises, which like many primates exhibit intelligent, complex behaviors and have evolved large brains.
It is unclear exactly why these animals might lack the adult hippocampal neurogenesis that has mainly been reported in various species of rodents, but it could be linked to having a large brain. Perhaps future research will tell us why. |
January 14, 1994: A NASA Hubble Space Telescope image of a region of the Great Nebula in Orion, as imaged by the Wide Field Planetary Camera 2.
This is one of the nearest regions of very recent star formation (300,000 years ago). The nebula is a giant gas cloud illuminated by the brightest of the young hot stars at the top of the picture. Many of the fainter young stars are surrounded by disks of dust and gas that are slightly more than twice the diameter of the solar system (about 100 astronomical units in diameter).
Credit: C.R. O'Dell/ Rice University NASA |
SIBERIAN CRANE THREATENED BY ANCIENT FORM OF HUNTING IN PAKISTAN
By BAYARD WEBSTER
Published: May 17, 1983
IN a remote valley in northern Pakistan, tribesmen perform a colorful spring hunting rite: the hurling of weighted cords into the air to snare migrating cranes as they fly overhead.
The entangled cranes fall to earth, where they are either killed for food or caged for sale as pets. The traditional sport, centuries old, is practiced by only a few tribes. But it seems to be gaining in popularity.
Steven E. Landfried, an American expert on cranes and one of the few Westerners to visit the Kurram River Valley where tribesmen follow this ancient custom, recently observed the unusual methods by which the hunters capture the majestic birds, six or seven feet in wingspread.
Many hunters regard their lasso-like feats as a form of recreation, which they sometimes turn into competition; others eat or sell the birds for sustenance or profit. But whatever their reasons, the tribesmen's aerial feats are believed partly responsible for bringing the Siberian crane to the verge of extinction. There are now only slightly more than 200 such cranes in the world.
Dr. Landfried reported that the hunters, working at night to keep flying cranes from easily spotting humans on the ground, place tamed and captive cranes in fields as decoys to attract the migrating birds. When the melodious bugle-like calls of approaching cranes are heard by the decoys, they respond in turn, luring the migrating birds to circle over them closer to the ground. On the ground the hunters, twirling lead-weighted ropes about their heads, hurl the cords as high as 100 feet in the air to snare the birds on the wing.
Studying Cranes' Plight
Dr. Landfried, a researcher for the International Crane Foundation in Baraboo, Wis., is investigating the plight of endangered cranes in Asia for the United States Fish and Wildlife Service.
He and other crane researchers have found that three types of cranes are snared by the tribesmen: Common (Grus grus), Demoiselle (Anthropoides virgo) and the rare and endangered Siberian (Grus leucogeranus). The cranes cross Pakistan in the spring and fall on their way to their breeding grounds in the Soviet Union and Afghanistan and their wintering grounds in India.
Dr. Landfried hopes to encourage the Government of Pakistan, and other governments in the region, to devise and enforce conservation regulations that will protect the Siberian crane.
Illustrations: photo of a Pakistani tribesman; photo of a crane; photo of a lead-weighted cord |
Nature is decidedly unappetizing in its discussion of how vegetables turn attackers against each other:
Integrative biologist John Orrock and his colleagues at the University of Wisconsin in Madison triggered a defensive reaction in tomato plants (Solanum lycopersicum) by exposing them to various amounts of methyl jasmonate (MeJA). This is an airborne chemical that plants release to alert each other to danger from pests. When cued with MeJA, tomato plants respond by producing toxins that make them less nutritious to insects.
The researchers then allowed caterpillars of a common pest, the small mottled willow moth (Spodoptera exigua), to attack the crop. Eight days later, they observed that plants more strongly cued with MeJA had lost less biomass compared with control plants or with ones that had received a weaker induction.
So they cued tomato plants with MeJA and then fed leaves from cued plants and non-cued control plants to single caterpillars in containers that also held a set number of dead caterpillars. Two days later, the team observed that caterpillars fed leaves from the treated plants had turned to the dead larvae earlier, and had eaten more of them, than those fed leaves from control plants. |
A game provides students with the chance to learn language without any prior language aside from the ability to produce phonemes.
The lesson plan includes several "time outs" for mini-discussions about the parallels between the way the students are acquiring language and the ways that children acquire language.
Here are several files to help you play this game with your class: (1) language game shapes to cut out (pdf), (2) language game phoneme groups (pdf), (3) language game slides (ppt), and (4) language game notes as handout for students (pdf).
I designed this lesson plan to introduce students in a developmental psychology class to language development. This is a first lesson on the topic that requires no background knowledge. It's a game in which students acquire language without any prior language aside from the ability to produce phonemes. The game is meant to be a fun and engaging introduction. It gives students an active role in their learning, and it gives the instructor many opportunities in future class sessions to refer back to their experience.
The basic idea really comes from Ludwig Wittgenstein's Notebooks. He was a philosopher who, among other things, tried to figure out if the way we understand our language can tell us something about the way we ought to address philosophical questions. I usually begin class by telling my students that this is where I got the idea from. Every time I have taught this at least one student comes up to me after class to talk about the philosophy.
The basic idea is to create an imaginary world where your students are in a community without language. Of course, devising a language for an entire imaginary world would take far too long; it's not at all feasible during a single class. So we simplify the world. One way that works well is to cut out many different shapes in different colors and sizes. To demonstrate the way language works, you stand at the front of the room and make a picture with your shapes on the board. As you do this, a volunteer or teaching assistant stands at the opposite wall of the classroom with an identical collection of shapes. He doesn't turn around, but he tries to make the same picture as you. The only way he can do this is by listening to you say what you're doing. You speak in English (or the language of your classroom) and you point out to the students how complicated it is: you need to talk about shapes and colors and relative positions and so on. We couldn't even do it perfectly, and we were using English.
The students are given the exact same goal, except they may not use any English! All they can use are phonemes (explain what they are and give a preview of infants' development of phonemes). To make the task more constrained, the class breaks into small groups, and each is given a unique list of phonemes. They are reminded that they're not allowed to use any English, so they can't simply say, "'Ugoo' means 'red.'"
Students are given two packets of shapes and a list of their group's phonemes. A list of all English phonemes and sample groups appears at the end of this document. They can do whatever they want so long as they don't speak a natural language and they use only their phonemes. I call "time in" to start these rules and "time out" for our brief pauses for mini-lectures.
Your students will probably look at you completely dumbfounded! Some groups will figure out how to start, but if not you will have to give them their first word. Simply walk up to a group, look at their phoneme list, combine two phonemes, point to a single shape they have, and say the phoneme combination you made. That gets the group started, because they invariably take your word as the name of the shape. Of course, you never gave them that interpretation, so why did they think that? This is actually your first "time out" for a mini-lecture/discussion of the "whole-object constraint."
Place any object that is mainly a single color on your table (e.g., a pencil or the overhead projector) and call a time out. With everyone's attention, point to the object and say something arbitrary (e.g., "goobar"). Ask, "By a show of hands, how many of you thought 'goobar' means [pencil, or your object]?" (Most hands go up.) Ask, "By a show of hands, how many of you thought 'goobar' means [yellow, or metal, or some prominent quality of your object]?" (Few hands go up.) Ask the rhetorical question, "Why do you think you naturally believed the word means the object and not, for example, the object's qualities?" Developmental psychologists have noticed that children make the same assumption you did when they're first learning words. It's called the "whole-object constraint" because, without any special information, we think a new word means the whole object and not its parts or qualities. When you started playing the game, did you use the whole-object constraint? (Short discussion.) Call time in.
As children get close to two years of age, they start to learn words incredibly fast: about 10 to 20 new words a week! Many of these words are names for things, like nouns. When you were playing the game, did you have a language explosion? Was there a time when you suddenly learned many more words? (Short discussion.)
When children learn words, it can be really difficult to figure out just what a word means. For example, a child might use the word "bear" only to talk about her favorite teddy bear. But lots of teddy bears could be called "bears," so what she did is an "underextension." Another example might be a child who uses the word "car" to describe his family's car, all other cars, trucks, bicycles, and even tricycles! What he did is an "overextension." Did you find yourself making overextensions or underextensions as you figured out your language?
Write the following sentence on the board before calling time out: "The spy sees the police officer with the gun." Ask students, "By a show of hands, how many of you think the police officer has the gun?" (Most hands go up.) Now write directly underneath it the following sentence: "The spy sees the police officer with the binoculars." Ask students, "By a show of hands, how many of you think the spy has the binoculars?" (Most hands go up.) Then look puzzled and ask, "By a show of hands, how many of you think the police officer has the binoculars?" (Few hands go up.) Say something like, "Okay, I'm confused. This sentence (point to the top sentence) has exactly the same grammar and almost all of the same words as this sentence (point to the bottom sentence). How come most of you felt the prepositional phrase 'with the gun' means the police officer has the gun, BUT you also felt the prepositional phrase 'with the binoculars' means the spy has the binoculars?" (Draw out from the class how they're using background knowledge. This will happen very naturally because you'll get responses like "because cops carry guns" and "the spy is seeing, and binoculars are used to see far away.") Give the following explanation and ask the following question: "So you used your background knowledge about police officers, spies, guns, and binoculars to understand the sentences. Understanding language by using context and background knowledge is called 'pragmatics.' Have you used any pragmatics so far while playing the game?" (Short discussion.) Call time in.
(As you finish up the game) How was playing the game? (They'll probably tell you it was challenging.) Say something like, "Isn't it amazing that a two-year-old has a vocabulary of several hundred words and even a simple grammar, yet she can't even tie her own shoes? How do children learn language so rapidly?" Many developmental psychologists who study language take a nativist perspective. Remind students of the basic developmental theories from the first week of class, including nativism, the idea that children start life with basic concepts. One nativist idea of Noam Chomsky's is that children have a module, a special part of the mind, that's entirely devoted to learning and using language.
Allot some time at the end of class for each group to show their language to the entire class. For each group, half of the students are speakers at one board and the other half are listeners at the other. I've seen students have a wide range of responses. Those who weren't trying very hard during the game just kind of fumble through this activity or find "short-cuts" that ignore the spirit of the game. Other students who were really trying have gotten really excited when they see how much they were able to communicate. My personal feeling is to let this activity be self-reinforcing, where the whole class indicates informally how it feels about each group's performance. For example, I always express how impressed I am with their pictures, and when they take those "short-cuts" I just say, "Lame!"
K. H. Grobman © 2003 - 2008 |
Read at: Google Alerts – desertification
The Issue of Desertification in Africa
The loss of vegetation and cultivable soil to desertification in Africa threatens the livelihoods of the people and the continent's ecosystems every year. The Sahara, the world's hottest and third largest desert, is expanding at a rate of more than 40 kilometers per year. Just below the Sahara, in parts of coastal Africa, the decline of vegetation and cultivable soil is also linked to the uncontrolled exploitation of mangrove forests.
A mangrove forest is habitat to a number of diverse plant and animal species and plays an important role in maintaining the balance of the ecosystem. Driven by poverty and basic needs, people strip the mangrove forests for their wood, leading to a loss of biodiversity and eventual desertification in the coastal zones, as well as the salinization of land, rendering it unusable for agriculture.
To combat desertification, several countries in Africa have employed countermeasures and prevention efforts to halt its expansion. Local populations now rely on reforestation coupled with increased awareness.
Environmental awareness and education inform local people about the dangers of desertification and how uncontrolled exploitation of vegetation and wildlife can contribute to dangers that would eventually do them far greater harm. After this information is disseminated, reforestation activities are strongly encouraged, with locals planting seedlings in severely deforested areas during the rainy season.
Desertification occurs when a dry land region becomes unfit for cultivation and agriculture mostly due to the rapid and eventual loss of its bodies of water as well as its vegetation and wildlife. It is land degradation on a severe level which could relatively spread out and expand to other areas, affecting more and more human population and ecosystems. |
Evolutionary biology sounds exciting - there wouldn't be any movies on the SyFy Channel without Gatoroids and Sharknados and other feats of life science run amok - but in reality you are going to spend a lot of time paying your dues watching sponges in mid-sneeze before you get to create an epidemic or a giant monster.
Sneezing sponges? Isn't that a little far-fetched, even for the network that brought us "Arachnoquake"? No, actually the sponge thing is real, and a new paper points to Porifera sneezing as evidence for a sensory organ in one of the most basic multicellular organisms on Earth, even though it doesn't even have a nervous system to interpret sensory information.
Sponges are pretty simple, at least in regard to how we see life. They feed by channeling water through their bodies. They have no digestive or circulatory systems; they live and die by water circulation alone. No noses, either, because there is no nervous system, so a nose would be pointless.
Sponges do sneeze, according to Sally Leys, Canada Research Chair in Evolutionary Developmental Biology at the University of Alberta. It takes about 45 minutes and involves a contraction of the entire body. It qualifies as a sneeze because the sponge is reacting to a stimulus, in this case something like physical sediment, much as we do when we sneeze.
Don't believe it? They took this video, one image every 30 seconds, after adding sediment to the water it is filtering.
Credit: Danielle Ludeman
The researchers used a variety of drugs to cause sneezing and then observed the process using fluorescent dye. They focused on the sponge's osculum, which controls water exiting the organism, including when it sneezes. Their paper reports that cilia, which function like antennae in other animals, play a role in triggering the sneezes. They concluded that the presence of ciliated cells in the osculum, combined with an apparent sensory function, means the osculum could be a sensory organ.
“For a sponge to have a sensory organ is totally new. This does not appear in a textbook; this doesn’t appear in someone’s concept of what sponges are permitted to have,” said Leys.
Cilia on the epithelia lining the osculum. a. The sponge Ephydatia muelleri in the lake, and grown in the lab viewed from the side (upper inset) and from above (lower inset). The oscula (white arrows) extend upwards from the body. b, c, Scanning electron micrographs show cilia arise from the middle of each cell along the entire length of the inside of the osculum; b the lining of the osculum with cilia on each cell (inset shows an osculum removed from the sponge and sliced in half longitudinally); c, two cilia arise from each cell. d, e, Cilia in the oscula labeled with antibodies to acetylated α-tubulin (green), nuclei with Hoechst (blue, n), actin with phalloidin (red). f. A 3D surface rendering illustrates how the cilia arise just above the nucleus of the cell. Scale bars a 5 mm; inset 1 mm; b 20 μm; inset 100 μm c, 1 μm d, 20 μm e, f 5 μm. Credit: doi:10.1186/1471-2148-14-3
The fun thing about evolutionary biology is that there are always a lot of new things to learn. This raises new questions about how sensory systems may have evolved. This could be unique and have evolved over 600 million years, or it could be evidence of a common evolutionary history.
“The sneeze can tell us a lot about how the sponge works and how it’s responding to the environment,” said lead author and evolutionary biology graduate student Danielle Ludeman in their statement. “This paper really gets at the question of how sensory systems evolved. The sponge doesn’t have a nervous system, so how can it respond to the environment with a sneeze the way another animal that does have a nervous system can?”
We look forward to the answer.
Citation: Danielle A Ludeman, Nathan Farrar, Ana Riesgo, Jordi Paps and Sally P Leys, 'Evolutionary origins of sensation in metazoans: functional evidence for a new sensory organ in sponges', BMC Evolutionary Biology 2014, 14:3 doi:10.1186/1471-2148-14-3 |
- Design is a creative problem-solving process and includes the study of both design practice and design theory. The design process involves problem identification, planning, research, innovation, conceptualisation, experimentation and critical reflection.
- This process typically results in new environments, systems, services and products, which may be unique or intended for mass production, or which may be constructed by hand or produced by mechanical and/or electronic means.
- Design adds value to life by creating products that have a purpose, that are functional and that have aesthetic value. Design products can shape the social, cultural and physical environment to the benefit of the nation.
- Most importantly, Design equips learners with crucial life skills such as visual literacy, critical and creative thinking, self-discipline, and leadership. It also encourages learners to be resourceful and entrepreneurial, to strategise and to be team players. |
Bees are at risk. That’s nothing most nature lovers haven’t heard before. However, it is a misconception to believe pest control requires damaging the fragile ecosystem of the bee. Nevertheless, it is also important to recognize just how damaging pesticides are to bees themselves. Finding a balance in managing pests and protecting (as well as encouraging) bees to flourish can be complex.
Why Bees Are Critically Important
Pollinators of all types help to ensure food is readily available to humans. Bees and other pollinators work to provide the pollination that crops, fruit trees, and other plants require. Without this, there simply is no way for food to grow.
Pollinators, which include wild bees, bats, birds, wasps, and butterflies, move from one plant to the next. The U.S. Environmental Protection Agency provides more insight into their importance, but what is critical for individuals to recognize is that:
- We need a strong, healthy population of bees and other pollinators to help plants grow food.
- These pollinators are currently at risk and populations of them are declining.
What Factors Impact Bee Health?
The number of pollinators present in North America has been trending downward. In a report from the National Research Council called the Status of Pollinators in North America, we learn the honey bee, in particular, is at an increased risk. The report also looked into the causes of these declines. It found several key things were occurring, creating a high-risk scenario for pollinators of all types:
- Viruses, pests such as mites, and pathogens, including bacterial diseases, were key culprits in the decline.
- A lack of foraging habitat and a greater need to rely on supplemental diets, such as discarded human food, also contribute.
- Genetic diversity is limited, making it quite difficult for bees to overcome these challenges.
- Pesticide exposure is creating a harmful world surrounding those delicate bees.
Why Pesticides Are Hurting Bees and Pollinators
In 2006, many beekeepers began reporting an unexpected and worrisome trend. Their bee colonies were failing. Many reported high losses of bees, with as much as 30 to 90 percent of beekeeper hives failing. And, of those that failed, about half showed symptoms of a condition called colony collapse disorder. When this occurs, the worker bees – which do most of the work to keep the hive operational – died. And what was even more puzzling is that the remaining queen and young brood had ample access to honey and to pollen reserves. But, without worker bees, the hive cannot maintain itself, and they fail.
Researchers found a key reason for this was pesticides. Farmers and others were using pesticides to protect their crops, often spraying with huge amounts of pesticides to keep them protected from damaging pests. This, along with other concerns, created a high risk to the bees, leading to their death.
Knowing the problem is, in part, pesticides, a significant amount of action has been taken to help protect pollinators from risks. Leading the way is the U.S. Department of Agriculture, which published the Colony Collapse Disorder Action Plan for farmers to use to improve bee populations – and minimize their death from the known causes.
What Can Be Done to Protect Bees?
What steps can be taken to protect pollinators? There’s much that must be done to protect bee colonies, and the good news is that, with a combined and consistent effort, it is possible. Some key steps to take to protect bees include the following, according to Richland Pest & Bee Control.
#1: Pesticide Selection Is Critical
When selecting a pesticide – whether for flowers, home gardens, or large crops – use those specifically designed to protect bees. Choose pesticides based on the types of plants you want to protect, but avoid harsh chemicals designed specifically to eliminate bees. There are two critical factors to look for:
- The pesticide must have a low toxicity rating.
- It has to have little to no residual toxicity.
#2: Read Labels Carefully
The OECD provides recommendations for using pesticides as well. Specifically, this website works to centralize all information about pesticide risks. A key part of it is labeling. Proper labeling of pesticides to ensure the user knows exactly what the immediate and long term risks are is critical. The EPA has new labeling guidelines for specifically harmful pesticides, such as neonicotinoids, which should not be used where bees are present.
#3: Ensure Treatment of Areas Is Done at the Right Time
Another tool for prevention is application timing. Research from Oregon State shows that applying pesticides can be safer when bees are not as active. This can be one way to protect pollinators because it allows for the use of pesticides safely. Ideally, the pesticides should be used one hour before the sun rises and at least one hour after it sets.
#4: Select a Pest Control Company Dedicated to Protecting Pollinators
Perhaps most importantly, consumers should select a pest control company – when using one – capable of recognizing the importance of bees and pollinators and who will take dedicated steps to protect them.
When choosing a provider, look for those who use safe but effective methods and who are both licensed and certified. It is also essential to choose a provider who is environmentally conscious, suggests the extermination experts at Richland. Not all pesticide companies view bees as a threat and many have the tools and resources to protect all pollinators while also ensuring safe communities for families.
Individuals must focus on using safe pesticides and, when possible, avoid disturbing colonies so that they continue to flourish. This will, by far, provide the highest level of protection for bees going forward.
For those who would like to learn more about safe pest control in Connecticut, turn to the experts at Richland Pest & Bee Control. |
The USGS Water Science School
Here, a U.S. Geological Survey (USGS) hydrographer is collecting a suspended-sediment water sample from the Little Colorado River, a kilometer upstream from the Colorado River, Grand Canyon, Arizona, USA. To gain knowledge of the suspended-sediment characteristics of the entire river (water quality can vary greatly across a river), suspended-sediment water samples have to be collected in multiple cross-section intervals (notice the string going horizontally across the picture, which allows the hydrographer to sample in a straight line across the river). Suspended-sediment concentrations vary horizontally across the river and also vary vertically with depth, so the sampler must also sample vertically by moving the sample bottle up and down at a constant speed, being careful not to hit the stream bed, which could cause bottom sediment to rise into the water column.
The hydrographer carries numerous glass bottles, using one bottle for each cross-sectional location. The bottle is secured in the metal sampler and there is a tube in the front of the sampler to allow water to enter the bottle at a controlled rate, while letting out air from inside the bottle. The very brown water here indicates the presence of a lot of fine dirt particles and the turbidity of this water is very high.
Back to: Sediment | Turbidity | Impervious surfaces |
The domestic water buffalo (Bubalus bubalis) contributes a significant share of global milk production and is the major milk producing animal in several countries. Buffaloes are kept mostly by small-scale producers in developing countries, who raise one or two animals in mixed crop–livestock systems. Water buffaloes are classified into two subspecies: the river buffalo and the swamp buffalo. River buffaloes constitute approximately 70 percent of the world water buffalo population. River buffalo milk accounts for a substantial share of total milk production in India and Pakistan and is also important in the Near East. Swamp buffaloes are smaller and have lower milk yields than river buffaloes. They are present mainly in Eastern Asia and are primarily raised for draught power.
River buffaloes usually produce between 1 500 and 4 500 litres of milk per lactation. They have a significantly longer productive life than cattle, providing calves and milk until they are up to 20 years of age. The many factors that constrain commercial buffalo milk production include animals’ late age at first calving, the seasonality of oestrus, and the long calving interval and dry period.
In recent decades, breeding programmes – especially in Bulgaria, China, Egypt, India and Pakistan – have attempted to improve the milk yield of river buffalo. Well-known specialized dairy buffalo breeds include Murrah, Nili-Ravi, Kundi, Surti, Jaffarabadi, Bhadawari and Mehsana. |
Hormones in the human body are delicately balanced by the actions of different glands. The master gland of the human body is the pituitary gland in the base of the brain, and the hormones it releases control the function of other glands throughout the body (hence the term "master gland"). The pituitary gland releases a hormone called ACTH which stimulates the adrenal glands, located atop each kidney, to produce cortisol. Cortisol helps the body deal with stress by regulating immune function, blood glucose (sugar) levels and blood pressure. An imbalance in the production of cortisol may lead to several conditions, each with their own set of signs and symptoms.
If cortisol is too high, it triggers the release of higher-than-normal levels of glucose into the bloodstream. This increased blood glucose causes symptoms similar to those seen in people with type 2 diabetes, including fatigue, because glucose circulating in the blood is not necessarily available as fuel for the muscles and other organs.
Cortisol decreases the function of the immune system as a response to stress. In fact, a cortisol-like medication called prednisone is used to treat autoimmune disorders such as rheumatoid arthritis. People with excess levels of cortisol have suppressed immune systems, leaving them prone to frequent infections, especially bacterial infections.
Dizziness and Fainting
Low levels of cortisol lead to a decrease in blood pressure. When blood pressure drops below normal levels, blood cannot reach the brain as efficiently, leading to a sense of dizziness and possibly even fainting.
A tsunami is a powerful wave, usually created by a large-scale motion of the ocean floor. Although they are almost imperceptible at sea, tsunami waves increase in height as they reach a coastline and are capable of causing great destruction. The term "tsunami" is taken from the Japanese words for "harbor" and "wave."
In the 1990s, eighty-two tsunamis were reported worldwide, taking more than four thousand lives and causing hundreds of millions of dollars in damage. Most tsunamis occur in seismically active regions such as the Pacific Ocean, but tsunamis can occur anywhere in the world where there are large bodies of water.
A tsunami can be caused by any disturbance that moves a large amount of water. The vast majority of tsunamis originate during undersea earthquakes when water is moved by the uplift or subsidence of hundreds of square kilometers of the sea floor. Landslides (which often accompany large earthquakes), volcanic eruptions and collapses, and explosions and meteor impacts can also disturb enough water to generate a tsunami.
The wind-generated waves usually seen breaking on the beach arrive every 10 to 15 seconds and have wave crests tens of meters apart. In contrast, tsunami crests can arrive more than 20 minutes apart and be separated by hundreds of kilometers (see figure).
Most tsunamis are classified as long waves—that is, waves with long wavelengths relative to their water depth. They travel with speeds proportional to the square root of the water depth. In the deep ocean, their speed can be similar to that of a jet plane, as high as 700 kilometers per hour. Closer to shore, in shallow water, they slow down appreciably. At sea, the height of a tsunami wave is not usually distinguishable from the surrounding wind waves without sensitive measuring equipment because the wave often is only 1 to 2 meters high and hundreds of kilometers long.
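The square-root relationship between speed and depth can be illustrated with a short calculation. The sketch below assumes the standard shallow-water approximation, c = √(g·d), which is consistent with the "jet plane" figure quoted above; the depth values chosen are illustrative, not from the article:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed c = sqrt(g * d), converted from m/s to km/h."""
    return math.sqrt(G * depth_m) * 3.6


# In the deep ocean (assumed depth ~4 000 m), the wave moves at roughly
# jet-plane speed, matching the "as high as 700 km/h" figure in the text.
print(round(tsunami_speed_kmh(4000)))  # ≈ 713 km/h

# Near shore (assumed depth ~10 m), the same wave slows dramatically.
print(round(tsunami_speed_kmh(10)))    # ≈ 36 km/h
```

Because the wave's energy is largely conserved as it slows, this deceleration in shallow water is also what drives the growth in wave height near the coast.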
Tsunamis generated by earthquake movement of the seabed can travel thousands of miles across the ocean without losing their energy. For example, in 1960 a tsunami generated in Chile, South America, caused substantial damage nearly 14,500 kilometers (9,000 miles) away in Japan. Hawaii, in the middle of the Pacific Ocean, is particularly susceptible to tsunamis that travel across the ocean.
Unlike earthquake-caused waves, tsunamis generated by mechanisms like landslides and eruptions dissipate quickly and rarely affect coastlines far away from the source. Their local effects, however, can sometimes be just as damaging: in 1883, the tsunami caused by the eruption of the volcano Krakatau killed more than 36,000 people on the nearby islands of Java and Sumatra.
As tsunami waves approach the coastline, a large change takes place in their shape. Since the landward portion of the wave is in shallower water than the seaward portion, it travels relatively slower. This allows the rear of the wave to catch up with the front, compressing the wavelength and dramatically increasing the wave's height.
When a tsunami reaches the shore, the impact can destroy buildings and other coastal structures. The flowing water can move boats, vehicles and debris. Further destruction results when these objects collide like battering rams with anything in their path. Gas lines broken during the tsunami often cause fires that increase the tsunami damage. Tsunamis can flood low-lying areas, destroying crops with salt water and leaving behind sand and boulders.
Efforts to protect people from tsunamis center on proper preparation of tsunami-prone areas. Many lives have been saved when residents of coastal communities were aware that earthquake shaking was a signal to evacuate to high ground.
Although certain tsunamis, such as those generated by landslides, arrive without warning, tsunami researchers are focusing on better predicting these locally destructive waves as well as the transoceanic ones. The tools that researchers use include seismic stations, deep-ocean pressure gauges, and physical and numerical models. Field surveys of recent tsunamis and geological investigations of ancient waves also help scientists and hazards planners design structures and plan communities so that casualties and damage can be reduced.
Catherine M. Petroff
At one time, tsunamis were called tidal waves for the way that the water flowed on and offshore like a quickly rising and falling tide. But because tsunamis are not caused by the gravitational pull of the Moon and Sun, the term tidal wave is no longer used.
In 1993, a 7.8-magnitude earthquake in the Sea of Japan caused waves 5 to 10 meters high that swept up buildings and vehicles on the island of Okushiri. Although 239 people died from the Okushiri tsunami, many residents saved themselves by fleeing to high ground immediately after the earthquake.
The Gambia is the smallest of the mainland African countries and one of the poorest nations in the world. Vector-borne diseases such as malaria and dengue are endemic here, as are water-borne diseases such as hepatitis A and typhoid fever. Acute respiratory infections (including pneumonia), diarrheal diseases, and parasitic worm infections are also major causes of death in adults and children.
The largest NIAID investment in The Gambia focuses on malaria, particularly studies of severe malaria in children and the evaluation of microbial larvicides as a potential method of mosquito control. NIAID also funds projects on HIV/AIDS and Helicobacter pylori, a bacterium that infects the stomach and can lead to ulcers and cancer.
The World Health Organization estimates that over 1.6 million people—including more than 800,000 children under five—die every year from pneumococcal infections. The Gambia Pneumococcal Vaccine Trial was the first major randomized, controlled vaccine clinical trial in nearly 20 years to show a statistically significant reduction in overall child mortality. Findings indicate that vaccinating infants against the bacterium Streptococcus pneumoniae could substantially reduce death and illness among children in developing countries, including in rural areas with limited access to public health systems.
Learn more about the Gambia Pneumococcal Vaccine Trial.
Researchers supported by NIAID have produced a draft genome sequence of Aedes aegypti, the mosquito species commonly associated with the transmission of dengue, yellow fever, and chikungunya. These viral diseases have recently reemerged in endemic areas as efforts to control Ae. aegypti populations are being challenged by growing resistance to insecticides.
The researchers hope that studies of the new genome sequence, which is five times larger than that of the malaria-transmitting Anopheles gambiae mosquito, will not only enhance research on Ae. aegypti but also lead to new ways of genetically altering the species so it cannot acquire or spread disease.
Learn more about the Ae. aegypti genome.
Last Updated October 15, 2012 |
Historic Sites Act
The Historic Sites Act of 1935 was enacted by the United States Congress largely to organize the myriad federally owned parks, monuments, and historic sites under the National Park Service and the United States Secretary of the Interior. However, it is also significant in that it declared for the first time "...that it is a national policy to preserve for public use historic sites, buildings, and objects of national significance...". Thus it is the first assertion of historic preservation as a government duty, which was only hinted at in the 1906 Antiquities Act.
Section 462 of the act enumerates a wide range of powers and responsibilities given to the National Park Service and the Secretary of the Interior, including:
- codification and institutionalization of the temporary Historic American Buildings Survey
- authorization to survey and note significant sites and buildings (this became the National Historic Landmark program, which was integrated into the National Register after the 1966 National Historic Preservation Act)
- authorization to actually perform preservation work
Section 463 established the National Park System Advisory Board to assist the Secretary of the Interior with administration.
- Historic Sites Act of 1935, 49 Stat. 666; 16 U.S.C. sections 461-467. |
Metaethics is a branch of analytic philosophy that explores the status, foundations, and scope of moral values, properties, and words. Whereas the fields of applied ethics and normative theory focus on what is moral, metaethics focuses on what morality itself is. Just as two people may disagree about the ethics of, for example, physician-assisted suicide, while nonetheless agreeing at the more abstract level of a general normative theory such as Utilitarianism, so too may people who disagree at the level of a general normative theory nonetheless agree about the fundamental existence and status of morality itself, or vice versa. In this way, metaethics may be thought of as a highly abstract way of thinking philosophically about morality. For this reason, metaethics is also occasionally referred to as “second-order” moral theorizing, to distinguish it from the “first-order” level of normative theory.
Metaethical positions may be divided according to how they respond to questions such as the following:
- What exactly are people doing when they use moral words such as “good” and “right”?
- What precisely is a moral value in the first place, and are such values similar to other familiar sorts of entities, such as objects and properties?
- Where do moral values come from—what is their source and foundation?
- Are some things morally right or wrong for all people at all times, or does morality instead vary from person to person, context to context, or culture to culture?
Metaethical positions respond to such questions by examining the semantics of moral discourse, the ontology of moral properties, the significance of anthropological disagreement about moral values and practices, the psychology of how morality affects us as embodied human agents, and the epistemology of how we come to know moral values. The sections below consider these different aspects of metaethics.
Table of Contents
- History of Metaethics
- The Normative Relevance of Metaethics
- Semantic Issues in Metaethics
- Ontological Issues in Metaethics
- Psychology and Metaethics
- Epistemological Issues in Metaethics
- Anthropological Considerations
- Political Implications of Metaethics
- References and Further Reading
Although the word “metaethics” (more commonly “meta-ethics” among British and Australian philosophers) was coined in the early part of the twentieth century, the basic philosophical concern regarding the status and foundations of moral language, properties, and judgments goes back to the very beginnings of philosophy. Several characters in Plato’s dialogues, for instance, arguably represent metaethical stances familiar to philosophers today: Callicles in Plato’s Gorgias (482c-486d) advances the thesis that Nature does not recognize moral distinctions, and that such distinctions are solely constructions of human convention; and Thrasymachus in Plato’s Republic (336b-354c) advocates a type of metaethical nihilism by defending the view that justice is nothing above and beyond whatever the strong say that it is. Socrates’ defense of the separation of divine commands from moral values in Plato’s Euthyphro (10c-12e) is also a forerunner of modern metaethical debates regarding the secular foundation of moral values. Aristotle’s grounding of virtue and happiness in the biological and political nature of humans (in Book One of his Nicomachean Ethics) has also been examined from the perspective of contemporary metaethics (compare, MacIntyre 1984; Heinaman 1995). In the classical Chinese tradition, early Daoist thinkers such as Zhuangzi have also been interpreted as weighing in on metaethical issues by critiquing the apparent inadequacy and conventionality of human attempts to reify moral concepts and terms (compare, Kjellberg & Ivanhoe 1996). Many Medieval accounts of morality that ground values in religious texts, commands, or emulation may also be understood as defending certain metaethical positions (see Divine Command Theory). In contrast, during the European Enlightenment, Immanuel Kant sought a foundation for ethics that was less prone to religious sectarian differences, by looking to what he believed to be universal capacities and requirements of human reason. 
In particular, Kant’s discussions in his Groundwork of the Metaphysics of Morals of a universal “moral law” necessitated by reason have been fertile ground for the articulation of many contemporary neo-Kantian defenses of moral objectivity (for example, Gewirth 1977; Boylan 2004).
Since metaethics is the study of the foundations, if any, of morality, it has flourished especially during historical periods of cultural diversity and flux. For example, responding to the cross-cultural contact engendered by the Greco-Persian Wars, the ancient Greek historian Herodotus reflected on the apparent challenge to cultural superiority posed by the fact that different cultures have seemingly divergent moral practices. A comparable interest in metaethics dominated seventeenth and eighteenth-century moral discourse in Western Europe, as theorists struggled to respond to the destabilization of traditional symbols of authority—for example, scientific revolutions, religious fragmentation, civil wars—and the grim pictures of human egoism that thinkers such as Bernard Mandeville and Thomas Hobbes were presenting (compare, Stephen 1947). Most famously, the eighteenth-century Scottish philosopher David Hume may be understood as a forerunner of contemporary metaethics when he questioned the extent to which moral judgments might ultimately rest on human passions rather than reason, and whether certain virtues are ultimately natural or artificial (compare, Darwall 1995).
Analytic metaethics in its modern form, however, is generally recognized as beginning with the moral writings of G.E. Moore. (Although, see Hurka 2003 for an argument that Moore’s innovations must be contextualized by reference to the preceding thought of Henry Sidgwick.) In his groundbreaking Principia Ethica (1903), Moore urged a distinction between merely theorizing about moral goods on the one hand, versus theorizing about the very concept of “good” itself. (Moore’s specific metaethical views are considered in more detail in the sections below.) Following Moore, analytic moral philosophy became focused almost exclusively on metaethical questions for the next few decades, as ethicists debated whether or not moral language describes facts and whether or not moral properties can be scientifically or “naturalistically” analyzed. (See below for a more specific description of these different metaethical trends.) Then, in the 1970s, largely inspired by the work of philosophers such as John Rawls and Peter Singer, analytic moral philosophy began to refocus on questions of applied ethics and normative theories. Today, metaethics remains a thriving branch of moral philosophy and contemporary metaethicists frequently adopt an interdisciplinary approach to the study of moral values, drawing on disciplines as diverse as social psychology, cultural anthropology, comparative politics, as well as other fields within philosophy itself, such as metaphysics, epistemology, action theory, and the philosophy of science.
Since philosophical ethics is often conceived of as a practical branch of philosophy—aiming at providing concrete moral guidance and justifications—metaethics sits awkwardly as a largely abstract enterprise that says little or nothing about real-life moral issues. Indeed, the pressing nature of such issues was part of the general migration back to applied and normative ethics in the politically-galvanized intellectual climate of the 1970s (described above). And yet, moral experience seems to furnish myriad examples of disagreement concerning not merely specific applied issues, or even the interpretations or applications of particular theories, but sometimes about the very place of morality in general within multicultural, secular, and scientific accounts of the world. Thus, one of the issues inherent in metaethics concerns its status vis-à-vis other levels of moral philosophizing.
As a historical fact, metaethical positions have been combined with a variety of first-order moral positions, and vice versa: George Berkeley, John Stuart Mill, G.E. Moore, and R.M. Hare, for instance, were all committed to some form of Utilitarianism as a first-order moral framework, despite advocating radically different metaethical positions. Likewise, in his influential book Ethics: Inventing Right and Wrong, J.L. Mackie (1977) defends a form of (second-order) metaethical skepticism or relativism in the first chapter, only to devote the rest of the book to the articulation of a substantive theory of (first-order) Utilitarianism. Metaethical positions would appear then to underdetermine normative theories, perhaps in the same way that normative theories themselves underdetermine applied ethical stances (for example, two equally committed Utilitarians can nonetheless disagree about the moral permissibility of eating meat). Yet, despite the logically possible combinations of second and first-order moral positions, Stephen Darwall (2006: 25) notes that, nevertheless, “there do seem to be affinities between metaethical and roughly corresponding ethical theories,” for example, metaethical naturalists have almost universally tended to be Utilitarians at the first-order level, though not vice versa. Notable exceptions to this tendency—that is, metaethical naturalists who are also first-order deontologists—include Alan Gewirth (1977) and Michael Boylan (1999; 2004). For critical responses to these positions, see Beyleveld (1992), Steigleder (1999), Spence (2006), and Gordon (2009).
Other philosophers envision the connection between metaethics and more concrete moral theorizing in much more intimate ways. For example, Matthew Kramer (2009: 2) has argued that metaethical realism (see section four below) is itself actually a first-order moral view as well, noting that “most of the reasons for insisting on the objectivity of ethics are ethical reasons.” (For a similar view about the first-order “need” to believe in the second-order thesis that moral values are “objective,” see also Ronald Dworkin 1996.) Torbjörn Tännsjö (1990), by contrast, argues that, although metaethics is irrelevant to normative theorizing, it may still be significant in other psychological or pragmatic way, for example, by constraining other beliefs. Nicholas Sturgeon (1986) has claimed that the first-order belief in moral fallibility must be grounded in some second-order metaethical view. And David Wiggins (1976) has suggested that metaethical questions about the ultimate foundation and justification of basic moral beliefs may have deep existential implications for how humans view the question of the meaning of life.
The metaethical question of whether or not moral values are cross-culturally universal would seem to have important implications for how foreign practices are morally evaluated at the first-order level. In particular, metaethical relativism (the view that there are no universal or objective moral values) has been viewed as highly loaded politically and psychologically. Proponents of such relativism often appeal to the alleged open-mindedness and tolerance about first-order moral differences that their second-order metaethical view would seem to support. Conversely, opponents of relativism often appeal to what Thomas Scanlon (1995) has called a “fear of relativism,” citing an anxiety about the first-order effects on our moral convictions and motivations if we become too morally tolerant. (See sections five and eight below for a more detailed discussion of the psychological and political dimensions of metaethics, respectively.) Russ Shafer-Landau (2004) further draws attention to the first-order rhetorical uses of metaethics, for example, Rudolph Giuliani’s evocation of the dangers of metaethical relativism following the terrorist events in the United States on September 11, 2001.
One of the central debates within analytic metaethics concerns the semantics of what is actually going on when people make moral statements such as “Abortion is morally wrong” or “Going to war is never morally justified.” The metaethical question is not necessarily whether such statements themselves are true or false, but whether they are even the sort of sentences that are capable of being true or false in the first place (that is, whether such sentences are “truth-apt”) and, if they are, what it is that makes them “true.” On the surface, such sentences would appear to possess descriptive content—that is, they seem to have the syntactical structure of describing facts in the world—in the same form that the sentence “The cat is on the mat” seems to be making a descriptive claim about a cat on a mat; which, in turn, is true or false depending on whether or not there really is a cat on the mat. To put it differently, the sentence “The cat is on the mat” seems to be expressing a belief about the way the world actually is. The metaethical view that moral statements similarly express truth-apt beliefs about the world is known as cognitivism. Cognitivism would seem to be the default view of our moral discourse given the apparent structure that such discourse appears to have. Indeed, if cognitivism were not true— such that moral sentences were expressing something other than truth-apt propositions—then it would seem to be difficult to account for why we nonetheless are able to make logical inferences from one moral sentence to another. For instance, consider the following argument:
1. It is wrong to lie.
2. If it is wrong to lie, then it is wrong to get one’s sibling to lie.
3. Therefore, it is wrong to get one’s sibling to lie.
This argument seems to be a valid application of the logical rule known as modus ponens. Yet, logical rules such as modus ponens operate only on truth-apt propositions. Thus, because we seem to be able to legitimately apply such a rule in the example above, such moral sentences must be truth-apt. This argument in favor of metaethical cognitivism by appeal to the apparent logical structure of moral discourse is known as the Frege-Geach Problem in honor of the philosophers credited with its articulation (compare, Geach 1960; Geach 1965 credits Frege as an ancestor of this problem; see also Schueler 1988 for an influential analysis of this problem vis-à-vis moral realism). According to proponents of the Frege-Geach Problem, rejecting cognitivism would force us to show the separate occurrences of the sentence “it is wrong to lie” in the above argument as homonymous: according to such non-cognitivists, the occurrence in sentence (1) is an expression of a non-truth-apt sentiment about lying, whereas the occurrence in sentence (2) is not, since it’s only claiming what one would express conditionally. Since this homonymy would seem to threaten to undermine the grammatical structure of moral discourse, non-cognitivism must be rejected.
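The claim that modus ponens operates only on truth-apt propositions can be made concrete by checking the inference mechanically over every possible assignment of truth values. The sketch below represents the two moral sentences as Boolean variables, which is, of course, precisely the cognitivist assumption under dispute; the function name and encoding are illustrative, not from the source:

```python
from itertools import product

# p: "It is wrong to lie."
# q: "It is wrong to get one's sibling to lie."
# Treating p and q as truth-apt is the cognitivist reading of the argument.


def modus_ponens_is_valid() -> bool:
    """Return True if p and (p -> q) jointly entail q in every valuation."""
    for p, q in product([True, False], repeat=2):
        premise1 = p              # (1) It is wrong to lie.
        premise2 = (not p) or q   # (2) If it is wrong to lie, then it is
                                  #     wrong to get one's sibling to lie.
        if premise1 and premise2 and not q:
            return False          # a valuation where the premises hold
                                  # but the conclusion fails
    return True


print(modus_ponens_is_valid())  # True: no counterexample valuation exists
```

The non-cognitivist's burden, as the Frege-Geach Problem highlights, is to explain why the inference still seems compelling once `p` is reinterpreted as an expression of attitude rather than a Boolean-valued proposition.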
Despite this argument about the surface appearance of cognitivism, however, numerous metaethicists have rejected the view that moral sentences ultimately express beliefs about the world. A historically influential forerunner of the alternate theory of non-cognitivism can be found in the moral writings of David Hume, who famously argued that moral distinctions are not derived from reason, but instead represent emotional responses. As such, moral sentences express not beliefs which may be true or false, but desires or feelings which are neither true nor false. This Humean position was renewed in twentieth-century metaethics by the observation that not only are moral disputes often heavily affect-laden in a way many other factual disputes are not, but also that the kind of facts which would apparently be necessary to accommodate true moral beliefs would have to be very strange sorts of entities. Specifically, the worry is that, whereas we can appeal to standards of empirical verification or falsification to adjudicate when our non-moral beliefs are true or false, no such standards seem applicable in the moral sphere, since we cannot literally point to moral goodness in the way we can literally point to cats on mats.
In response to this apparent disanalogy between moral and non-moral statements, many metaethicists embraced a sort of neo-Humean non-cognitivism, according to which moral statements express non-truth-apt desires or feelings. The Logical Positivism of the Vienna Circle adopted this metaethical position, finding anything not empirically verifiable to be semantically “meaningless.” Thus, A.J. Ayer (1936) defended what he called metaethical emotivism, according to which moral expressions are indexed always to the speaker’s own affective state. So, the moral utterance “Abortion is morally wrong” would ultimately mean only that “I do not approve of abortion,” or, more accurately (to avoid even the appearance of having descriptive content), “Abortion—boo!” C.L. Stevenson (1944) further developed metaethical non-cognitivism as involving not merely an expression of the speaker’s personal attitude, but also an implicit endorsement of what the speaker thinks the audience ought to feel. R.M. Hare (1982) similarly analyzed moral utterances as containing both descriptive (truth-apt) as well as ineliminably prescriptive elements, such that genuinely asserting, for instance, that murder is wrong involves a concomitant emotional endorsement of not murdering. Drawing on the work of ordinary-language philosophers such as J.L. Austin, Hare distinguished the act of making a statement (that is, the statement’s “illocutionary force”) from other acts that may be performed concomitantly (that is, the statement’s “perlocutionary force”)—as when, for example, stating “I do” in the context of a marriage ceremony thereby effects an actual legal reality. Similarly, Hare argued that in the case of moral language, the illocutionary act of describing a war as “unjust” may, as part and parcel of the description itself, also involve the perlocutionary force of recommending a negative attitude or action with respect to that war.
For Hare, the prescriptive dimension of such an assertion must be constrained by the requirements of universalizability—hence, Hare’s metaethical position is referred to as “universal prescriptivism.”
More recently, sophisticated versions of non-cognitivism have flourished that build into moral expression not only the individual speaker’s normative endorsement, but also an appeal to a socially-shared norm that helps contextualize the endorsement. Thus, Alan Gibbard (1990) defends norm-expressivism, according to which moral statements express commitments not to idiosyncratic personal feelings, but instead to the particular (and, for Gibbard, evolutionarily adaptive) cultural mores that enable communication and social coordination.
Non-cognitivists have also attempted to address the Frege-Geach Problem discussed above, by specifying how the expression of attitudes functions in moral discourse. Simon Blackburn (1984), for instance, has famously argued that non-cognitivism is a claim only about the moral, not the logical parts of discourse. Thus, according to Blackburn, to say that “If it is wrong to lie, then it is wrong to get one’s sibling to lie” can be understood as expressing not an attitude toward lying itself (which is couched in merely hypothetical terms), but rather an attitude toward the disposition to express an attitude toward lying (that is, a kind of second-order sentiment). Since this still essentially involves the expression of attitudes rather than truth-apt assertions, it’s still properly a type of non-cognitivism; yet, by distinguishing expressing an attitude directly from expressing an attitude about another (hypothetical) attitude, Blackburn thinks the logical and grammatical structure of our discourse is preserved. Since this view combines the expressive thesis of non-cognitivism with the logical appearance of moral realism, Blackburn dubs it “quasi-realism”. For a critical response to Blackburn’s attempted solution to the Frege-Geach Problem, see Wright (1988). For an accessible survey of the history of the debate surrounding the Frege-Geach Problem, see Schroeder (2008), and for attempts to articulate new hybrid theories that combine elements of both cognitivism as well as non-cognitivism, see Ridge (2006) and Boisvert (2008).
One complication in the ongoing debate between cognitivist versus non-cognitivist accounts of moral language is the growing realization of the difficulty in conceptually distinguishing beliefs from desires in the first place. Recognition of the mingled nature of cognitive and non-cognitive states can arguably be found in Aristotle’s view that how we perceive and conceptualize a situation fundamentally affects how we respond to it emotionally; not to mention Sigmund Freud’s commitment to the idea that our emotions themselves stem ultimately from (perhaps unconscious) beliefs (compare, Neu 2000). Much contemporary metaethical debate between cognitivists and non-cognitivists thus concerns the extent to which beliefs alone, desires alone, or some compound of the two—what J.E.J. Altham (1986) has dubbed “besires”—are capable of capturing the prescriptive and affective dimension that moral discourse seems to evidence (see Theories of Emotions).
A related issue regarding the semantics of metaethics concerns what it would even mean to say that a moral statement is “true” if some form of cognitivism were correct. The traditional philosophical account of truth (called the correspondence theory of truth) regards a proposition as true just in case it accurately describes the way the world really is independent of the proposition. Thus, the sentence “The cat is on the mat” would be true if and only if there really is a cat who is really on a mat. According to this understanding, moral expressions would similarly have to correspond to external features about the world in order to be true: the sentence “Murder is wrong” would be true in virtue of its correspondence to some “fact” in the world about murder being wrong. And indeed, several metaethical positions (often grouped under the title of “realism” or “objectivism”—see section four below) embrace precisely this view; although exactly what the features of the world are to which allegedly true moral propositions correspond remains a matter of serious debate. However, there are several obvious challenges to this traditional correspondence account of moral truth. For one thing, moral properties such as “wrongness” do not seem to be the sort of entities that can literally be pointed to or picked out by propositions in the same way that cats and mats can be, since the moral properties are not spatial-temporal objects. As David Hume famously put it,
Take any action allow’d to be vicious: Wilful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In which-ever way you take it, you find only certain passions, motives, volitions and thoughts. There is no other matter of fact in the case. (Hume 1740: 468)
Other possible ontological models for what moral “facts” might look like are considered in section four below. In more recent years, however, several alternative philosophical understandings of truth have proliferated which might allow moral expressions to be “true” without requiring any correspondence to external facts per se. Many of these new theories of moral truth hearken back to a suggestion by Ludwig Wittgenstein in the early twentieth century that the meaning of any term is determined by how that term is actually used in discourse. Building on this insight about meaning, Frank Ramsey (1927) extended the account to truth itself. Thus, according to Ramsey, the predicate “is true” does not stand for a property per se, but rather functions as a kind of abbreviation for the indirect assertion of other propositions. For instance, Ramsey suggested that to utter the proposition “The cat is on the mat” is to say the same thing as “The sentence ‘the cat is on the mat’ is true.” The phrase “is true” in the latter utterance adds nothing semantically to what is expressed in the former, since in uttering the former, the speaker is already affirming that the cat is on the mat. This is an instance of the so-called disquotational schema, that is, the view that truth is already implicit in a sentence without the addition of the phrase “is true.” Ramsey wielded this principle to defend a deflationary theory of truth, wherein truth predicates are stripped of any metaphysically substantial property, and reduced instead merely to the ability to be formally represented in a language. Saying that truth is thus stripped of metaphysics is not to say that it is determined by usage in an arbitrary or unprincipled way. This is because, while the deflationary theory defines “truth” merely as the ability to be represented in a language, there are always syntactic rules that a language must follow.
The grammar of a language thus constrains what can be properly expressed in that language, and therefore (on the deflationary theory) what can be true. Deflationary truth is in this way constrained by what may be called “warranted assertibility,” and since deflationary truth just is what can be expressed by the grammar of a language, we can say more strongly that truth is warranted assertibility.
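The disquotational idea at work here is often put schematically. The following rendering (a standard reconstruction in modern logical notation, not Ramsey’s own formalism) captures the claim that the predicate “is true” adds nothing semantically to the sentence it is applied to:

```latex
% Disquotational (equivalence) schema: for any declarative sentence p,
% asserting that "p" is true is equivalent to asserting p itself.
\[
  \text{``}p\text{'' is true} \;\longleftrightarrow\; p
\]
% Instance from the example in the text:
\[
  \text{``The cat is on the mat'' is true}
  \;\longleftrightarrow\;
  \text{the cat is on the mat}
\]
```

On the deflationary reading, the left-hand side of each biconditional carries no metaphysical commitment beyond what the right-hand side already expresses.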
Hilary Putnam (1981) has articulated an influential challenge to the deflationary account. He argues that deflationary truth is unable to accommodate the fact that we normally think of truth as eternal and stable. But if truth just is warranted assertibility (or what Putnam calls “rational acceptability”), then it becomes mutable since warranted assertibility varies depending on what information is available. For instance, the proposition “the Earth is flat” could have been asserted with warrant (that is, accepted rationally) a thousand years ago in a way that it could not be today because we now have more information available about the Earth. But, though warranted assertibility changed in this case, we would not want to say that the truth of the proposition “the Earth is flat” changed. In light of these problems, philosophers like Putnam refine the deflationary theory by substituting a condition of ideal warrant or justification, that is, where warranted assertibility is not relative to what specific information a speaker may have at a specific moment, but to what information would be accessible to an ideal epistemic agent. What kind of information would such an ideal speaker have? Putnam characterizes the ideal epistemic situation as involving information that is both complete (that is, involving everything relevant) and consistent (that is, not logically contradictory). These two conditions combine to effect a convergence of information for the ideal agent—a view Putnam calls “internal realism.”
This tradition of deflating truth—of what Jamie Dreier has described as “sucking the substance out of heavy-duty metaphysical concepts” (Dreier 2004: 26)—has received careful exposition in recent years by Crispin Wright. Wright (1992) defends a theory of truth he calls “minimalism.” Though indebted in fundamental ways to the tradition—from Wittgenstein to Ramsey to Putnam—discussed above, Wright’s position differs importantly from these accounts. Wright agrees with Putnam’s criticism of traditional deflationary theories of truth, namely that they make truth too variable by identifying it with something as mutable as warranted assertibility. However, Wright disagrees with Putnam that truth is constrained by the convergence of information that would be available to an epistemically ideal agent. This is because Wright thinks that it is apparent to speakers of a language that something may be true even if it is not justified in ideal epistemic conditions. Wright calls this sort of apparent truism a “platitude.” Platitudes, says Wright, are what ordinary language users pre-theoretically mean, and Wright identifies several specific platitudes we have concerning truth, for example, that a statement can be true without being justified, that truth-apt propositions have negations that are also thereby truth-apt, and so forth. Such platitudes serve the same purpose of checking and balancing truth that warranted assertibility or ideal convergence served in the theories of Ramsey and Putnam (Wright calls this check and balance “superassertability”). As Wright puts it, “If an interpretation of ‘true’ satisfies these platitudes, there is, for minimalism, no further, metaphysical question whether it captures a concept worth regarding as truth” (1992: 34).
Wright’s theory of minimalist truth has been extraordinarily influential in metaethics, particularly among non-cognitivists eager to accommodate some of the logical structure that moral discourse apparently evidences, but without viewing moral utterances as expressing beliefs that must literally correspond to facts. Such a non-cognitivist theory of minimalist moral truth is defended by Simon Blackburn (1993), who characterizes the resultant view as “quasi-realism” (as discussed in section 3a above). For a critical discussion of the extent to which non-cognitivist views such as Blackburn’s quasi-realism can leverage Wright’s theory of minimalism, see the debate between Michael Smith (1994) and John Divers and Alexander Miller (1994).
If moral truth is understood in the traditional sense of corresponding to reality, what sort of features of reality could suffice to accommodate this correspondence? What sort of entity is “wrongness” or “goodness” in the first place? The branch of philosophy that deals with the way in which things exist is called “ontology”, and metaethical positions may also be divided according to how they envision the ontological status of moral values. Perhaps the biggest schism within metaethics is between those who claim that there are moral facts that are “real” or “objective” in the sense that they exist independently of any beliefs or evidence about them, versus those who think that moral values are not belief-independent “facts” at all, but are instead created by individuals or cultures in sometimes radically different ways. Proponents of the former view are called realists or objectivists; proponents of the latter view are called relativists or subjectivists.
Realism / objectivism is often defended by appeal to the normative or political implications of believing that there are universal moral truths that transcend what any individual or even an entire culture might think about them (see sections two and eight). Realist positions, however, disagree about what precisely moral values are if they are causally independent of human belief or culture. According to some realists, moral values are abstract properties that are “objective” in the same sense that geometrical or mathematical properties might be thought to be objective. For example, it might be thought that the sentence “Dogs are canines” is true in a way that is independent of what humans think about it, without thereby believing that there is a literal, physical thing called “dogs”—for dogs-in-general (rather than a particular dog, say, Fido) is an abstract concept. Some moral realists envision moral values as real without being physical in precisely this way; and because of the similarity between this view and Plato’s famous Theory of Forms, such moral realists are also sometimes called moral Platonists. According to such realists, moral values are real without being reducible to any other kinds of properties or facts: moral values instead, according to these realists, are ontologically unique (or sui generis) and irreducible to other kinds of properties. Proponents of this type of Platonist or sui generis version of moral realism include G.E. Moore (1903), W.D. Ross (1930), W.D. Hudson (1967), Iris Murdoch (1970, arguably), and Russ Shafer-Landau (2003). Tom Regan (1986) also discusses the effect of this metaethical position on the general intellectual climate of the fin de siècle movement known as the Bloomsbury Group.
Other moral realists, though, conceive of the ontology of moral properties in much more concrete terms. According to these realists, moral properties such as “goodness” are not purely abstract entities, but are always instead realized and embodied in particular physical states of affairs. These moral realists often draw analogies between moral properties and scientific properties such as gravity, velocity, mass, and so forth. These scientific concepts are commonly thought to exist independently of what we think about them, and yet they are not part of an ontologically distinct world of pure, abstract ideas in the way that Plato envisioned. So too might moral properties ultimately be reducible to scientific features of the world in a way that preserves their objectivity. An early proponent of such a naturalistic view is arguably Aristotle himself, who anchored his ethics to an understanding of what biologically makes human life flourish. For a later Aristotelian moral realism, see Paul Bloomfield (2001). However, for questions about the extent to which Aristotelianism can truly pair with moral realism, see Robert Heinaman (1995). Note also that several other metaethicists who share broadly Aristotelian conceptions of human needs and human flourishing nonetheless reject realism, arguing that even a shared human nature still essentially locates moral values in human sensibility rather than in some trans-human moral reality. For examples of such naturalistic moral relativism, see Philippa Foot (2001) and David B. Wong (2006). Similar claims about the ineliminable roles that human sensibility and language play in constituting moral reality have looked less to Aristotle and more to Wittgenstein; although, as with the Aristotelian views, there may be some discomfort allowing views that closely link morality with human sensibilities to be called genuinely “realist.” For examples, see in particular David Wiggins (1976) and Sabina Lovibond (1983).
Other notable theorists who have advanced Wittgensteinian accounts of the constitutive role that language and context play in our understanding of morality include G.E.M. Anscombe (1958) and Alasdair MacIntyre (1981), although both are explicitly agnostic about whether this commits them to moral realism or relativism.
The naturalistic tradition of moral realism is continued by contemporary theorists such as Alan Gewirth (1980), Deryck Beyleveld (1992), and Michael Boylan (2004) who similarly seek to ground moral objectivity in certain universal features of humans. Unlike Aristotelian appeals to our biological and social nature, however, these theorists adopt a Kantian stance, which appeals to the capacities and requirements of rational agency—for example, what Gewirth has called “the principle of generic consistency.” While these neo-Kantian theories are more focused on questions about the justification of moral beliefs rather than on the existence of belief-independent values or properties, they may nonetheless be classed as moral realisms in light of their commitment to the objective and universal nature of rationality. For commentary and discussion of such theories, see in particular Steigleder (1999), Boylan (1999), Spence (2006), and Gordon (2009).
Other naturalistic theories have looked to scientific models of property reductionism as a way of understanding moral realism. In the same way that, for instance, our commonsense understanding of “water” refers to a property that, at the scientific level, just is H2O, so too might moral values be reduced to non-moral properties. And, since these non-moral properties are real entities, the resultant view about the values that reduce to them can be considered a form of moral realism—without any need to posit trans-scientific, other-worldly Platonic entities. This general approach to naturalistic realism is often referred to as “Cornell Realism” in light of the fact that several of its prominent advocates studied or taught at Cornell University. Geoff Sayre-McCord (1988) has also famously dubbed it “New Wave Moral Realism.” Individual proponents of such a view may have divergent views concerning how the alleged “reduction” of the moral to the non-moral works precisely. Richard Boyd (1988), for instance, defends the view that the reductive relationship between moral and non-moral properties is a posteriori and necessary, but not thereby singular; moral properties might instead reduce to a “homeostatic cluster” of different overlapping non-moral properties.
Several other notable examples of scientifically-minded naturalistic moral realism have been defended. Nicholas Sturgeon (1988) has similarly argued in favor of a reduction of moral to non-moral properties, while emphasizing that a reduction at the level of the denotation or extension of our moral terms need not entail a corresponding reduction at the level of the connotation or intension of how we talk about morality. In other words, we can affirm that values just are (sets of) natural properties without thereby thinking we can or should abandon our moral language or explanatory/justificatory processes. David Brink (1989) has articulated a similar type of naturalistic moral realism which emphasizes the epistemological and motivational aspects of Cornell Realism by defending a coherentist account of justification and an externalist theory of motivation, respectively. Peter Railton (1986) has also offered a version of naturalistic moral realism according to which moral properties are reduced to non-moral properties; however, the non-moral properties in question are not so much scientific properties (or clusters of such properties), but are instead constituted by the “objective interests” of ideal epistemic agents or “impartial spectators.” Yet another variety of naturalistic moral realism has been put forward by Frank Jackson and Philip Pettit (1995). According to their view of “analytic moral functionalism,” moral properties are reducible to “whatever plays their role in mature folk morality.” Jackson’s (1998) refinement of this position—which he calls “analytic descriptivism”—elaborates that the “mature folk” properties to which moral properties are reducible will be “descriptive predicates” (although Jackson allows for the possibility that these descriptive predicates need not be physical or even scientific).
A helpful way to understand the differences between all these varieties of moral realism—namely, the Platonic versus the naturalistic versions—is by appeal to a famous argument advanced by G.E. Moore at the beginning of twentieth-century metaethics. Moore—himself an advocate of the Platonic view of morality—argued that moral properties such as “good” cannot be solely defined by scientific, natural properties such as “biological flourishing” or “social coordination” for the simple reason that, given such an alleged definition, we could still always sensibly ask whether such scientific properties were themselves truly good or not. The apparent ability to always keep the moral status of any scientific or natural thing an “open question” led Moore to reject any analysis of morality that defined moral values as anything other than simply “moral,” period. Any attempt to violate this ban must result, Moore believed, in the commission of a “naturalistic fallacy.” Moral Platonists or non-naturalistic realists tend to view Moore’s Open Question Argument as persuasive. Naturalistic realists, by contrast, argue that Moore’s argument is unconvincing on the grounds that not all truths—moral or otherwise—necessarily need to be true solely by definition. After all, such realists will argue, a scientific statement such as “Water is H2O” is true even though people can (and did for a long time) question this definition.
Michael Smith (1994) has referred to this realist strategy of defining moral properties as naturalistic properties which humans discover, rather than which are simply true by definition, as “synthetic ethical naturalism.” One argument against this form of moral realism has been developed by Terry Horgan and Mark Timmons (1991), on the basis of a thought-experiment called Moral Twin Earth. This thought-experiment asks us to imagine two different worlds, the actual Earth as we know it and an alternate-reality Earth on which the same moral terms as those on the actual Earth are used in just the same practical ways. However, Horgan and Timmons point out that we can at the same time imagine that the moral terms on our actual Earth refer to, say, properties that maximize overall happiness (as Utilitarianism maintains), while also imagining that the moral terms used on hypothetical Moral Twin Earth refer to properties of universal rationality (as Kantian normative theorists maintain). If naturalistic moral realism were correct, then, the moral terms used on actual Earth and those used on Moral Twin Earth would have to have different meanings, because they would refer to different natural properties. Yet the inhabitants of the two worlds use their moral terms in exactly the same ways, and intuitively they mean the same thing by them. And since naturalistic (a.k.a. Cornell) moral realism maintains that moral properties are identical at some level to natural properties, Horgan and Timmons think this thought-experiment disproves naturalistic realism.
In other words, if the naturalistic realists were correct about the reduction of moral to non-moral predicates, then the Earthlings and Twin Earthlings would have to be interpreted not as genuinely disagreeing about morality, but as instead talking past one another altogether; and, according to Horgan and Timmons, this would be highly counter-intuitive, since it seems on the surface that the two parties are truly disagreeing.
Centrally at issue in the Moral Twin Earth argument is the question of how precisely naturalistic realists envision moral properties being “reduced” to natural, scientific properties in the first place. Such realists frequently invoke the metaphysical relationship of supervenience to account for the way that moral properties might connect to scientific properties. For one property or set of properties to supervene on another means that there can be no difference in the first without some corresponding difference in the second. For instance, to say that the color property of greenness supervenes on grass is to say that if two plots of grass are identical in all biological, scientific ways, then they will be green in exactly the same way too. Simon Blackburn (1993: 111-129), however, has raised a serious objection to using this notion to explain moral supervenience. Blackburn claims that if moral properties merely supervened on natural properties, without being analytically entailed by them, then we should be able to imagine two different worlds (akin to Horgan and Timmons’ Moral Twin Earth) that agree in all their natural, scientific facts and yet differ morally, such that killing is morally wrong in one world but not wrong in the other. And if we can coherently imagine these two worlds, then there is no reason why we should not also be able to imagine a third “mixed” world in which killing is sometimes wrong and sometimes not, even though the natural facts remain the same. But Blackburn does not think we can in fact imagine such a strange morally mixed world—for, he believes that it is part of our conception of morality that moral wrongness or rightness does not just change haphazardly from case to case, all things being equal. As Blackburn says, “While I cannot see an inconsistency in holding this belief [namely, the view that moral propositions report factual states of affairs upon which the moral properties supervene in an irreducible way], it is not philosophically very inviting.
Supervenience becomes, for the realist, an opaque, isolated, logical fact for which no explanation can be proffered” (1993: 119). In this way, Blackburn is not objecting to the supervenience relation per se, but rather to attempts to leverage this relation in favor of moral realism. For a critical examination of supervenience in principle, see Kim (1990); Blackburn attempts to refurbish his notion of supervenience in response to Kim’s critique in Blackburn (1993: 130-148).
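The supervenience relation at issue in this debate can be stated schematically. The following first-order sketch is a standard textbook formulation rather than Blackburn’s own wording, with M ranging over moral properties and N over natural properties:

```latex
% M-properties supervene on N-properties: any two items (or worlds)
% alike in all natural respects are alike in all moral respects.
\[
  \forall x \,\forall y \;
  \Big( \forall N \big( Nx \leftrightarrow Ny \big)
  \;\rightarrow\;
  \forall M \big( Mx \leftrightarrow My \big) \Big)
\]
% Contrapositive: a moral difference requires some natural difference.
\[
  \exists M \,\neg\big( Mx \leftrightarrow My \big)
  \;\rightarrow\;
  \exists N \,\neg\big( Nx \leftrightarrow Ny \big)
\]
```

Blackburn’s complaint, on this rendering, is that the realist can offer no explanation of why the conditional in the first formula should hold if moral properties are genuinely distinct from, and irreducible to, natural ones.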
Apart from the debate between naturalistic versus non-naturalistic moral realists, some metaethicists have explored the possibility that moral properties might be “real” without needing to be fully independent from human sensibility. According to these theories of moral realism, moral values might be akin to so-called “dispositional properties.” A dispositional property (sometimes understood as a “secondary quality”) is envisioned as a sort of latent potential or disposition, inherent in some external object or state of affairs, that becomes activated or actualized through involvement on the part of some other object or state of affairs. Thus, for example, the properties of being fragile or looking red are thought to involve a latent disposition to break under certain conditions or to appear red in a certain light. The suggestion that moral values might be similarly dispositional was made famous by John McDowell (1985). According to this view, moral properties such as “goodness” can still be real at the level of dispositional possibility (in the same way that glass is still fragile even when it is not breaking, or that blood is red even in the darkness), while still only being expressible by reference to the features (actual moral agents, in the case of morality) that would actualize those dispositions. For similar metaethical positions that seek to articulate a model of moral values which are objective, yet relational to aspects of human sensibility, see David Wiggins (1976), Sabina Lovibond (1983), David McNaughton (1988), Mark Platts (1991), Jonathan Dancy (2000), and DeLapp (2009). Arguments against this form of dispositional moral realism typically attempt to leverage alleged disanalogies between moral properties and other, non-moral dispositional properties (see especially Blackburn 1993).
Other metaethical positions reject altogether the idea that moral values— whether naturalistic, non-naturalistic, or dispositional—are real or objective in the sense of being independent from human belief or culture in the first place. Such positions instead insist on the fundamentally anthropocentric nature of morality. According to such views, moral values are not “out there” in the world (whether as scientific properties, dispositional properties, or Platonic Forms) at all, but are created by human perspectives and needs. Since these perspectives and needs can vary from person to person or from culture to culture, these metaethical theories are usually referred to as either “subjectivism” or “relativism” (sometimes moral nihilism as well; although, this is a more normatively loaded term). Many of the reasons offered in favor of metaethical relativism involve either a rejection of the realist ontological models discussed above, or else an appeal to psychological, epistemological, or anthropological considerations (see sections 5, 6, 7 below).
Most forms of metaethical relativism envision moral values as constructed for different, and sometimes incommensurable, human purposes, such as social coordination. This view is explicitly endorsed by Gilbert Harman (1975), but may also be implicitly associated in different ways with any position that conceives of moral value as constructed by divine commands (Adams 1987; see also Divine Command Theory), idealized human rationality (Korsgaard 1996) or perspective (Firth 1952), or a social contract between competing interests (Scanlon 1982; Copp 2007). For this reason, the view is also sometimes known as moral constructivism (compare, Shafer-Landau 2003: 39-52). Furthermore, metaethical relativism must be distinguished from the non-cognitivist metaethical views considered above in section three. Non-cognitivism is a semantic thesis about what moral utterances mean—namely, that moral utterances are neither true nor false at all, but instead express prescriptive endorsements or norms. Metaethical subjectivism/relativism/constructivism, by contrast, acknowledges the semantic accuracy of cognitivism—according to which moral utterances are either true or false— but insists that such utterances are always, as it happens, false. That is, metaethical subjectivism/relativism/constructivism is a thesis about the (lack of) moral facts in the world, not a thesis about what we humans are doing when we try to talk about such facts. And since metaethical subjectivism/relativism/constructivism thinks that our cognitivist moral language is systematically false, it may also be known as moral error theory (Mackie 1977) or moral fictionalism (Kalderon 2005).
Although metaethical relativism is often depicted as embracing a valueless world of moral free-for-all, more sophisticated versions of the theory have attempted to place certain boundaries on morality in a way that still affirms the fundamental human-centeredness of values. Thus, David B. Wong (1984; 2006) has defended a view he calls pluralistic moral relativism according to which moral values are constructed differently by different social groups for different purposes; but in such a way that the degree of relativity will be nonetheless constrained by a generally uniform biological account of human nature and flourishing. A similar conception of metaethical relativism that is nonetheless grounded in some notion of universal human biological characteristics may be found in Philippa Foot (2001).
One of the most pressing questions within analytic metaethics concerns how morality engages our embodied human psychologies. Specifically, how (if at all) do moral judgments move us to act in accordance with them? Is there any reason to be moral for its own sake, and can we give any psychologically persuasive reasons to others to act morally if they do not already acknowledge such reasons? Is it part of the definition of moral concepts such as “right” and “wrong” that they should or should not be pursued, or is it possible to know that, say, murder is morally wrong, but nonetheless not recognize any reason not to murder?
Those who argue that the psychological motivation to act morally is already implicit in the judgment that something is morally good, are commonly called motivational internalists. Motivational internalists may further be divided into weak motivational internalists or strong motivational internalists, according to the strength of the motivation that they think true moral judgments come pre-packaged with. Thus, the Socratic view that evil is always performed out of ignorance (for no one, goes the argument, would knowingly do something that would morally damage their own character or soul) may be seen as a type of strong motivational internalism. Weaker versions of motivational internalism may insist only that moral judgments supply their own impetus to act accordingly, but that this impetus can (and perhaps often does) get overruled by countervailing motivational forces. Thus, Aristotle’s famous account of “weakness of the will” has been interpreted as a weaker sort of motivational internalism, according to which a person may recognize that something is morally right, and may even want at some level to do what is right, but is nonetheless lured away from such action, perhaps through stronger temptations.
Apart from what actually motivates people to act in accordance with their moral judgments, however, there is the somewhat different question about whether such judgments also supply their own intrinsic reasons to act in accordance with them. Reasons-externalists assert that sincerely judging that something is morally wrong, for instance, automatically supplies a reason for the judger that would justify her acting on the basis of that judgment, that is, a reason that is external to or independent of what the judger herself feels or wants. This need not mean that such a justification is an objectively adequate justification (that would hinge on whether one was a realist or relativist about metaethics), only that it would make sense as a response to the question “Why did you do that?” to say “Because I judged that it was morally right” (compare, McDowell 1978; Shafer-Landau 2003). According to reasons-internalists, however, judging and justifying are two conceptually different matters, such that someone could make a legitimate judgment that an action was morally wrong and still fail to recognize any reason that would justify their not performing it. Instead, sufficiently justifying moral reasons must be grounded internally in a person’s antecedent psychological makeup, that is, in her pre-existing desires and motivations (compare, Foot 1972; Williams 1979).
Closely related to the debates between internalism and externalism is the question of the metaethical status of alleged psychopaths or sociopaths. According to some moral psychologists, such individuals are characterized by a failure to distinguish moral values from merely conventional values. Several metaethicists have pointed to the apparent existence of psychopaths as support for the truth of either motivational externalism or reasons-internalism; since psychopaths seem to be able to judge that, for instance, murder or lying are morally wrong, but either feel little or no motivation to refrain from these things, or else do not recognize any reason that should justify refraining from these things. Motivational internalists and reasons-externalists, however, have also sought to accommodate the challenge presented by the psychopath, for example, by arguing that the psychopath does not truly, robustly know that what she is doing is wrong, but only knows how to use the word “wrong” in roughly the way that the rest of society does.
A separate issue related to the internalist/externalist debate concerns the apparent psychological uniqueness of moral judgments. Specifically, at least according to the motivational internalist and reasons-externalist, moral judgments are supposed to supply, respectively, their own inherent motivations or justifying reasons, that is, their own intrinsic quality of “to-be-pursuedness.” Yet, this would seem to render morality suspiciously unique—or what J.L. Mackie (1977) calls “metaphysically queer”— since all other, non-moral judgments (for example, scientific, factual, or perceptual judgments) do not seem to provide any inherent motivations or justifications. The objection is not that non-moral judgments (for example, “This coffee is decaffeinated”) supply no motivational or justificatory force, but merely that any such motivation or justificatory force hinges on other psychological factors independent of the judgment itself (that is, the judgment about the coffee being decaffeinated will only motivate or provide a reason for you to drink it if you already have the desire to avoid caffeine). Unlike the factual judgment about the coffee, though, the moral judgment that an action is wrong is supposed to be motivating or reasons-giving regardless of the judger’s personal desires or interests. Motivational internalists or reasons-externalists have responded to this alleged “queerness” by either embracing the uniqueness of moral judgments, or else by attempting to articulate other examples of non-moral judgments which might also inherently supply motivation or reasons.
Not only has psychology been of interest to metaethicists, but metaethics has also been of interest to psychologists. The movement known as experimental philosophy (compare, Appiah 2008; Knobe and Nichols 2008)— which seeks to supplement theoretical philosophical claims with empirical attention to how people actually think and act— has yielded numerous suggestive findings about a variety of metaethical positions. For example, drawing on empirical research in social psychology, several philosophers have suggested that moral judgments, motivations, and evaluations are highly sensitive to situational variables in a way that might challenge the universality or autonomy of morality (Flanagan 1991; Doris 2002). Other moral psychologists have explored the possibilities of divergences in moral reasoning and valuation with respect to gender (Gilligan 1982), ethnicity (Markus and Kitayama 1991; Miller and Bersoff 1992), and political affiliation (McCrae 1992; Haidt 2007).
The specific debate between metaethical realism and relativism has also recently been examined from experimental perspectives. It has been argued that an empirically-informed analysis of people’s actual metaethical commitments (such as they are) is needed as a check and balance on the many frequent appeals to “commonsense morality” or “ordinary moral experience.” Realists as well as relativists have often used such appeals as a means of locating a burden of proof for or against their theories, but the actual experimental findings about lay-people’s metaethical intuitions remain mixed. For examples of realists assuming folk realism, see Brink (1989: 25), Smith (1994: 5), and Shafer-Landau (2003: 23); for examples of relativists assuming folk relativism, see Harman (1985); and for examples of relativists assuming folk realism, see Mackie (1977) and Joyce (2001: 70). William James (1896: 14) offered an early psychological description of humans as “absolutists by instinct,” although James’ specific metaethical commitments remain unclear (compare, Suckiel 1982). On the one hand, Shaun Nichols (2004) has argued that metaethical relativism is particularly pronounced among college undergraduates. On the other hand, William Rottschaefer (1999) has argued instead that moral realism is empirically supported by attention to effective child-rearing practices.
Another psychological topic that has been of interest to metaethicists is the nature and significance of moral emotions. One aspect of this debate has been the perennial question of whether it is fundamentally rationality which supplies our moral distinctions and motivations, or whether these are instead generated or conditioned by passions and sentiments which are separate from reason. (See section 5a above for more on this debate.) In particular, this debate was one of the dividing issues in eighteenth-century ethics between the so-called Intellectualist School (for example, Ralph Cudworth, William Wollaston, and so forth), which stressed the rational grasp of certain “moral fitnesses” on the one hand, and the Sentimentalist School (for example, Shaftesbury, David Hume, and so forth), which stressed the role played by our non-cognitive “moral sense” on the other hand (compare, Selby-Bigge 1897; see also Darwall 1995 for an application of these views to contemporary metaethical debates about moral motivation and knowledge).
Aside from motivational and epistemological issues, however, moral emotions have been of interest to metaethicists in terms of the apparent phenomenology they furnish. In particular, attention has been given to which metaethical theory, if any, better accommodates the existence of self-regarding “retributive emotions,” such as guilt, regret, shame, and remorse. Martha Nussbaum (1986) and Bernard Williams (1993), for example, have drawn compelling attention to the powerful emotional responses characteristic of Greek tragedy, and the so-called moral luck that such experiences seem to involve. According to Williams (1965), sensitivity to moral dilemmas will reveal a picture of the moral sphere according to which even the best-intentioned actions may leave moral “stains” or “remainders” on our character. Michael Stocker (1990) extends this analysis of moral emotions to more general scenarios of ineliminable conflicts between values, and Kevin DeLapp (2009) explores the specific implications of tragic emotions for theories of moral realism. By contrast, Gilbert Harman (2009) has argued against the moral (let alone metaethical) significance of guilt feelings. Patricia Greenspan (1995), however, has leveraged the phenomenology of guilt (particularly as she identifies it in cases of unavoidable wrong-doing) as a defense of moral realism. For more perspectives on the nature and significance of moral dilemmas, see Gowans (1987). For more on the philosophy of emotions in general, see Calhoun & Solomon (1984).
Analytic metaethics also explores questions of how we make moral judgments in the first place, and how (if at all) we are able to know moral truths. The field of moral epistemology can be divided into questions about what moral knowledge is, how moral beliefs can be justified, and where moral knowledge comes from.
Moral epistemology explores the contours of moral knowledge itself—not the specific content of individual moral beliefs, but the conceptual characteristics of moral beliefs as a general epistemic category. Here, one of the biggest questions concerns whether moral knowledge involves claims about generic moral values such as “goodness” or “wrongness” (so-called “thin” moral concepts) or whether moral knowledge may be obtained at the somewhat more concrete level of concepts such as “courage”, “intemperance”, or “compassion” (which seem to have a “thicker” descriptive content). The general methodology of the thick-thin distinction was popularized by Clifford Geertz (1973) following the introduction of the terminology by Gilbert Ryle (1968). Its specific application to metaethics, however, is due largely to Bernard Williams’ (1985) famous argument that genuine (that is, action-guiding) moral knowledge can only exist at the thicker level of concrete moral concepts. This represents what Williams called the “limits of philosophy,” since philosophical theorizing aims instead at more abstract, thin moral principles. Furthermore, according to Williams, this epistemological point about the thickness of moral knowledge has important implications for the ontology of moral values; namely, Williams defends a kind of metaethical relativism on the grounds that, even if thin moral concepts such as “goodness” are universal across different societies, the more specific thick concepts that he thinks really matter to us morally are specified in often divergent ways, for example, two societies that both praise “goodness” may nonetheless have quite different understandings of what counts as “bravery”.
Emphasis on thick moral concepts has been prevalent in virtue ethics in general. For example, Alasdair MacIntyre (1984) has famously defended the neo-Aristotelian view that ethics must be grounded in a “tradition” that is coherent and stable enough to thickly specify virtues and virtuous role-models. Indeed, part of the challenge that MacIntyre sees facing contemporary societies is that increased cross-cultural interconnectedness has fomented a fragmentation of traditional virtue frameworks, engendering a moral cacophony that threatens to undermine moral motivation, knowledge, and even our confidence in what counts as “rational” (MacIntyre 1988). More recently, David B. Wong (2000) has offered a contemporary Confucian response to MacIntyre-style worries about moral fragmentation in democratic societies, arguing that pluralistic societies may still retain a coherent tradition in the form of civic “rituals” such as voting.
A related metaethical issue concerns the scope of moral judgments and the extent to which such judgments may ever legitimately be made universally or whether they ought instead to be indexed to particular situations or contexts; this view is commonly known as moral particularism (compare, Hooker and Little 2000; Dancy 2006).
Metaethical positions may also be divided according to how they envision the requirements of justifying moral beliefs. Traditional philosophical accounts of epistemological justification are requisitioned and modified specifically to accommodate moral knowledge. A popular version of a theory of moral-epistemic justification may be called metaethical foundationalism—the view that moral beliefs are epistemically justified by appeal to other moral beliefs, until this justificatory process terminates at some bedrock beliefs whose own justifications are “self-evident.” By contrast, metaethical coherentism requires for the epistemic justification of a moral belief only that it be part of a network of other beliefs, all of which are jointly consistent (compare, Sayre-McCord 1985; Brink 1989). Mark Timmons (1996) also defends a form of metaethical contextualism, according to which justification is determined either by reference to some relevant set of epistemic practices and norms (a view Timmons calls “normative contextualism” and which also bears strong similarity with the movement known as virtue epistemology), or else by reference to some more basic beliefs (a view Timmons calls “structural contextualism” and which seems very similar to foundationalism). Kai Nielsen (1997) has offered another account of contextualist ethical justification with reference to internal systems of religious belief and explanation (see Religious Epistemology).
Early 21st century work in metaethics has gone into exploring precisely what is involved in the “self-evidence” envisioned by foundationalist accounts of moral justification. Roger Crisp (2002) notes that most historical deployments of “self-evidence” in moral epistemology tended to associate it with obviousness or certainty. For instance, the ethical intuitionism of much of the early part of the 20th century (particularly following Moore’s Open Question Argument, as discussed above) tended to adopt this stance toward moral truths (compare, Stratton-Lake 2002). It was this understanding of metaethical foundationalism which led J.L. Mackie (1977) to object to what he saw as the “epistemological queerness” of realist or objectivist ontology. In later years, though, more sophisticated versions of metaethical foundationalism have sought interpretations of the “self-evidence” of basic, justifying moral beliefs in a way that need not involve dogmatic or naive assumptions of obviousness; but might instead require only that such basic moral beliefs are epistemically justified non-inferentially (Audi 1999; Shafer-Landau 2003). One candidate for what it might mean for a moral belief to be epistemically justified non-inferentially has involved an appeal to the model of perceptual beliefs (Blum 1991; DeLapp 2007). Non-moral perceptual beliefs are typically viewed as decisive vis-à-vis justification, provided the perceiver is in appropriate, reliable perceptual conditions. In other words, according to this view, the belief “There is a coffee mug in front of me” is epistemically justified just in case one takes oneself to be perceiving a coffee mug and provided that one is not suffering from hallucinations, merely using one’s peripheral vision, or in a dark room. (See also epistemology of perception.)
Although not addressing this issue of moral perception, Russ Shafer-Landau (2003) has argued on a related note that, ultimately, the difference between metaethical naturalism versus non-naturalism (as described in section 4a) might not be so much ontological or metaphysical, as it is epistemological. Specifically, according to Shafer-Landau, metaethical naturalists are those who require that the epistemic justification of moral beliefs be inferred on the basis of other non-moral beliefs about the natural world; whereas metaethical non-naturalists allow for the epistemic justification of moral beliefs to be terminated with some brute moral beliefs that are themselves sui generis.
Aside from the questions of the scope, source, and justification of moral beliefs, another epistemological facet of metaethics concerns the explanatory role that putative moral properties play with respect to moral beliefs. A useful way to frame this issue is by reference to Roderick Chisholm’s (1981) influential point about direct attribution. Chisholm noted that we refer to external things by attributing properties to them directly. Using this language, we may frame the metaethical question as whether or not our attribution of moral properties to actions, characters, and so forth, is “direct” (that is, external). Gilbert Harman (1977) has famously argued that our attribution of moral properties is not direct in this way. According to Harman, objective moral properties, if they existed, would be explanatorily impotent, in the sense that our specific, first-order moral beliefs can already be sufficiently accounted for by appealing to naturalistic, psychological, or perceptual factors. For example, if we were to witness people gleefully torturing a defenseless animal, we would likely form the belief that their action is morally wrong; but, according to Harman, we could adequately explain this moral evaluation solely by citing various sociological, emotional, behavioral, and perceptual causal factors, without needing to posit any mysterious additional properties that our evaluation is also channeling. This explanatory impotence, Harman believes, constitutes a serious disanalogy between, on the one hand, the role that abstract metaethical properties play in actual (first-order) moral judgments and, on the other hand, the role that theoretical scientific entities play in actual (first-order) perceptual judgments. For example, imagine that we were witnessing the screen-representation of a particle accelerator, instead of people torturing an animal. 
Although we do not literally see a subatomic particle on the screen (rather, we see a bunch of pixels which we interpret as referring to a subatomic particle) any more than we literally see “wrongness” floating around the animal-torturers, the essential difference between the two cases is that the additional abstract belief that there really are subatomic particles is necessary to explain why we infer them on the basis of screen-pixels; whereas, according to Harman, the alleged property of objective “wrongness” is unnecessary to explain why we disapprove of torture. Nicholas Sturgeon (1988), however, has argued contrary to Harman that second-order metaethical properties do play legitimate explanatory roles, for the simple reason that they are cited in people’s justification of why they find the torturing of animals morally wrong. Thus, for Sturgeon, what will count as the “best explanation” of a phenomenon—namely, the phenomenon of morally condemning the torturing of an animal—must be understood in the broader context of our overall explanatory goals, one of which will be to make sense of why we think that torturing animals is objectively wrong in the first place.
Although much of analytic metaethics concerns rarified debates that can often be highly abstracted from actual, applied moral concerns, several metaethical positions have also drawn heavily on cultural anthropological considerations to motivate or flesh-out their views. After all, as discussed above in section one, it has often been actual, historical moments of cultural instability or diversity that have stimulated metaethical reflection on the nature and status of moral values.
One of the most influential anthropological aspects of metaethics concerns the apparent challenge that pervasive and persistent cross-cultural moral disagreement would seem to present for moral realists or objectivists. If, as the realist envisions, moral values were truly universal and objective, then why is it the case that so many different people seem to have such drastically different convictions about what is right and wrong? The more plausible explanation of the fact that people persistently disagree about moral matters, so the argument goes, is simply that there are no objective moral truths capable of settling their dispute. As opposed to the apparent convergence in other, non-moral realms of dispute (for example, scientific, perceptual, and so forth), moral disagreement seems both ubiquitous and largely resistant to rational adjudication. J.L. Mackie (1977) leverages these features of moral disagreement to motivate what he calls The Argument from Relativity. This argument begins with the descriptive, anthropological observation that different cultures endorse different moral values and practices, and then argues as an inference to the most likely explanation of this fact that metaethical relativism best accounts for such cross-cultural discrepancies.
Mackie refers to such cross-cultural moral differences as “well-known” and, indeed, it seems prima facie obvious that different cultures have different practices. Mackie’s argument, however, requires a diversity of practices that is not merely descriptively different on the surface, but that is deeply morally different, if not ultimately incommensurable. James Rachels (1986) describes the difference between surface, descriptive difference versus deep, moral difference by reference to the well-worn example of the traditional Inuit practice of leaving elders to die from exposure. Although at the surface level of description, this practice seems radically different from contemporary Western attitudes toward the ethical treatment of the elderly (pervasive elder-abuse notwithstanding), the underlying moral justification for the practice—namely, that material resources are limited, the elders themselves choose this fate, the practice is a way for elders to die with dignity, and so forth—sounds remarkably similar in spirit to the familiar sorts of moral values contemporary Westerners invoke.
Cultural anthropology itself has generated controversy regarding the extent as well as the metaethical significance of moral differences at the deep level of fundamental justifications and values. Responding to both the assumption of cultural superiority as well as the Romantic attraction to viewing exotic cultures as Noble Savages, early twentieth-century anthropologists frequently adopted a methodology of relativism, on the grounds that accurate empirical information would be ignored if a cultural difference was examined with any a priori moral bias. An early exponent of this anthropological relativism was William Graham Sumner (1906) who, reflecting on what he referred to as different cultural folkways (that is, traditions or practices), claimed provocatively that, “the folkways are their own warrant.” Numerous anthropologists who were influenced by Franz Boas (1911) adopted a similar refusal to morally evaluate cross-cultural differences, culminating in an explicit embrace of metaethical relativism by anthropologists such as Ruth Benedict (1934) and Melville Herskovits (1952).
Several notable philosophers in the Continental tradition have also affirmed the sociological and anthropological relativism mentioned above. Specifically, the deconstructivism of Jacques Derrida, with its suspicion regarding “logocentric” biases, might be understood as a warning against metaethical objectivism. Instead, a deconstructivist might argue that ethical meaning (like all meaning) is characterized by what Derrida called différance, that is, an intractable un-decidability. (See Derrida (1996), however, for the possibility of a less relativistic deconstructivist ethics.) Other contemporary Continental approaches have similarly eschewed realism. For example, Mary Daly (1978) has defended a radical feminist critique of the sexual biases inherent in how we talk about values. For other perspectives on the possible tensions between feminism and the metaethics of cultural diversity, see Okin (1999) and Nussbaum (1999: 29-54). Michel Foucault (1984) is also well-known for his general criticism of the uses and abuses of power in the construction and expression of moral valuations pertaining to mental health, sexuality, and criminality. Similar critiques concerning the transplantation of a particular set of cultural values to other cultural contexts have been expressed by a number of post-colonialists and literary theorists, who have theorized about the imperialism, silencing (Spivak 1988), Orientalism (Said 1978), and cultural hybridity (Bhabha 1994) such moral universalism may involve.
For all the apparent cross-cultural moral diversity, however, there have also been several suggestions against extending anthropological relativism to the metaethical level. First, a variety of empirical studies seem to suggest that the degree of moral similarity at the deep level of fundamental justifications and values may be greater than Boas and his students anticipated. Thus, for example, Jonathan Haidt (2004) has argued that cross-cultural differences show strong evidence of converging on a finite number of basic moral values (what Haidt calls “modules”). From a somewhat more abstract perspective, Thomas Kasulis (2002) has also defended the view that cross-cultural differences can be sorted into two fundamental “orientations.” However, the congealing of cross-cultural differences around a small, finite number of basic values need not prove moral realism—for those basic values may themselves still be ultimately relative to human needs and perspectives (compare, Wong 2006).
There are also several theoretical challenges to inferring metaethical relativism from anthropological differences. For one thing, as Michele Moody-Adams (1997) has argued, metaethical assessments about the degree or depth of moral differences are “empirically underdetermined” by the anthropological description of the practices themselves. For example, anthropological data about the moral content of a culturally different practice may be biased on behalf of the cultural informant who supplies the data or characterization. Similar critiques of cross-cultural moral relativism have leveraged what is known as The Principle of Charity—the hermeneutic insight that differences must at least be commensurable enough to even be framed as “different” from one another in the first place. Thus, goes the argument, if cross-cultural moral differences were so radically different as to be incomparable to one another, we could never truly morally disagree at all; we would instead be simply “talking past” one another (compare Davidson 2001). Much of our ability to translate between the moral practices of one culture and another—an ability central to the very enterprise of comparative philosophy—presupposes that even moral differences are still recognizably moral differences at root.
In addition to accommodating or accounting for the existence of moral disagreements, metaethics has also been thought to provide some insight concerning how we should respond to such differences at the normative or political level. Most often, debates concerning the morally appropriate response to moral differences have been framed against analyses concerning the relationship between metaethics and toleration. On the one hand, tolerating practices and values with which one might disagree has been a hallmark of liberal democratic societies. Should this permissive attitude, however, be extended indiscriminately to all values and practices with which one disagrees? Are some moral differences simply intolerable, such that it would undermine one’s own moral convictions to even attempt to tolerate them? More vexingly, is it conceptually possible or desirable to tolerate the intolerance of others (a paradox sometimes referred to as the Liberal’s Dilemma)? Karl Popper (1945) famously argued against the toleration of intolerance, which he saw as an overly-indulgent extension of the concept and one which would undermine the “open society” he believed to be a prerequisite for toleration in the first place. By contrast, John Rawls (1971) has argued that toleration—even of intolerance—is a constitutive part of justice (derivable from what Rawls calls the “liberty principle” of justice), such that failure to be tolerant would entail failure to satisfy one of the requirements of justice. Rawls emphasizes, however, that genuine toleration need not lead to utopia or agreement, and that it is substantially different from a mere modus vivendi, that is, simply putting up with one another because we are powerless to do otherwise. According to Rawls, true toleration requires that we seek to bring our differences into an “overlapping consensus,” which he claims will be possible due to an inherent incompleteness and “looseness in our comprehensive views” (2001: 193).
The value of toleration is often claimed as an exclusive asset of individual metaethical theories. For example, metaethical relativists frequently argue that only by acknowledging the ultimately subjective and conventional nature of morality can we make sense of why we should not morally judge others’ values or practices—after all, according to relativism, there would be no culture-transcendent standard against which to make such judgments. For this reason, Neil Levy claims that, “The perception that relativism promotes, or is the expression of, tolerance of difference is almost certainly the single most important factor in explaining its attraction” (2002: 56). Indeed, even metaethical realists (Shafer-Landau 2004: 30-31) often observe that undergraduate endorsements of relativism seem to be motivated by an anxiety about condemning foreign practices. Despite the apparent leeway with respect to moral differences that metaethical relativism would appear to allow, several realists have argued, by contrast, that relativism could equally be as compatible with intolerance. After all, goes the argument, if nothing is objectively or universally morally wrong, then a fortiori intolerant practices cannot be said to be universally or objectively wrong either. People or cultures who do not approve of an intolerant practice would only be reflecting their own culture’s commitment to toleration (compare Graham 1996). For this reason, several metaethicists have argued that realism alone can support the commitment to toleration as a universal value—such that intolerance can be morally condemned—because only realism allows for the existence of universal, objective moral values (compare, Shafer-Landau 2004: 30-33). Nicholas Rescher (1993) expresses a related worry about what he calls “indifferentism”—a nihilistic nonchalance regarding specific ethical commitments that might be occasioned by an embrace of metaethical relativism. 
Rescher’s own solution to the potential problem of indifferentism (he calls his view “contextualism” or “perspectival rationalism”) involves the recognition of the reasons-giving nature of circumstances, such that different situations may supply their own “local” justifications for particular political or moral commitments.
The question of which metaethical theory—realism or relativism—can lay better claim to toleration, however, has been complicated by reflection on what “toleration” truly involves and whether it is always, in fact, a moral value. Andrew Cohen (2004), for instance, has argued that “toleration” by definition must involve some negative evaluation of the practice or value that is tolerated. Thus, on this analysis, it would seem that one may only tolerate that which one finds intolerable. This has led philosophers such as Bernard Williams (1996) to question whether toleration—understood as requiring moral disapproval—is even possible, let alone whether it is truly a moral value itself. (For more discussion on toleration, see Heyd 1996.) In a related vein, Richard Rorty (1989) has argued that what a society finds intolerable is itself morally constitutive of that society’s identity, and that recognition of the metaethical contingency of one’s particular social tolerance might itself provide an important sense of political “solidarity.” For these reasons, other philosophers have considered alternative understandings of toleration that might be more amenable to particular metaethical theories. David B. Wong (2006: 228-272), for example, has developed an account of what he calls accommodation, according to which even relativists may still share a higher-order commitment to the need for different practices and values to be arranged in such a way as to minimize social and political friction.
- Adams, Robert. (1987). The Virtue of Faith and Other Essays in Philosophical Theology. Oxford University Press.
- Altham, J.E.J. (1986). “The Legacy of Emotivism,” in Macdonald & Wright, eds. Fact, Science, and Morality. Oxford University Press, 1986.
- Appiah, Kwame Anthony. (2008). Experiments in Ethics. Harvard University Press.
- Audi, Robert. (1999). “Moral Knowledge and Ethical Pluralism,” in Greco and Sosa, eds. Blackwell Guide to Epistemology, 1999, ch. 6.
- Ayer, A.J. (1936). Language, Truth and Logic. Gollancz Press.
- Benedict, Ruth. (1934). “Anthropology and the Abnormal,” Journal of General Psychology 10: 59-79.
- Beyleveld, Deryck. (1992). The Dialectical Necessity of Morality. University of Chicago Press.
- Bhabha, Homi. (1994). The Location of Culture. Routledge Press.
- Blackburn, Simon. (1984). Spreading the Word. Oxford University Press.
- Blackburn, Simon. (1993). Essays in Quasi-Realism. Oxford University Press.
- Blair, Richard. (1995). “A Cognitive Developmental Approach to Morality: Investigating the Psychopath,” Cognition 57: 1-29.
- Bloomfield, Paul. (2001). Moral Reality. Oxford University Press.
- Blum, Lawrence. (1991). “Moral Perception and Particularity,” Ethics 101 (4): 701-725.
- Boas, Franz. (1911). The Mind of Primitive Man. Free Press.
- Boisvert, Daniel. (2008). “Expressive-Assertivism,” Pacific Philosophical Quarterly 89 (2): 169-203.
- Boyd, Richard. (1988). “How to be a Moral Realist,” in Essays on Moral Realism, ed. Geoffrey Sayre-McCord. Cornell University Press 1988, ch. 9.
- Boylan, Michael. (2004). A Just Society. Rowman & Littlefield Publishers.
- Boylan, Michael, ed. (1999). Gewirth: Critical Essays on Action, Rationality, and Community. Rowman & Littlefield Publishers.
- Brink, David. (1989). Moral Realism and the Foundations of Ethics. Cambridge University Press.
- Calhoun, Cheshire and Solomon, Robert, eds. (1984). What Is An Emotion? Oxford University Press.
- Chisholm, Roderick. (1981). The First Person: An Essay on Reference and Intentionality. University of Minnesota Press.
- Cohen, Andrew. (2004). “What Toleration Is,” Ethics 115: 68-95.
- Copp, David. (2007). Morality in a Natural World. Cambridge University Press.
- Daly, Mary. (1978). Gyn/Ecology: The Metaethics of Radical Feminism. Beacon Press.
- Dancy, Jonathan. (2006). Ethics without Principles. Oxford University Press.
- Dancy, Jonathan. (2000). Practical Reality. Oxford University Press.
- Darwall, Stephen. (2006). “How Should Ethics Relate to Philosophy?” in Metaethics after Moore, eds. Terry Horgan & Mark Timmons. Oxford University Press 2006, ch.1.
- Darwall, Stephen. (1995). The British Moralists and the Internal ‘Ought’. Cambridge University Press.
- Davidson, Donald. (2001). Inquiries into Truth and Interpretation. Clarendon Press.
- DeLapp, Kevin. (2009). “Les Mains Sales Versus Le Sale Monde: A Metaethical Look at Dirty Hands,” Essays in Philosophy 10 (1).
- DeLapp, Kevin. (2009). “The Merits of Dispositional Moral Realism,” Journal of Value Inquiry 43 (1): 1-18.
- DeLapp, Kevin. (2007). “Moral Perception and Moral Realism: An ‘Intuitive’ Account of Epistemic Justification,” Review Journal of Political Philosophy 5: 43-64.
- Derrida, Jacques. (1996). The Gift of Death. University of Chicago Press.
- Divers, John and Miller, Alexander. (1994). “Why Expressivists about Value Should Not Love Minimalism about Truth,” Analysis 54 (1): 12-19.
- Dreier, James. (2004). “Meta-ethics and the Problem of Creeping Minimalism,” Philosophical Perspectives 18: 23-44.
- Doris, John. (2002). Lack of Character. Cambridge University Press.
- Dworkin, Ronald. (1996). “Objectivity and Truth: You’d Better Believe It,” Philosophy and Public Affairs 25 (2): 87-139.
- Firth, Roderick. (1952). “Ethical Absolutism and the Ideal Observer Theory,” Philosophy and Phenomenological Research 12: 317-345.
- Flanagan, Owen. (1991). Varieties of Moral Personality. Harvard University Press.
- Foot, Philippa. (2001). Natural Goodness. Clarendon Press.
- Foot, Philippa. (1972). “Morality as a System of Hypothetical Imperatives,” Philosophical Review 81 (3): 305-316.
- Foucault, Michel. (1984). The Foucault Reader, ed. Paul Rabinow. Pantheon Books.
- Geach, Peter. (1960). “Ascriptivism”, Philosophical Review 69: 221-225.
- Geach, Peter. (1965). “Assertion”, Philosophical Review 74: 449-465.
- Geertz, Clifford. (1973). “Thick Description: Toward an Interpretative Theory of Culture,” in The Interpretation of Cultures: Selected Essays. Basic Books, 1973: 3-30.
- Gewirth, Alan. (1980). Reason and Morality. University of Chicago Press.
- Gibbard, Alan. (1990). Wise Choices, Apt Feelings. Harvard University Press.
- Gilligan, Carol. (1982). In a Different Voice. Harvard University Press.
- Gordon, John-Stewart, ed. (2009). Morality and Justice: Reading Boylan’s A Just Society. Lexington Books.
- Gowans, Christopher, ed. (1987). Moral Dilemmas. Oxford University Press.
- Graham, Gordon. (1996). “Tolerance, Pluralism, and Relativism,” in David Heyd, ed. Toleration: An Elusive Virtue. Princeton University Press, 1996: 44-59.
- Greenspan, Patricia. (1995). Practical Guilt: Moral Dilemmas, Emotions, and Social Norms. Oxford University Press.
- Haidt, Jonathan and Graham, Jesse. (2007). “When Morality Opposes Justice: Conservatives Have Moral Intuitions and Liberals May Not Recognize,” Social Justice Research 20 (1): 98-116.
- Haidt, Jonathan and Joseph, Craig. (2004). “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues,” Daedalus: 55-66.
- Hare, R.M. (1982). Moral Thinking. Oxford University Press.
- Harman, Gilbert. (2009). “Guilt-Free Morality,” Oxford Studies in Metaethics 4: 203-214.
- Harman, Gilbert. (1977). The Nature of Morality. Oxford University Press.
- Harman, Gilbert. (1985). “Is There A Single True Morality?” in David Copp and David Zimmerman, eds. Morality, Reason and Truth. Rowman & Littlefield, 1985: 27-48.
- Harman, Gilbert. (1975). “Moral Relativism Defended,” Philosophical Review 85 (1): 3-22.
- Heinaman, Robert, ed. (1995). Aristotle and Moral Realism. Westview Press.
- Herskovits, Melville. (1952). Man and His Works. A.A. Knopf.
- Heyd, David, ed. (1996). Toleration: An Elusive Virtue. Princeton University Press.
- Hooker, Brad and Little, Margaret, eds. (2000). Moral Particularism. Oxford University Press.
- Horgan, Terence and Timmons, Mark. (1991). “New Wave Moral Realism Meets Moral Twin Earth,” Journal of Philosophical Research 16: 447-465.
- Hudson, W.D. (1967). Ethical Intuitionism. St. Martin’s Press.
- Hume, David. (1740). A Treatise on Human Nature. L.A. Selby-Bigge, ed. Oxford University Press, 2e (1978).
- Hurka, Thomas. (2003) “Moore in the Middle,” Ethics 113 (3): 599-628.
- Jackson, Frank and Pettit, Philip. (1995). “Moral Functionalism and Moral Motivation,” Philosophical Quarterly 45: 20-40.
- James, William. (1896). “The Will to Believe,” in The Will to Believe and Other Essays in Popular Philosophy. Dover Publishers, 1956.
- Joyce, Richard. (2001). The Myth of Morality. Cambridge University Press.
- Kalderon, Mark, ed. (2005). Moral Fictionalism. Clarendon Press.
- Kasulis, Thomas. (2002). Intimacy or Integrity: Philosophy and Cultural Difference. University of Hawai’i Press.
- Kjellberg, Paul and Ivanhoe, Philip, eds. (1996). Essays on Skepticism, Relativism, and Ethics in the Zhuangzi. SUNY Press.
- Knobe, Joshua and Nichols, Shuan, eds. (2008). Experimental Philosophy. Oxford University Press.
- Korsgaard, Christine. (1996). The Sources of Normativity. Cambridge University Press.
- Kramer, Matthew. (2009). Moral Realism as a Moral Doctrine. Wiley-Blackwell Publishers.
- Levy, Neil. (2002). Moral Relativism: A Short Introduction. Oneworld Publications.
- Lovibond, Sabina. (1983). Realism and Imagination in Ethics. Minnesota University Press.
- MacIntyre, Alasdair. (1988). Whose Justice? Which Rationality? Notre Dame Press.
- MacIntyre, Alasdair. (1984). After Virtue, 2e. Notre Dame Press.
- Mackie, J.L. (1977). Ethics: Inventing Right and Wrong. Penguin Books.
- Markus, H.R. and Kitayama, S. (1991). “Culture and the Self: Implications for Cognition, Culture, and Motivation,” Psychological Review 98: 224-253.
- McCrae, R.R. and John, O.P. (1992). “An Introduction to the Five-Factor Model and Its Applications,” Journal of Personality 60: 175-215.
- McDowell, John. (1985) “Values and Secondary Qualities,” in Morality and Objectivity, ed. Ted Honderich. Routledge (1985): 110-29.
- McDowell, John. (1978). “Are Moral Requirements Hypothetical Imperatives?” Proceedings of the Aristotelian Society, supp. Vol. 52: 13-29.
- McNaughton, David. (1988). Moral Vision. Blackwell Publishing.
- Miller, J.G. and Bersoff, D.M. (1992). “Culture and Moral Judgment: How Are Conflicts between Justice and Interpersonal Relationships Resolved?” Journal of Personality and Social Psychology 62: 541-554.
- Moody-Adams, Michele. (1997). Fieldwork in Familiar Places. Harvard University Press.
- Moore, G.E. (1903). Principia Ethica. Cambridge University Press.
- Murdoch, Iris. (1970). The Sovereignty of the Good. Routledge and Kegan Paul Press.
- Neu, Jerome. (2000). A Tear is an Intellectual Thing. Oxford University Press.
- Nichols, Shaun. (2004). “After Objectivity: An Empirical Study of Moral Judgment,” Philosophical Psychology 17: 5-28.
- Nielsen, Kai. (1997). Why Be Moral? Prometheus Books.
- Nussbaum, Martha. (1999). Sex and Social Justice. Oxford University Press.
- Nussbaum, Martha. (1986). The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy. Cambridge University Press.
- Okin, Susan Moller. (1999). Is Multiculturalism Bad for Women? Princeton University Press.
- Plato. Republic, trans. G.M.A. Grube, in The Complete Works of Plato, ed. John Cooper. Hackett 1997.
- Plato. Gorgias, trans. Donald Zeyl, in The Complete Works of Plato, ed. John Cooper. Hackett 1997.
- Platts, Mark. (1991). Moral Realities: An Essay in Philosophical Psychology. Routledge Press.
- Putnam, Hilary. (1981). Reason, Truth, and History. Cambridge University Press.
- Rachels, James. (1986). “The Challenge of Cultural Relativism,” in Rachels, The Elements of Moral Philosophy. Random House (1999): 20-36.
- Railton, Peter. (1986). “Moral Realism,” Philosophical Review 95: 163-207.
- Ramsey, Frank. (1927). “Facts and Propositions,” Aristotelian Society Supplementary Vol. 7: 153-170.
- Rawls, John. (2001). Justice As Fairness: A Restatement. Belknap Press.
- Rawls, John. (1971). A Theory of Justice. Belknap Press.
- Regan, Tom. (1986). Bloomsbury’s Prophet. Temple University Press.
- Rescher, Nicholas. (1993). Pluralism: Against the Demand for Consensus. Clarendon Press.
- Ridge, Michael. (2006). “Ecumenical Expressivism: Finessing Frege,” Ethics 116 (2): 302-336.
- Rorty, Richard. (1989). Contingency, Irony, and Solidarity. Cambridge University Press.
- Ross, W.D. (1930). The Right and the Good. Oxford University Press.
- Rottshaefer, William. (1999). “Moral Learning and Moral Realism: How Empirical Psychology Illuminates Issues in Moral Ontology,” Behavior and Philosophy 27: 19-49.
- Ryle, Gilbert. (1968). “What is Le Penseur Doing?” in Collected Papers 2 (1971): 480-496.
- Said, Edward. (1978). Orientalism. Vintage Books.
- Sayre-McCord, Geoffrey. (1985). “Coherence and Models for Moral Theorizing,” Pacific Philosophical Quarterly 66:
- Scanlon, Thomas. (1995) “Fear of Relativism,” in Virtues and Reasons, eds. Hursthouse, Lawrence, Quinn. Oxford University Press (1995): 219-245.
- Schroeder, Mark. (2008). “What is the Frege-Geach Problem?” Philosophy Compass 3 (4): 703-720.
- Schueler, G.F. (1988). “Modus Ponens and Moral Realism,” Ethics 98: 492-500.
- Selby-Bigge, L.A., ed. (1897). The British Moralists of the Eighteenth-century. Clarendon Press.
- Shafer-Landau, Russ. (2004). Whatever Happened to Good and Evil? Oxford University Press.
- Shafer-Landau, Russ. (2003). Moral Realism: A Defense. Oxford University Press.
- Smith, Michael. (1994). The Moral Problem. Blackwell Publishers.
- Smith, Michael. (1994). “Why Expressivists about Value Should Love Minimalism about Truth,” Analysis 54 (1): 1-11.
- Spence, Edward. (2006). Ethics within Reason: A Neo-Gewirthian Approach. Lexington Books.
- Steigleder, Klaus. (1999). Grundlegung der normativen Ethik: Der Ansatz von Alan Gewirth. Alber Publishers.
- Stephen, Leslie. (1947). English Literature and Society in the Eighteenth Century. Reprinted by University Press of the Pacific, 2003.
- Stevenson, C.L. (1944). Ethics and Language. Yale University Press.
- Stocker, Michael. (1990). Plural and Conflicting Values. Oxford University Press.
- Spivak, Gayatri Chakravorty. (1988). “Can the Subaltern Speak?” in Marxism and the Interpretation of Culture, eds. C. Nelson and L. Grossberg. Macmillan Books, 1988: 271-313.
- Stratton-Lake, Philip, ed. (2002). Ethical Intuitionism: Re-Evaluations. Oxford University Press.
- Sturgeon, Nicholas. (1988). “Moral Explanations,” in Essays on Moral Realism, ed. GeoffreySayre-McCord. Cornell University Press 1988, ch. 10.
- Sturgeon, Nicholas. (1986). “Harman on Moral Explanations of Natural Facts,” Southern Journal of Philosophy 24: 69-78.
- Suckiel, Ellen Kappy. (1982). The Pragmatic Philosophy of William James. Notre Dame Press.
- Sumner, William Graham. (1906) Folkways. Ginn Publishers.
- Tännsjö, Torbjörn. (1990). Moral Realism. Rowman & Littlefield Publishers.
- Timmons, Mark. (1996). “A Contextualist Moral Epistemology,” in Sinnott-Armstrong, ed. Moral Knowledge? Oxford University Press, 1996.
- Wiggins, David. (1976). “Truth, Invention, and the Meaning of Life,” in Wiggins, Needs, Values, Truth, 3e. Oxford University Press, 2002: 87-138.
- Williams, Bernard. (1996). “Toleration: An Impossible Virtue?” in David Heyd, ed. Toleration: An Elusive Virtue. Princeton University Press, 1996: 28-43.
- Williams, Bernard. (1993). Shame and Necessity. University of California Press.
- Williams, Bernard. (1985). Ethics and the Limits of Philosophy. Harvard University Press.
- Williams, Bernard. (1979). “Internal and External Reasons,” in Rational Action, ed. Ross Harrison. Cambridge University Press, 1979: 17-28.
- Williams, Bernard. (1965). “Ethical Consistency,” Proceedings of the Aristotelian Society, suppl. Vol. 39: 103-124.
- Wong, David B. (2006). Natural Moralities: A Defense of Pluralistic Relativism. Oxford University Press.
- Wong, David B. (2000). “Harmony, Fragmentation, and Democratic Ritual,” in Civility, ed. Leroy S. Rouner. University of Notre Dame Press, 2000: 200-222.
- Wong, David B. (1984). Moral Relativity. University of California Press.
- Wright, Crispin. (1992). Truth and Objectivity. Harvard University Press.
- Fisher, Andrew and Kirchin, Simon, eds. (2006). Arguing about Metaethics. Routledge Press.
- Harman, Gilbert and Thomson, J.J. (1996). Moral Relativism and Moral Objectivity. Blackwell Publishers.
- Miller, Alexander. (2003). An Introduction to Contemporary Metaethics. Polity Press.
- Moser, Paul and Carson, Thomas, eds. (2001). Moral Relativism: A Reader. Oxford University Press.
- Sayre-McCord, Geoffrey, ed. (1988). Essays on Moral Realism. Cornell University Press.
- Shafer-Landau, Russ, ed. (2001-2010). Oxford Studies in Metaethics, Vol. 1-5. Oxford University Press.
With these worksheets your students will:
*see number and number word
*trace and write the number
*fill in a ten/twenty frame for that number
*find that number among other numbers
*find the number on a number line
**Each area has an "I can" statement to go with it.
These worksheets are for each individual number from 0-20. There are two sheets provided for numbers 1 and 10-19. One set has the “fancy” one and the other set has a normal one. I teach my kinders both and let them pick which one they want to write. Your school system may prefer one over the other, and with this packet you have both options.
If you are under an evaluation model that requires partner/group work, the box with the different numbers works great for my kinders. They work with a partner and find the number on their friend’s paper. The friend circles it and then returns the favor. Then they finish the rest independently while I walk around to question and monitor progress.
For more ideas and freebies check out my blog - Lovin’ Kindergarten
Thanks for looking!
Astronauts aboard the International Space Station have manufactured their first tool using the 3D printer on board the station. This is another step in the ongoing process of testing and using additive manufacturing in space. The ability to build tools and replacement parts at the station is something NASA has been pursuing keenly.
The first tool printed was a simple wrench. This may not sound like ground-breaking stuff, unless you’ve ever been in the middle of a project only to find you’re missing a simple tool. A missing tool can stop any project in its tracks, and change everybody’s plans.
The benefits of manufacturing needed items in space are obvious. Up until now, every single item needed on the ISS had to be sent up via re-supply ship. That’s not a quick turnaround. Now, if a tool is lost or destroyed during normal use, a replacement can be quickly manufactured on-site.
This isn’t the first item to be printed at the station. The first one was printed back in November 2014. That item was a replacement part for the printer itself. This was important because it showed that the machine can be used to keep itself running. This reliability is key if astronauts are going to be able to rely on the printer for manufacturing critical replacements for components and spare parts.
Niki Werkheiser, the project manager for the ISS 3D printer, said in a NASA YouTube video, “Since the inception of the human space program, we have been completely dependent on launching every single thing we need from Earth to space … I think we’re making history for the first time ever being able to make what we need when we need it in space.”
The 3D printer, which is more accurately called an Additive Manufacturing Facility (AMF), was built by a company called Made In Space. The one used to make the first tool is actually different from the one used to make the replacement part for the printer itself. The first one was part of a test in 2014 to see how 3D printing would work in microgravity. It printed several items, which were returned to Earth for testing. Those tests went well, which led to the second one being sent to the station.
This second machine, which was used to create the wrench, is a much more fully featured, commercial 3D printer. According to Made In Space, this newer AMF “can be accessed by any Earth-bound customer for job-specific work, like a machine shop in space. Example use cases include a medical device company prototyping space-optimized designs, or a satellite manufacturer testing new deployable geometries, or creating tools for ISS crew members.”
This is exciting news for us space enthusiasts, but even more exciting for a certain engineering student from the University of Alabama. The student, Robert Hillan, submitted a tool design to a NASA competition called the Future Engineers Space Tool design competition. The challenge was to design a tool that could be used successfully by astronauts in space. The catch was that the tool design had to be uploaded to the ISS electronically and printed by the AMF on the station.
In January, Hillan was announced as the winner. His design? The Multipurpose Precision Maintenance Tool, a kind of multi-tool that handy people are familiar with. The tool allows astronauts to tighten and loosen different sizes of nuts and bolts, and to strip wires.
NASA astronaut Tim Kopra, who is currently aboard the ISS, praised both Hillan and the 3D printing technology itself. “When you have a problem, it will drive specific requirements and solutions. 3-D printing allows you to do a quick design to meet those requirements. That’s the beauty of this tool and this technology. You can produce something you hadn’t anticipated and do it on short notice.”
The immediate and practical benefits of AMF in space are obvious and concrete. But like a lot of space technologies, it is part of a larger picture, too.
Werkheiser, NASA’s project manager for the ISS 3D printer, said “If a printer is critical for explorers, it must be capable of replicating its own parts, so that it can keep working during longer journeys to places like Mars or an asteroid. Ultimately, one day, a printer may even be able to print another printer.”
So there we have it. A journey to Mars and printers replicating themselves. Bring it on.
There are a million things that we do every day without thinking. Brushing our teeth, drying our hair after a shower, and unlocking our phone screen so we can check our messages are all part of our routine. But what takes place in the brain as we learn a new habit?
What's something you've learned to do without thinking? It might be locking the door behind you as you leave, which could lead to some panic later as you wonder if you actually remembered to do it.
It might be driving to work. Have you ever had that uncanny experience of finding yourself at your destination without fully remembering how you got there? I certainly have, and it's all thanks to the brain's trusty autopilot mode.
Habits drive our lives — so much so that sometimes we might want to break the habit, as the saying goes, and experience something new.
But habits are a useful tool; when we do something enough times, we become effortlessly good at it, which is perhaps why Aristotle reportedly believed that "excellence [...] is not an act but a habit."
So, what does habit formation look like in the brain? How do our neural networks behave as we learn something and consolidate it into an effortless behavior through repetition?
These are the questions that Ann Graybiel and her colleagues from the Massachusetts Institute of Technology in Cambridge, MA, set out to answer in a recent study, the findings of which are published in the journal Current Biology.
'Bookending' Neural Signals
Although a habitual action seems so simple and effortless, it actually typically involves a string of small necessary movements — such as unlocking the car, getting into it, adjusting the mirrors, securing the seatbelt, and so on.
This complex set of movements that amount to one routine action that we perform unconsciously is called "chunking," and although we know that it exists, exactly how "chunks" form and stabilize has remained mysterious so far.
The new study now suggests that some brain cells are tasked with "bookending" the chunks that correspond to habitual actions.
Working with mice, the team noted that the patterns of signals transmitted between neurons in the striatum shifted as the animals were taught a new sequence of actions — turning in one direction at a sound signal while navigating a maze — which then evolved into a habit.
At the beginning of the learning process, the neurons in the mice's striata emitted a continuous string of signals, the scientists saw, but as the mice's actions started to consolidate into habitual movements, the neurons fired their distinctive signals only at the beginning and at the end of the task performed.
When a signaling pattern takes root, explain Graybiel and colleagues, a habit has taken shape and breaking it becomes a difficult endeavor.
Brain Patterns that Indicate Habits
Although edifying, Graybiel's previous efforts did not establish for certain that the signaling patterns observed in the brain were related to habit formation. They could simply have been motor commands that regulated the mice's running behavior.
In order to confirm the idea that the patterns corresponded to the chunking associated with habit formation, Graybiel and her current team devised a different set of experiments. In the new study, they set out to teach rats to press two levers repeatedly in a specific order.
The researchers used reward conditioning to motivate the animals. If they pressed the levers in the correct sequence they were offered chocolate milk.
To ensure that there would be no doubt regarding the solidity of the experiment's results — and that they would be able to identify brain activity patterns related to habit formation rather than anything else — the scientists taught the rats different sequences.
Sure enough, once the animals had learned to press the levers in the sequence established by their trainers, the team noticed the same "bookending" pattern in the striatum: sets of neurons would fire signals at the beginning and end of a task, thus delimitating a "chunk."
"I think," explains Graybiel, "this more or less proves that the development of bracketing patterns serves to package up a behavior that the brain — and the animals — consider valuable and worth keeping in their repertoire."
"It really is a high-level signal that helps to release that habit, and we think the end signal says the routine has been done."
Finally, the team also noted the formation of another — complementary — pattern of activity in a group of inhibitory brain cells called "interneurons" in the striatum.
"The interneurons," explains lead study author Nuné Martiros, of Harvard University in Cambridge, MA, "were activated during the time when the rats were in the middle of performing the learned sequence."
She adds that the interneurons "could possibly be preventing the principal neurons from initiating another routine until the current one was finished."
"The discovery of this opposite activity by the interneurons," Martiros concludes, "also gets us one step closer to understanding how brain circuits can actually produce this pattern of activity."
Key Stage 1
The teaching focuses on spoken language, with an introduction to written language, and lays the foundations for further foreign language teaching. It enables pupils to understand and communicate ideas, facts and feelings in speech, focused on familiar and routine matters, using their knowledge of phonology, grammatical structures and vocabulary. Its focus is practical communication.
Pupils will be taught to:
Begin to have a basic understanding of grammar appropriate to the language being studied, including (where relevant): feminine, masculine and neuter forms and the conjugation of high-frequency verbs; key features and patterns of the language; how to apply these, for instance, to build sentences; and how these differ from or are similar to English.
When you’re thinking about learning to code, the language you decide to pick up first has a lot to do with what you’re trying to learn, what you want to do with the skill, and where you want to go from there. These ideas are represented as a collection of the simplest elements available (called primitives). Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment.

If your goal in learning how to program is to increase your job opportunities, and you are not going to be dissuaded by how hard people say a language is going to be, here are some pointers to help you figure out which language you should learn.

Some languages are defined by a specification document (for example, the C programming language is specified by an ISO standard), while other languages (such as Perl) have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common.
A “Microcell” is a low-powered cellular radio access node with a range of 10 metres to a few kilometres, used to increase a cellular network’s capacity.
“Microcell” is the name industry and government in Canada initially used for what is called a “Small Cell” in the United States. Recently, Canada has adopted the term “Small Cell” but alas – this site and its acronym had already been born! Technically “Small Cell” is the umbrella term for femtocells, microcells and picocells.
The term “Small Cell” is regularly misused, even by industry. Accurately speaking, a cell is not an antenna, but the effective area/range of an antenna’s radiation. Telecoms plan to install hundreds of thousands of microcells, often several to a block, to transmit the millimetre wave high frequencies of 5G. Millimetre waves have a short range and don’t propagate well, which is why industry needs to install a lot of small cells for 5G to “work.”
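The propagation penalty of millimetre waves can be sketched with the standard free-space path-loss formula, FSPL(dB) = 20 log10(4πdf/c). This toy calculation (my own illustration; it ignores rain fade, foliage and building losses, which hit millimetre waves even harder) shows why higher frequencies call for a denser grid of antennas:

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Friis free-space path loss in dB, assuming isotropic antennas."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare a low-band frequency with a millimetre-wave 5G frequency
# over the same 200 m path (values chosen for illustration only).
low_band = free_space_path_loss_db(200, 700e6)  # ~75 dB
mm_wave = free_space_path_loss_db(200, 28e9)    # ~107 dB

print(f"700 MHz at 200 m: {low_band:.1f} dB")
print(f"28 GHz at 200 m:  {mm_wave:.1f} dB")
print(f"Extra loss at 28 GHz: {mm_wave - low_band:.1f} dB")
```

The roughly 32 dB gap means the 28 GHz signal arrives with about 1/1,600 of the power of the 700 MHz signal over the same path, before any obstruction losses are counted.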
5G will not replace 3G and 4G, but will join them. Because 5G is a mix of frequencies, 5G antennas are also being placed on macrocells (“Large Cell” towers) to transmit low- to mid-band 5G. Despite what we are told, when it comes to antennas and transmitters, distance is not our friend. The farther the source, the greater the reflection and refraction of signals. The physics of artificial electromagnetic radiation show that any antenna is too close. Exposing all living things to toxic wireless technology is damaging our ecosystems and causing serious biological harm.
A Verizon 5G Data Box arrives without notice on a Houston lawn, 2021.
Ectrodactyly is an uncommon congenital malformation of the hand in which the middle digit is missing and the hand is cleft where the metacarpal of the finger should be. This cleft gives the hands the appearance of lobster claws, and ectrodactyly is also known as Karsch-Neugebauer syndrome. It has been called lobster-claw syndrome because the hands of those affected can appear claw-like. This rare syndrome has many forms; one of the most common, Type I, is associated with a specific region of human chromosome 7 that contains two homeobox genes, DLX5 and DLX6. These genes are similar to a gene in insects called distal-less that controls limb development. When this gene is defective in the fruit fly, the distal part of the insect limb is missing. Ectrodactyly may be present alone or may be part of a number of birth defects. Hand deformation alone is unlikely to affect health.

Ectrodactyly has several types, all of them hereditary, although inheritance of the condition occurs rarely. Those who have ectrodactyly, or have children with the condition, are at increased risk of passing it on to subsequent children. In this condition, the middle finger or middle toe is missing, and the two fingers or toes on either side of the missing digit are fused together. This has often led to ectrodactyly being called lobster-claw hands, or lobster-claw syndrome, because the hand deformity bears some resemblance to the claws of a lobster. It often occurs in both the hands and the feet. In earlier times, a person diagnosed with ectrodactyly usually ended up joining a circus sideshow, turning his deformities to his advantage. Geneticists have found the condition to occur in both human and worm populations because of the mutated chromosome.

The diagnosis of ectrodactyly syndrome can be complex because of the overlap of its symptoms with other ectodermal dysplasia syndromes. Currently there are several treatments that can normalize the appearance of the hands, yet the hands will not function precisely the same way as normally formed hands. The prognosis for most individuals with ectrodactyly syndrome is very good. Some people with ectrodactyly use prosthetic hands to avoid the rude stares of others. Ectrodactyly is an inherited condition that can be treated surgically to improve function and appearance. Early physical and occupational therapy can help those with ectrodactyly adapt, and learn to write, pick things up, and be fully functional. Genetic findings could have great implications for the clinical diagnosis and treatment not only of ectrodactyly but also of many other related syndromes.
Speech choirs are performance groups that recite speeches in unison, often with elements of choreography and costuming to help bring the speech to life. Much like musical choirs, dynamic (volume) range, expression and accurate coordination of syllables are all important for a successful performance. Speech choirs date back to ancient Greece, where they were an integral part of most plays.
A speech choir is typically the same size as a singing choir, having anywhere from 12 to 100 members or more. However, most schools and competitions feature choirs of 25 to 40 members. The choirs typically are divided into groups based on the members' natural speaking voices. Females with naturally high voices or young females comprise the "light" group, females with deeper voices and young males or males with high voices comprise the "medium" group and males with deep voices comprise the "dark" group.
Selections are typically poems or poetic passages, such as from Greek dramas or Shakespeare's plays. The conductor gives some thought to the passage, breaking it into parts that, for example, only the "light" voices recite or strong passages that are voiced by all the members. Facial expressions and intonation are also carefully planned, so all the members can practice in unison. Solo parts for specific members can add dramatic effect.
Choreography of movement is not a necessary component for a speech choir. Many successful competition choirs recite their pieces while standing in place with their hands at their sides, attention directed solely at the conductor. However, in the Greek tradition, speech choirs marched from side to side in alternating patterns called "strophe" and "antistrophe." Thus, movement is part of the rich history of speech choir, and some conductors choose to choreograph elaborate movement to accompany their pieces.
As with any other performance art, thought should be put into how the speech choir will dress. Costumes can be as simple as matching outfits or robes, such as a vocal choir would wear, or elaborate theatrical garb. Plain uniforms allow the audience to concentrate on facial expressions and allow the choir to recite several very different pieces in one performance. However, a themed costume for a single piece can highlight its meaning or help to differentiate between voice groups. |
An actively feeding black hole surrounds itself with a disk of hot gas and dust that flickers like a campfire. Astronomers have now found that monitoring changes in those flickers can reveal something that is notoriously hard to measure: the behemoth’s heft.
“It’s a new way to weigh black holes,” says astronomer Colin Burke of the University of Illinois at Urbana-Champaign. What’s more, the method could be used on any astrophysical object with an accretion disk, and may even help find elusive midsize black holes, researchers report in the Aug. 13 Science.
It’s not easy to measure a black hole’s mass. For one thing, the dark behemoths are notoriously difficult to see. But sometimes black holes reveal themselves when they eat. As gas and dust falls into a black hole, the material organizes into a disk that is heated to white-hot temperatures and can, in some cases, outshine all the stars in the galaxy combined.
Measuring the black hole’s diameter can reveal its mass using Einstein’s general theory of relativity. But only the globe-spanning Event Horizon Telescope has made this sort of measurement, and for only one black hole so far (SN: 4/22/19). Other black holes have been weighed via observations of their influence on the material around them, but that takes a lot of data and doesn’t work for every supermassive black hole.
So, looking for another way, Burke and colleagues turned to accretion disks. Astronomers aren’t sure how black holes’ disks flicker, but it seems like small changes in light combine to brighten or dim the entire disk over a given span of time. Previous research had hinted that the time it takes a disk to fade, brighten and fade again is related to the mass of its central black hole. But those claims were controversial, and didn’t cover the full range of black hole masses, Burke says.
So he and colleagues assembled observations of 67 actively feeding black holes with known masses. The behemoths spanned sizes from 10,000 to 10 billion solar masses. For the smallest of these black holes, the flickers changed on timescales of hours to weeks. Supermassive black holes with masses between 100 million and 10 billion solar masses flickered more slowly, every few hundred days.
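To make the idea concrete, here is a small, hedged sketch of how such a scaling relation can be extracted: if the flicker timescale follows a power law in mass, the exponent falls out of a straight-line fit in log-log space. The data below are synthetic stand-ins, not the study's 67 objects, and the assumed exponent of 0.5 is purely illustrative.

```python
import math

# Synthetic flicker timescales (days) versus mass (solar masses), generated
# from an assumed power law t = 0.1 * (M / 1e4) ** 0.5 -- illustration only.
masses = [1e4, 1e6, 1e8, 1e10]
timescales = [0.1 * (m / 1e4) ** 0.5 for m in masses]

# Least-squares slope of log10(timescale) against log10(mass), stdlib only.
xs = [math.log10(m) for m in masses]
ys = [math.log10(t) for t in timescales]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
print(round(slope, 2))  # recovers the assumed exponent, 0.5
```

With real light curves the timescale itself must first be estimated from noisy brightness measurements, which is where most of the work lies.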
“That gives us a hint that, okay, if this relation holds for small supermassive black holes and big ones, maybe it’s sort of a universal feature,” Burke says.
Out of curiosity, the team also looked at white dwarfs, the compact corpses of stars like the sun, which are some of the smallest objects to sport consistent accretion disks. Those white dwarfs followed the same relationship between flicker speed and mass.
The analyzed black holes didn’t cover the entire possible range of masses. Known black holes that are from about 100 to 100,000 times the mass of the sun are rare. There are several potential candidates, but only one has been confirmed (SN: 9/2/20). In the future, the relationship between disk flickers and black hole mass could tell astronomers exactly what kind of disk flickers to look for to help bring these midsize beasts out of hiding, if they’re there to be found, Burke says.
Astrophysicist Vivienne Baldassare of Washington State University in Pullman studies black holes in dwarf galaxies, which may preserve some of the properties of ancient black holes that formed in the early universe. One of the biggest challenges in her work is measuring black hole masses. The study’s “super exciting results … will have a large impact for my research, and I expect many others as well,” she says.
The method offers a simpler way to weigh black holes than any previous technique, Burke says — but not necessarily a faster one. More massive black holes, for example, would need hundreds of days, or possibly years, of observations to reveal their masses.
Upcoming observatories are already planning to take that kind of data. The Vera C. Rubin Observatory is expected to start observing the entire sky every night beginning in 2022 or 2023 (SN: 1/10/20). Once the telescope has been running long enough, the observations needed to weigh black holes “will fall out for free” from the Rubin Observatory data, Burke says. “We’re already building it. We may as well do this.”
The intricate process of normal eye development, which occurs during the first trimester of pregnancy, involves several genes. When mutations occur in these genes, a serious eye condition can develop. Ophthalmic genetics is a much-needed and rapidly expanding field around the world. Ethnic variety, along with a high degree of consanguinity in some populations, has contributed to a heavy global burden of genetic disorders. Inherited retinal disease (IRD) is the most common cause of blindness in people of working age. Molecular diagnosis has been sped up thanks to advances in molecular genetic approaches, such as focused gene panel analysis and the use of next-generation sequencing methodologies. Likewise, developments in ocular imaging and visual function testing have enhanced our understanding of natural history, which is critical for evaluating treatment outcomes in clinical trials of potential IRD therapies.
The Venus’ flower basket (Euplectella aspergillum) is a deep ocean sponge with fascinating properties and an unusual symbiotic relationship with a pair of crustaceans. We call it a romantic get-away inside a sponge.
The Venus’ flower basket is classified as a glass sponge because its body is made of silica, which is chemically the same as glass. The silica fibers are woven together to make a hollow, cylindrical vase-like structure. The fibers form a fine mesh which is rigid and strong enough to survive deep underwater. The picture shows a Venus’ flower basket more than 8400 feet (2572 meters) under the ocean’s surface.
Glassy fibers thin as a human hair but more flexible and sturdier than human-made optical fibers attach the sponge to the ocean floor. The sponge forms the fibers at ocean temperatures while human-made glass fibers require high-temperature furnaces to melt the glass. Human-made fibers are brittle while the sponge’s fibers are more flexible. Scientists are studying these sponges to find ways to make better fiber-optic cables.
We think it’s amazing that the Venus’ flower basket lights its fibers using bioluminescence to attract prey. Even more interesting to us is the symbiotic relationship these sponges have with some crustaceans called Stenopodidea. The Venus’ flower basket holds captive two of those small shrimp-like creatures, one male and one female, inside the sponge’s hollow mesh tube. The captive creatures clean the flower basket by eating the tiny organisms attracted by the sponge’s light and consume any waste the sponge leaves. The sponge provides the crustaceans with protection from predators.
As the crustaceans spawn, their offspring are small enough to escape from the basket and find their own sponge-home where they grow until they are trapped. Because a pair of crustaceans spend their lives together inside the sponge, Asian cultures sometimes use a dried Venus’ flower basket as a wedding gift to symbolize “till death do us part.”
The Venus’ flower basket and the crustaceans benefit each other by mutual cooperation, which we call symbiosis. One more thing: the bioluminescence comes from bacteria that the sponge collects. This amazing three-way partnership occurs deep under the ocean, where humans have only recently explored. We think this romantic get-away inside a sponge is further evidence of Divine design, not chance mutations.
— Roland Earnst © 2019
Intended to assist with Y6 comprehension activities, this pack includes a set of three text extracts, with questions and accompanying illustrations, from the following books:
- Alice in Wonderland by Lewis Carroll
- Dracula by Bram Stoker
- The Hound of the Baskervilles by Arthur Conan Doyle
This download contains one 17-page illustrated PDF featuring three different comprehension challenges, each themed around a different classic text, and a separate answer sheet. Teachers can print off the specific text and questions they require, and use the resulting worksheet to provide a quick burst of comprehension practice – ideal for morning work, short reading sessions or for sparking interest in a particular classic text.
Each challenge includes a brief 300- to 400-word extract from a classic text, alongside a range of reading-related questions that focus on the key skills of inference, information retrieval and vocabulary use.
National Curriculum English programme of study links
Pupils should be taught to participate in discussion about what is read to them, taking turns and listening to what others say, [and] explain clearly their understanding of what is read to them.
This article originally appeared and was published on AOL.com
We may be closer than we thought to uncovering the possibility of life on Mars, after a new study released by NASA suggests that the Red Planet once hosted living organisms, and might still.
The study reveals that the planet’s early atmosphere had much lower carbon dioxide levels than needed to keep it warm enough for liquid water to last. But this finding is in contrast to an older theory that high atmospheric levels of carbon dioxide were responsible for heating up the planet to allow water to flow.
Evidence of lake-like features identified in the Gale Crater from NASA’s Curiosity rover suggests that Mars’ surface was once home to river deposits and lake beds – which planetary scientists trace to the presence of water billions of years ago.
But scientists also wonder how a “young sun” — shining weaker than today — could heat the planet’s atmosphere enough to allow water to remain in liquid form.
“This leads to the question of how the surface of ancient Mars, faintly heated by a young sun, was kept warm enough to allow an active hydrological cycle, without substantial amounts of a key greenhouse gas in the atmosphere,” the authors stated in the study.
So far, scientists have outlined two possibilities to explain the recent discovery: either their climate models are missing a key element that prompted Mars’ surface to heat, or the evidence gathered from hypothesized lake-like features in the Gale Crater was not actually caused by liquid water.
Still, results from the rover’s exploration within the slopes of the planet — including mudstone, siltstone, and sandstone — suggest a lake bed was present less than 4 billion years ago during Mars’ believed wet period.
“The watery environments that once occupied the floor of Gale Crater look like they were pretty hospitable to life — not too hot, not too cold, not too acid, not too alkaline, and the water probably was not too salty,” said study lead author Thomas Bristow, a planetary scientist at NASA’s Ames Research Center in Moffett Field, California.
But with life found virtually everywhere on Earth where water is present, scientists agree that these findings raise the possibility of life on Mars.
Ampere and Ohm
Andre Marie Ampere, a French mathematician who devoted himself to the study of electricity and magnetism, was the first to explain the electrodynamic theory. A permanent memorial to Ampere is the use of his name for the unit of electric current.
Georg Simon Ohm, a German mathematician and physicist, was a college teacher in Cologne when, in 1827, he published "The Galvanic Circuit Investigated Mathematically". His theories were coldly received by German scientists, but his research was recognized in Britain and he was awarded the Copley Medal in 1841. His name has been given to the unit of electrical resistance, the ohm.
The possibility that electricity does not consist of a smooth, continuous fluid probably occurred to many scientists. Even Franklin once wrote that the "fluid" consists of "particles extremely subtile." Nevertheless, a great deal of evidence had to be accumulated before the view was accepted that electricity comes in tiny, discrete amounts, looking not at all like a fluid when viewed microscopically. James Clerk Maxwell opposed this particle theory. Toward the end of the 1800's, however, the work of Sir Joseph John THOMSON (1856-1940) and others proved the existence of the ELECTRON.
Thomson had measured the ratio of the electron's charge to its mass.
Then in 1899 he inferred a value for the electronic charge itself by observing the behavior of a cloud of tiny charged water droplets in an electric field. This observation led to Millikan's famous Oil-Drop Experiment. Robert MILLIKAN, a physicist at the University of Chicago, with the assistance of his student Harvey Fletcher, sought to measure the charge of a single electron, an ambitious goal in 1906. A tiny droplet of oil with an excess of a few electrons was formed by forcing the liquid through a device similar to a perfume atomizer. The drop was then, in effect, suspended, with an electric field attracting it up and the force of gravity pulling it down. By determining the mass of the oil drop and the value of the electric field, the charge on the drop was calculated. The result: the electron charge, e, is negative and has the value e = 1.60 × 10⁻¹⁹ coulombs.
This charge is so small that a single copper penny contains more than 10,000,000,000,000,000,000,000 electrons.
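That penny claim is easy to sanity-check. The sketch below is a back-of-the-envelope estimate assuming a pure-copper penny of about 2.5 grams (a simplification for illustration; modern US pennies are mostly zinc):

```python
# Back-of-the-envelope check of the penny claim, assuming a pure-copper
# penny of about 2.5 g (a simplification: modern US pennies are mostly zinc).
AVOGADRO = 6.022e23      # atoms per mole
CU_MOLAR_MASS = 63.55    # grams per mole of copper
CU_ATOMIC_NUMBER = 29    # electrons per neutral copper atom

mass_g = 2.5
atoms = mass_g / CU_MOLAR_MASS * AVOGADRO
electrons = atoms * CU_ATOMIC_NUMBER
print(f"{electrons:.1e} electrons")  # ≈ 6.9e+23, well over 10^22
```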
Bulk matter is normally neutral. The tendency is for every positive proton in an atom to be electrically balanced against a negative electron, and the sum is as close to zero as anyone has been able to measure. In 1911, Ernest RUTHERFORD proposed the nuclear ATOM. He suggested that electrons orbit a positively charged nucleus less than 10⁻¹⁴ meters in diameter, just as planets orbit the Sun. Rutherford also suggested that the nucleus is composed of PROTONS, each having a charge +e. This view of matter, still considered correct in many ways, established the electrical force as that which holds an atom together. After Rutherford presented his atom, the Danish physicist Niels BOHR proposed that the electrons have only certain orbits about the nucleus, that other orbits are not possible.
The 19th century saw great strides in the advancement of electrical knowledge. Almost all the systems we use today for electrical production and transmission of power had been developed by its end: oil-filled capacitors (like the line transformers next to your house), 3-phase power (220 volt), high-voltage transmission lines and more, many of them invented by Tesla.
Children and fair play
Playing fair is about learning and using the rules of the game and putting them into practice – whether they’re special family rules for card or board games, or the rules at Saturday sport.
Fair play is also about learning social rules, like cooperating, taking turns, being polite, solving problems and being flexible.
Playing fair helps children enjoy the experience of playing together. It’s also an important part of getting along with others. And when children get along well with others, it gives them a sense of belonging and helps them grow and thrive.
Helping your child with fair play: tips
You can use the following tips to help children of any age learn about fair play and enjoying the game.
- Consider the age of your child: children can learn about fair play more easily when the game is suitable for their age. For example, children younger than 6-7 years find it hard to understand formal rules. Simple games that give each child a turn can work well for younger children – for example, ‘snakes and ladders’. Short waiting times can help too.
- Give your child the chance to play lots of different games: the more experience and practice the better. Try board and ball games, competitive games of skill like chess, competitive games of chance, and cooperative games like charades. Even make-believe games can help children practise taking turns.
- Find a range of playmates: it’s good for your child to play with children who are older or younger. For example, your child can look after younger children and maybe show them the rules. Older children can also be good role models for younger children.
- Go over the rules of the game: before the game starts, make sure everyone knows the rules. You might also need to gently remind children of the rules as you play.
- Introduce some social rules: these could be rules about taking turns and congratulating other people when they win.
- Encourage children to have a say in the rules: if you’re playing a game with flexible or made-up rules, ask children what the rules should be. For example, ‘If the ball goes out of bounds, what do you think should happen?’ Children who feel they’ve had a say in the rules are more likely to follow them.
- Give feedback: praise your child for sharing, taking turns and other examples of playing fair. Point out what your child did well. For example, ‘I thought it was great the way you shook hands with the other team at the end of the game’.
Children learn about fair play by watching what you say and do. Following the rules, accepting referee decisions and being a good sport yourself all set a great example for your children. You can be a good role model on the sidelines too by saying things like, ‘Better luck next time’, ‘Good try’ or ‘Well played’.
Fair play and competition
Competition can be good for children.
When children compete against each other, the game becomes a challenge and motivates children to do their best. This can improve skills, encourage discipline and focus, and make children feel good about their achievements.
Competition also increases the desire to win. And that’s when children can sometimes find it hard to play fair. Because they want to win, they might challenge rules and other players. Some might get into arguments with their teammates and even start cheating.
Competition works best when there are clear, fair and age-appropriate rules that everyone understands and agrees to follow before the game starts. It’s also good if children are all at the same skill level.
Here are some questions that can help you work out whether a competitive game will be a good experience for your child:
- Is the game suitable for your child’s age? If not, modify the game to suit your child’s age or let him know he can play it when he’s older.
- Does your child have an opportunity to win? If not, switch to a game of chance where your child has the same chance of winning as all the other players.
- Is the opponent playing fair? Sometimes you might need to step in and remind the players of the rules.
What about competitive sport? Children deal better with competition as they get older. If your child is younger and is interested in trying a sport, you could look for modified sports like Cricket Blast, Aussie Hoops basketball, NetSetGO netball, Come and Try Rugby, and Auskick football.
When children aren’t playing fair
Here are some ideas for those times when your child is finding it tough to play fair:
- Take your child out of the game and talk calmly and clearly about what you expect. Let her know what she can do to play fairly. For example, ‘The rules say that you can only have one throw each turn. It’s important that everyone follows the rules’. You can also let her know that it’s hard but important to play fairly – this might help her control her feelings.
- If your child keeps behaving the same way or if it gets worse, deal with his behaviour. You might have to take him out of the game, and talk with him later when he calms down.
- Talk with your child about her feelings of frustration and what she should do next time. Before your child plays the next game, you could try setting up some ground rules. For example, ‘If you complain about the rules, I’ll stop you from playing the game’.
- Remind your child that games are about having fun, not about winning or losing.
- If your child is boasting about winning, try praising him for his efforts in other areas – for example, for cooperating with others, sharing or being helpful.
Winning and losing
It’s not about winning or losing – it’s about how you play the game. When your child understands this, she’ll be a ‘good sport’ and have fun playing, no matter whether she wins or loses.
Winning is a great feeling, and it’s OK for your child to feel proud of being the winner. It’s also important for your child to be a good winner. This means showing sympathy and support to the losing team or player. If you can, try to discourage your child from boasting. Instead you can highlight the fun that everyone had playing the game.
Sometimes it’s hard to turn losing into a good experience for your child. But emphasising how well your child played is really important in helping him handle bad feelings. Praise your child’s efforts. For example, ‘You were great at helping the younger kids’ or ‘You followed the rules really well’.
Children – and even adults – find it easier to lose in a game of luck than in a game of skill. This is because losing a game of chance doesn’t say anything about you or your abilities. If your child is having difficulty dealing with losing, try playing games of chance first, then build up to skill-based activities.
Some games of chance include ‘snakes and ladders’ or ‘snap’.
Games of skill include Connect 4, chess and Pick-up sticks.
It’s tempting to let your child win. It can keep her interested in the game and boost her confidence. You can let young children win from time to time, especially if they’re playing against older people. But letting your child win all the time can make it harder for her to learn that she won’t always win in the real world. It might make real winning less satisfying. |
Digital radiography (digital x-ray) is the latest technology used to take dental x-rays. This technique uses an electronic sensor (instead of x-ray film) that captures and stores the digital image on a computer. This image can be instantly viewed and enlarged, helping the dentist and dental hygienist detect problems more easily. Digital x-rays reduce radiation 80-90% compared to the already low exposure of traditional dental x-rays.
Dental x-rays are essential, preventative, diagnostic tools that provide valuable information not visible during a regular dental exam. Dentists and dental hygienists use this information to safely and accurately detect hidden dental abnormalities and complete an accurate treatment plan. Without x-rays, problem areas may go undetected.
Dental x-rays may reveal:
Abscesses or cysts.
Cancerous and non-cancerous tumors.
Decay between the teeth.
Poor tooth and root positions.
Problems inside a tooth or below the gum line.
Detecting and treating dental problems at an early stage may save you time, money, unnecessary discomfort, and your teeth!
Are dental x-rays safe?
We are all exposed to natural radiation in our environment. Digital x-rays produce a significantly lower level of radiation compared to traditional dental x-rays. Not only are digital x-rays better for the health and safety of the patient, they are faster and more comfortable to take, which reduces your time in the dental office. Also, since the digital image is captured electronically, there is no need to develop the x-rays, thus eliminating the disposal of harmful waste and chemicals into the environment.
Even though digital x-rays produce a low level of radiation and are considered very safe, dentists still take necessary precautions to limit the patient’s exposure to radiation. These precautions include only taking those x-rays that are necessary, and using lead apron shields to protect the body.
How often should dental x-rays be taken?
The need for dental x-rays depends on each patient’s individual dental health needs. Your dentist and dental hygienist will recommend necessary x-rays based upon the review of your medical and dental history, a dental exam, signs and symptoms, your age, and risk of disease.
A full mouth series of dental x-rays is recommended for new patients. A full series is usually good for three to five years. Bite-wing x-rays (x-rays of top and bottom teeth biting together) are taken at recall (check-up) visits and are recommended once or twice a year to detect new dental problems.
Photo: National Mango Board
As low fruit and vegetable consumption continues to contribute to diet-related chronic diseases like diabetes and heart disease, two new research studies find regular mango consumption may improve diets and help manage key risk factors that contribute to chronic disease.
Specifically, these new studies report findings in two areas: 1) mango consumption is associated with better overall diet quality and intake of nutrients that many children and adults lack at optimum levels, and 2) snacking on mangos may improve glucose control and reduce inflammation in contrast to other sweet snacks. With mangos consumed widely in global cuisines and 58% of Americans reporting snacking at least once a day in 2021 [1], this new research provides added evidence that regularly consuming mangos may have health advantages and be relevant to cultural dietary preferences and current eating patterns.
A recent observational study found positive outcomes in nutrient intakes, diet quality, and weight-related health outcomes in individuals who consume mangos versus those who do not [2]. The study, published in Nutrients in January 2022, used United States National Health and Nutrition Examination Survey (NHANES) 2001-2018 data to compare the diets and nutrient intakes of mango consumers to people who did not consume mangos.
The study showed that children who regularly ate mango had higher intakes of immune-boosting vitamins A, C, and B6, as well as fiber and potassium. Fiber and potassium are two of the four "nutrients of concern" as defined by the Dietary Guidelines for Americans, which means many Americans are not meeting recommendations for these. In adults, researchers found similar results, showing that mango consumption was associated with significantly greater daily intakes of fiber and potassium but also vitamins A, B12, C, E, and folate, a vitamin critical during pregnancy and fetal development. For both children and adults, consuming mango was associated with a reduced intake in sodium and sugar, and for adults was associated with a reduced intake of cholesterol.
"We have known for a long time that there is a strong correlation between diet and chronic disease," says Yanni Papanikolaou, researcher on the project. "This study reveals that both children and adults eating mangos tend to have significantly better diet quality overall along with higher intakes of fiber and potassium compared with those who don't eat mangos. It is also important that mango fits into many diverse cuisines. Whole fruits are under consumed, and mango can encourage fruit consumption especially among growing diverse populations."
In addition to these broad benefits of mango consumption, a separate pilot study, published in Nutrition, Metabolism & Cardiovascular Diseases in 2022, looked at mango as a snack and found that consuming whole mangos as a snack versus a control snack had better health outcomes in overweight and obese adults [3]. Given that 97% of American adults consume snacks that contribute up to 24% of their daily energy intake [4], this study sought to compare snacking on 100 calories of fresh mango daily to snacking on low-fat cookies that were equal in calories.
Twenty-seven adults participated in the study, all classified as overweight or obese based on Body Mass Index (BMI) and reported no known health conditions. Participants were given either mango or low-fat cookies as a snack while maintaining their usual diet and physical level for 12 weeks, and after a four-week wash-out period the alternating snack was given for another 12 weeks. Researchers measured the effects on glucose, insulin, lipid profiles, liver function enzymes, and inflammation. At the end of the trial period, findings indicated that mango consumption improved glycemic control (an individual's ability to manage blood glucose levels, an important factor in preventing and managing diabetes) and reduced inflammation.
Results showed there was no drop in blood glucose when participants snacked on low-fat cookies. However, when snacking on mangos there was a statistically significant (p = 0.004) decrease in blood glucose levels at four weeks and again at 12 weeks, even though there was twice as much sugar, naturally occurring, in the mangos compared to the cookies. Researchers also observed statistically significant improvements to inflammation markers, total antioxidant capacity (TAC) and C-reactive protein (CRP), when snacking on mangos. TAC is a measurement of overall antioxidant capacity, or how well foods can prevent oxidation in cells. CRP is a biomarker used to measure inflammation in the body. The research suggests that the antioxidants abundant in mangos offered more protection against inflammation compared to the cookies.
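As a rough illustration of the statistics involved, here is a hedged sketch of how a paired t-statistic is computed for a crossover design like this one, where each participant is measured under both snack conditions. The glucose numbers below are invented for illustration; they are not the study's data.

```python
import math
import statistics

# Hypothetical fasting glucose (mg/dL) for the same 8 people under each snack.
glucose_cookie = [98, 104, 101, 110, 95, 99, 107, 103]
glucose_mango = [92, 99, 97, 103, 94, 95, 100, 98]

# Paired analysis: work with within-person differences.
diffs = [c - m for c, m in zip(glucose_cookie, glucose_mango)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)            # sample standard deviation
t_stat = mean_d / (sd_d / math.sqrt(n))   # paired t-statistic, df = n - 1
print(round(t_stat, 2))  # ≈ 7.04 for this made-up data
```

Comparing that statistic against the t-distribution with n − 1 degrees of freedom gives the p-value; a real analysis would use a statistics package and check the test's assumptions.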
“The findings of this study show that antioxidants, fiber, and polyphenols abundant in mango may help to offset sugar consumption and aid in glucose control. Antioxidants may also offer protection against inflammation,” says Dr. Mee Young Hong, lead investigator on the study and Professor in the School of Exercise and Nutritional Sciences at San Diego State University. “Further research is needed but the initial findings are encouraging for people who enjoy sweet snacks.”
Some limitations in this study include sample size, using only one dose of mango, and measuring effects on participants without any pre-existing conditions. Further research should explore optimal dose of mango and examine the long-term effects of mango consumption on those with metabolic conditions. It would also be of benefit to compare mango to a fiber-matched control snack to distinguish the effects of fiber versus the bioactive compounds in mangos.
With only 99 calories and over 20 different vitamins and minerals, a 1-cup serving of mango is nutrient-dense, making it a superfood. Because mangos are widely consumed in cultures around the world and in the United States, research into their health benefits contributes to a better understanding of their place in a healthy diet.
Both studies were supported by funds from the National Mango Board.
The Canvas widget is one of the most versatile widgets in Tkinter. It is used to create illustrations, draw shapes, arcs, and images, and build other complex layouts in an application. To create a Canvas widget, call its constructor, Canvas(root, **options).
You can use the factory functions to create text, images, arcs and define other shapes in the canvas. In some cases, if you want to create another canvas using the same canvas to keep the application workflow consistent, then you can create a button to call an event that creates another canvas.
To understand this, let us create a canvas and a button to open another canvas to update the primary canvas widget.
# Import required libraries
from tkinter import *
from tkinter import ttk

# Create an instance of tkinter window
win = Tk()
win.geometry("700x350")

def open_new_win():
    # Open a new top-level window that holds a second canvas
    top = Toplevel(win)
    canvas1 = Canvas(top, height=180, width=300, bg="#aaaffe")
    canvas1.pack()
    Label(canvas1, text="You can modify this text", font='Helvetica 18 bold').pack()

# Create the primary canvas widget
canvas = Canvas(win, height=400, width=300)
canvas.pack()

# Create a button widget that opens the second canvas window
button = ttk.Button(canvas, text="Open Window", command=open_new_win)
button.pack(pady=30)

win.mainloop()
Running the above code will display a window with a button to open another canvas window.

When you click the button, a new window opens containing a second canvas with a label whose text can be modified.
To be marginalised is to be forced to occupy the sides or fringes and not to be at the centre of things. In the social environment too, groups of people or communities may be excluded. Reasons for marginalisation include speaking different languages, following different customs, belonging to religious groups different from the majority community, being poor, being considered of ‘low’ social status, and being viewed as less human than others. Chapter 7 of CBSE Class 8 Civics will help students understand these concepts well. The best source to revise the subject is the CBSE Notes Class 8 Civics Chapter 7, Understanding Marginalisation.
Marginalised groups are viewed with hostility and fear. This sense of difference and exclusion means that these communities do not have access to resources and opportunities and are unable to assert their rights. As a result, they experience a sense of disadvantage and powerlessness vis-a-vis the more powerful and dominant sections of society, who own land, are wealthy, better educated and politically powerful. Marginalisation is thus seldom experienced in only one sphere. Economic, social, cultural and political factors work together to make certain groups in society feel marginalised.
Students can download the PDF Version of the CBSE Class 8 Notes from Chapter 7 of Civics from the link below:
Who Are Adivasis?
Tribals are also referred to as Adivasis. The term ‘Adivasi’ literally means ‘original inhabitants’: communities who lived, and continue to live, in close association with forests. About 8% of India’s population is Adivasi, and most of the country’s mining and industrial centres are located in Adivasi areas, such as Jamshedpur, Rourkela, Bokaro and Bhilai. Adivasis are not a homogeneous population: there are over 500 different Adivasi groups in India. They are numerous in states like Chhattisgarh, Jharkhand, Madhya Pradesh, Odisha, Gujarat, Maharashtra, Rajasthan, Andhra Pradesh and West Bengal, and in the north-eastern states of Arunachal Pradesh, Assam, Manipur, Meghalaya, Mizoram, Nagaland and Tripura. Odisha alone is home to 60 different tribal groups. Adivasis are distinctive because there is often very little hierarchy among them, and this makes them radically different from communities organised around the principles of jati-varna (caste) or those that were ruled by kings.
Adivasis practise a range of tribal religions that are different from Islam, Hinduism and Christianity. These often involve the worship of ancestors, village spirits and nature spirits, the last associated with, and residing in, various sites in the landscape: ‘mountain-spirits’, ‘river-spirits’, ‘animal-spirits’, etc. The village spirits are worshipped at specific sacred groves within the village boundary, while the ancestral ones are worshipped at home. Adivasis have been influenced by different surrounding religions such as Shakta, Buddhist, Vaishnav, Bhakti and Christianity. Adivasi religions have, in turn, influenced the dominant religions of the empires around them; examples are the Jagannath cult of Odisha and the Shakti and Tantric traditions of Bengal and Assam. During the 19th century, substantial numbers of Adivasis converted to Christianity, which has emerged as a very important religion in modern Adivasi history. Adivasis also have their own languages (most of them radically different from, and possibly as old as, Sanskrit), which have often deeply influenced the formation of ‘mainstream’ Indian languages such as Bengali. Santhali has the largest number of speakers and has a significant body of publications, including magazines on the internet and e-zines.
Adivasis and Stereotyping
Adivasis are portrayed in very stereotypical ways – in colourful costumes, headgear and through their dancing. Besides this, we seem to know very little about the realities of their lives. This wrongly leads to people believing they are exotic, primitive and backward.
Adivasis and Development
- Forests covered a major part of our country until the 19th century
- Adivasis had a deep knowledge of, access to, as well as control over most of these vast tracts at least till the middle of the nineteenth century. They were not ruled by large states and empires. Instead, often empires heavily depended on Adivasis for the crucial access to forest resources.
- In the pre-colonial world, they were traditionally ranged hunter-gatherers and nomads and lived by shifting agriculture and also cultivating in one place. For the past 200 years, Adivasis have been increasingly forced – through economic changes, forest policies and political force applied by the State and private industry – to migrate to lives as workers in plantations, at construction sites, in industries and as domestic workers. For the first time in history, they do not control or have much direct access to the forest territories.
- From the 1830s onwards, Adivasis from Jharkhand and adjoining areas moved in very large numbers to various plantations in India and around the world, including Mauritius, the Caribbean and even Australia. India’s tea industry became possible only with their labour in Assam; today, there are 70 lakh Adivasis in Assam alone. These migrations, however, came at a great cost: in the 19th century alone, 5 lakh Adivasis perished in them.
Forestlands were cleared for timber and to obtain land for agriculture and industry. Adivasis also lived in areas that are rich in minerals and other natural resources, and these were taken over for mining and other large industrial projects. Powerful forces often collude to take over tribal land forcefully, and official procedures are not followed.
According to official figures, over 50% of persons displaced due to mines and mining projects are tribals. Another recent survey report by organisations working among Adivasis shows that 79% of the persons displaced from the states of Andhra Pradesh, Chhattisgarh, Odisha and Jharkhand are tribals. Huge tracts of their lands have also gone under the waters of hundreds of dams that have been built in independent India.
In the Northeast, their lands remain highly militarised. India has 104 national parks covering 40,501 sq km and 543 wildlife sanctuaries covering 1,18,918 sq km. These are areas where tribals originally lived but were evicted from. When they continue to stay in these forests, they are termed, encroachers. Losing their lands and access to the forest means that tribals lose their main sources of livelihood and food. Having gradually lost access to their traditional homelands, many Adivasis have migrated to cities in search of work where they are employed for very low wages in local industries or at building or construction sites.
Adivasis are thus caught in situations of poverty and deprivation: 45% of tribal groups in rural areas and 35% in urban areas live below the poverty line. This leads to deprivation in other spheres as well, such as malnourished tribal children and low literacy rates. When Adivasis are displaced from their lands, they lose much more than a source of income; they lose their traditions and customs, a way of living and being. As you have read, there exists an interconnectedness between the economic and social dimensions of tribal life; destruction in one sphere naturally impacts the other. Often this process of dispossession and displacement is painful and violent.
Minorities and Marginalisation
The Constitution provides safeguards to religious and linguistic minorities as part of our Fundamental Rights. Why have these minority groups been provided with such safeguards? The term minority refers to communities that are numerically small in relation to the rest of the population. However, the concept goes well beyond numbers, encompassing issues of power and access to resources, and it has social and cultural dimensions as well.
The culture of the majority influences the way in which society and government express themselves. In such cases, size can be a disadvantage and can result in the marginalisation of relatively smaller communities. Safeguards therefore protect minority communities against being culturally dominated by the majority, and also protect them against any discrimination and disadvantage. Communities that are small in number relative to the rest of society may feel insecure about their lives, assets and well-being, and this sense of insecurity may get accentuated if relations between the minority and majority communities are fraught. The Constitution provides these safeguards because it is committed to protecting India’s cultural diversity and to promoting equality as well as justice. The judiciary plays a crucial role in upholding the law and enforcing Fundamental Rights: every citizen of India can approach the courts if they believe that their Fundamental Rights have been violated.
Muslims and Marginalisation
Muslims make up 14.2% of the Indian population (2011 Census). They are considered a marginalised community because they have been deprived of the benefits of socio-economic development over the years. Since Muslims were lagging behind in terms of various development indicators, the government set up a high-level committee in 2005, chaired by Justice Rajindar Sachar. The committee examined the social, economic and educational status of the Muslim community in India. Its report discusses the marginalisation of this community in detail, and suggests that on a range of social, economic and educational indicators the situation of the Muslim community is comparable to that of other marginalised communities like the Scheduled Castes and Scheduled Tribes.
The economic and social marginalisation experienced by Muslims has other dimensions as well. Like other minorities, Muslims have distinct customs and practices that are set apart from what is seen as the mainstream. Some may wear a burqa, sport a long beard or wear a fez, and such markers come to be used to identify all Muslims. As a result, they tend to be identified differently, and some people think they are not like the ‘rest of us’, causing them to be treated unfairly and discriminated against. This social marginalisation of Muslims has led to them migrating from places where they have lived, often leading to the ghettoisation of the community. Sometimes, this prejudice leads to hatred and violence.
Marginalisation is a complex phenomenon that requires a variety of strategies, measures and safeguards to redress. All of us have a stake in protecting the rights defined in the Constitution and the laws and policies framed to realise these rights. Without these, we will never be able to protect the diversity that makes our country unique, nor realise the State’s commitment to promoting equality for all.
More and more, children and adolescents are growing up online. Gone are the days when playing outside was the most popular form of entertainment. Time is now monopolized by tablets, smartphones and gaming consoles, with an increasing number of kids fixated on virtual worlds, unaware of the dangers of virtual reality.
Virtual reality is a computer-generated immersive environment that can be similar to the real world. But most times VR is fantastical, creating an experience not possible in physical reality. With virtual reality equipment—usually headsets—the user can see around the artificial world, move around it, and interact with virtual features or items.
In a technology-driven world, virtual reality games and devices are becoming common fixtures in many American homes today. Without extensive studies on how it will affect the physical and emotional well-being of children, parents and educators struggle to assess the possible effects of the new technology on their children.
While many parents have high hopes that the emerging technology will have educational benefits due to its highly engaging nature, a study conducted by Common Sense Media shows that 60% of parents worry about the health effects of virtual reality.
Early research on the impact of virtual reality errs on the side of caution regarding its use by young children. Stanford researchers partnered with Common Sense Media to perform extensive research on children’s media use, examining the impact of VR on children. The study found that virtual reality is likely to have a powerful impact on young children, who may have a hard time separating VR fantasies from reality. On the positive side, the study also found that the vividness of virtual reality could make it an effective teaching tool.
Stanford professor Jeremy Bailenson, one of the authors of the report, believed that since virtual reality is a very compelling medium, people can learn from it.
Gretchen Walkier, vice president of learning at San Jose’s Tech Museum, is of the opinion that technology can help children experiment. She believes that VR can give children a full body experience by letting them design an environment and have them walk through a 3-D model of it, making it a powerful tool for visual learning.
About 62% of parents who participated in the study believe that virtual reality can provide educational experiences but only 22% of them reported that their children actually use virtual reality for learning.
Despite the perceived educational benefits of virtual reality, and the fact that 70% of U.S. children are interested in VR, parents are having a hard time adopting the technology. Only 21% of households with children have a VR device and 13% have plans to get one.
Is it Time to Pause?
Parents are not comfortable with the idea of kids playing with something that could pose harm. Warnings about dizziness, nausea, headaches, and bumping into things make it clear that these effects are not good for children and adolescents.
As it is, units are sold with warnings requiring a large space to move around in. VR units come with a chaperone system for protection. Even for adults, this may be inadequate protection, as users are blind to the real world while they play in VR. Without clearing space, a user is vulnerable to falling, tripping, hitting their head on something, and injuring their arms and legs. It is necessary that someone watches the user while he or she wears the VR headset.
Professor Bailenson, who founded Stanford’s Virtual Human Interaction Lab, acknowledged that the long-term effects of virtual reality on the developing brain are not yet known, but the short-term impacts can include dizziness, eyestrain, and headache.
Of particular concern is the increase in nearsightedness. A study showed that 40% of people had myopia in 2000, compared to only 25% in the 1970s. The use of tablets, laptops, cell phones, and now VR devices contributes to the lengthening of the eye and potentially causes myopia. Motion sickness is another concern: there have been cases where using 3D glasses to watch 3D movies triggered vertigo. Inadequate device resolution and processing power can also lead to nausea while using VR devices.
Technically, VR tricks the brain into thinking an object is far away, even though the screen is near the eyes. There is therefore a disconnect between the way the eye focuses and the perceived distance to the object. Scientists don’t know what effect this will have on the brain or the eyes.
When using a cellphone, the user looks at the device for a minute or two and then gazes away from it; there is no long, continuous staring at the device. In contrast, a VR user stares at the device for many minutes at a time. Most devices advise that the user take a 15-minute break for every 30 minutes of wearing the VR headset. However, this advice does not have any scientific basis. Marientina Gotsis, Associate Professor of Research at the University of Southern California’s Interactive Media and Games Division, says that long-term use may have cumulative effects which are still undiscovered. She says that eyestrain is a signal that something may be wrong and advises the user to stop playing.
Virtual Reality Explosion
Despite the reasons for pause, the International Data Corporation made a worldwide forecast that virtual reality and augmented reality spending will accelerate over the next several years, reaching a total of $143.3 billion in 2020. The current figures stand at $13.9 billion in 2017, compared to only $6.1 billion spent in 2016. Statistics on the number of users below the age of 18 are inadequate: manufacturers and vendors only count the number of units sold, for both VR units and titles. Drilling the information down to the number of actual users and their age demographics would require an in-depth survey. However, there is evidence that a sizable number of users are below the age of 18.
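As a rough sanity check on the scale of that forecast, the implied annual growth rate can be computed directly from the IDC figures quoted above (the figures come from the article itself; the calculation is only an illustration):

```python
# Implied compound annual growth rate (CAGR) from the figures above:
# $13.9B in 2017 growing to a forecast $143.3B in 2020, i.e. over 3 years.

def cagr(start_value, end_value, years):
    """Constant annual growth rate that takes start_value to end_value."""
    return (end_value / start_value) ** (1 / years) - 1

growth_2016_2017 = 13.9 / 6.1 - 1   # actual single-year growth, ~128%
implied = cagr(13.9, 143.3, 3)      # forecast growth rate, 2017 -> 2020
print(f"{implied:.0%}")             # → 118%
```

In other words, the forecast assumes spending more than doubles every year, roughly in line with the jump actually observed between 2016 and 2017.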
One important selling point of VR technology for children is that it is immersive; so immersive, in fact, that children may start to believe the experience truly happened. A 2017 study by Bailenson showed that virtual reality characters have a stronger effect on children than TV or computer game characters. A study of kids 4-6 years old playing a VR game found that they interacted with the in-game character like a friend, engaging far more than with the same character in the computer version of the game.
VR technology is here to stay. Its growing popularity means it may soon be integrated into how students are taught and learn. And while this innovative tech is certainly making a bold impact in gaming and entertainment, our society will certainly continue to watch how it affects brain development and overall good health. |
How do we determine which habitat areas are the most critical to protect?
Critical habitat includes specific areas within a species’ current range that have “physical or biological features essential to the conservation of the species,” as well as areas outside the species’ current range upon a determination “that such areas are essential for the conservation of the species.” In other words, …
What is the meaning of critical habitat?
Finally, we found that many endangered species, particularly plants and invertebrates, experience high levels of noise pollution in their critical habitat – geographic areas that are essential for their survival. …
What is a critical habitat and what does protecting it mean?
Critical habitat is a term defined and used in the Endangered Species Act. It is specific geographic areas that contain features essential to the conservation of an endangered or threatened species and that may require special management and protection.
Why are some habitat known as critical habitat?
All species have particular requirements for their ecological habitat. These specific needs are known as critical habitat, and they must be satisfied if the species is to survive. Salt licks are another critical habitat feature for many large species of mammalian herbivores. …
What is a critical habitat quizlet?
critical habitat. environment necessary to the survival of an endangered or threatened species.
Why does the critical habitat need to be protected?
Critical habitat is designed to protect the essential physical and biological features of a landscape and essential areas in the appropriate quantity and spatial arrangement that a species needs to survive and reproduce and ultimately be conserved.
What is critical habitat NSW?
(1) Critical habitat is habitat that is essential for the conservation of a viable population of protected wildlife or community of native wildlife, whether or not special management considerations and protection are required.
WHO declares critical wildlife habitat?
Based on the scientific determination of Critical Wildlife Habitats and open consultations with the forest rights holders, the Expert Committee shall submit a proposal for the Critical Wildlife Habitat to the Chief Wildlife Warden and the proposal shall be accompanied by a map, preferably on 1:50,000 scale and a …
What is a critical habitat in Florida?
Critical habitat is defined as areas that may require special management considerations or protection that are within the geographic range of a species and that contain physical or biological features essential to the conservation of the species.
What is the difference between threatened and endangered species?
Endangered species are those plants and animals that have become so rare they are in danger of becoming extinct. Threatened species are plants and animals that are likely to become endangered within the foreseeable future throughout all or a significant portion of their range.
Which is critical tiger habitat Upsc?
The correct answer is Nagarjunsagar-Srisailam. The reserve spreads over five districts, Kurnool District, Prakasam District, Guntur District, Nalgonda District, and Mahbubnagar district. The total area of the tiger reserve is 3,728 km2 (1,439 sq mi).
How can we protect our habitat?
Use non-toxic, nature-based products for household cleaning, lawn, and garden care. Never spray lawn or garden chemicals on a windy or rainy day, as they will wash into the waterways. Plant only native species of trees, shrubs, and flowers to preserve the ecological balance of local habitats, such as wetlands.
Which animal is critically endangered?
| Common name | Scientific name | Conservation status |
| --- | --- | --- |
| Hawksbill Turtle | Eretmochelys imbricata | Critically Endangered |
| Javan Rhino | Rhinoceros sondaicus | Critically Endangered |
| Orangutan | Pongo abelii, Pongo pygmaeus | Critically Endangered |
| Saola | Pseudoryx nghetinhensis | Critically Endangered |
How does Habitat Protection help?
The standard of habitat protection provides an important point of focus for those outside of government, including the scientific community, to help protect areas at least until recovery plans are developed that will clarify the needs of endangered species and provide more fully for their recovery. |
- What Is a Network? What Is Networking?
- Why Build a Network?
- How Networks Are Put Together
- The Network Architecture: Combining the Physical and Logical Components
- Two Varieties of Networks: Local and Wide Area
- How the Internet Relates to Your Network
- Connecting to the Internet
- Why the Internet Matters
- Intranets, Extranets, and internets
The Network Architecture: Combining the Physical and Logical Components
When computers are connected, we must choose a network architecture, which is the combination of all the physical and logical components. The components are arranged (we hope) in such a way that they provide us with an efficient transport and storage system for our data. The network architecture we choose dictates the physical topology and the logical arrangements of the system. For example, if I say, “I’m building a Switched Ethernet network,” this statement implies the overall architecture of my future network. Let’s now examine these physical and logical components.
The Physical Network
The physical network is easy to understand because it’s usually visible. Mainly, it consists of hardware: the wiring, plugs such as computer ports, printers, mail servers, and other devices that process and store our data. The physical network also includes the important (read: vital) signals that represent the user data. Examples are voltage levels and light pulses to represent binary images of 1s and 0s—strung together in many combinations to describe our data.
I say “usually visible” because we can’t see wireless connections. Although more ethereal than copper wire connections, wireless connections are nonetheless physical, taking the form of electromagnetic radio waves.
Quite rare only a few years ago, wireless networks such as Wi-Fi are now common. If you have a broadband connection in your home, chances are good your computer is connected to your broadband hardware device with a wireless arrangement. How we explain the layout (also called a topology) of a wireless network is no different from that of a wire-based network.
Physical Layout—Network Topologies
As mentioned, the physical aspect of the network consists of the components that support the physical connection between computers. In today’s networks, four topologies are employed: (a) star, (b) ring, (c) bus, and (d) cell. They are depicted in Figure 1.1.
Figure 1.1 Network topologies: (a) Star topology, (b) Ring topology, (c) Bus topology, (d) Cellular topology
- Star—The star topology employs a central connection point, called a router, hub, bridge, or switch. The computers on the network radiate out from this point, as seen in Figure 1.1(a). The job of the central point is to switch (relay) the users’ data between user machines and perhaps other central connection points. The terms router, hub, bridge, and switch are used interchangeably by some people. Generally, the terms hub and bridge are associated with devices of a somewhat limited capacity. The term switch has historically been associated with telephone networks (with the exception of the 1970s computer network message switches and 1980s packet switches). The term router found its way into the industry in the 1980s and is now used more frequently than the other terms. Whatever we call these machines, they manage traffic on the network and relay this traffic back and forth between our computers.
Ring—The ring topology, shown in Figure 1.1(b), connects the computers through a wire or cable. As the data (usually called a packet) travels around the ring, each computer examines a destination address in the packet header (similar in concept to a postal envelope’s “to” address) and copies the data if the computer’s address matches its address. Otherwise, the computer simply passes the packet back onto the ring to the next computer (often called the next node). When the packet arrives at the originating node, it removes the packet from the ring by not passing it on.
The ring topology is the first example of a broadcast network: Nodes in the network receive all traffic in the network. Whether a node chooses to accept the packet depends on the destination address in the packet header.
- Bus—The bus topology is shown in Figure 1.1(c). It consists of a wire with taps along its length to which computers connect. It is also a broadcast network because all nodes receive the traffic. The sending node transmits the packet in both directions on the bus. The receiving nodes copy an image of the packet if the destination address matches the address of the node. The packet rapidly propagates through the bus, where it is then “terminated” at the two ends of the bus. As you may have surmised, packets traveling along this bus may interfere with each other if the nodes relay the packets onto the bus at about the same time. The bus topology handles this situation with a collision detection procedure. A node keeps sending until it detects its transmission has occurred without interference (by checking its own transmission).
- Cellular—The cellular topology is employed in wireless networks, an arrangement shown in Figure 1.1(d). Cellular networks use broadcast protocols; all nodes (cellular phones) are capable of receiving transmissions on a control channel from a central site. A wireless control node (called the base station) uses this common channel to direct a node to lock onto a specific (user) channel for its connection. During the ongoing connection, the cell phone is simultaneously communicating with the base station with the control link and the user link.
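The ring topology's address-matching behavior described above can be sketched as a toy simulation (the node names and addresses here are invented for illustration; a real ring such as Token Ring adds framing, tokens, and error handling):

```python
# Toy simulation of a ring topology: each node inspects the packet's
# destination address, copies the payload if it matches, and otherwise
# passes the packet on. The originating node removes the packet from the
# ring when it comes back around, by not forwarding it.

def send_on_ring(nodes, src, dst, payload):
    """Pass one packet around the ring from src; return (inboxes, hops)."""
    inboxes = {n: [] for n in nodes}
    i = (nodes.index(src) + 1) % len(nodes)  # packet leaves src toward next node
    hops = 0
    while True:
        node = nodes[i]
        hops += 1
        if node == src:          # packet returned to originator: remove it
            break
        if node == dst:          # destination address matches: copy the data
            inboxes[node].append(payload)
        i = (i + 1) % len(nodes)  # pass the packet along to the next node
    return inboxes, hops

ring = ["A", "B", "C", "D"]
inboxes, hops = send_on_ring(ring, src="A", dst="C", payload="hello")
# inboxes["C"] == ["hello"]; the packet visits B, C, D, then returns to A (4 hops)
```

Note how every node on the ring sees the packet, which is exactly why the text calls this a broadcast network: delivery is decided by address matching at each node, not by steering the packet down a dedicated path.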
The Logical Network
The previous section explained the physical layout of networks, such as the star topology. In explaining how packets of user traffic are moved across these topologies, we have also explained the logical aspects of a network. Again, the logical parts of computer networks entail the invocation of software to “propel” the packets across the physical media and to receive them at the other end.
Unlike the physical network, the logical network is not visible. It uses the physical network for transport of data. We defer describing the details of the logical network here, as it is described extensively in almost every subsequent hour. |
Yet, as Lee observes, a game "should not be regarded as a marginal activity filling in odd moments when the teacher and class have nothing better to do" (1979:3).
Games help and encourage many learners to sustain their interest and work. Games also help the teacher to create contexts in which the language is useful and meaningful. They are highly motivating and entertaining, and they can give shy students more opportunity to express their opinions and feelings (Hansen 198). They also enable learners to acquire new experiences within a foreign language which are not always possible during a typical lesson. The results of this research suggest that games are used not only for mere fun but, more importantly, for the useful practice and review of language lessons, thus leading toward the goal of improving learners' communicative competence. Games are fun and children like to play them. Through games children experiment, discover, and interact with their environment.
One of the best ways of doing this is through games. There are many advantages of using games in the classroom:
1. Games help students to make and sustain the effort of learning.
2. Games provide language practice in the various skills: speaking, writing, listening and reading.
3. They encourage students to interact and communicate.
4. They create a meaningful context for language use.
Many experienced textbook and methodology manual writers have argued that games are not just time-filling activities but have great educational value. Lee also says that games should be treated as central, not peripheral, to the foreign language teaching programme.
How do OLEDs Emit Light?
OLEDs emit light in a similar manner to LEDs, through a process called electrophosphorescence.
The process is as follows:
- The battery or power supply of the device containing the OLED applies a voltage across the OLED.
- An electrical current flows from the cathode to the anode through the organic layers (an electrical current is a flow of electrons). The cathode gives electrons to the emissive layer of organic molecules. The anode removes electrons from the conductive layer of organic molecules. (This is equivalent to giving electron holes to the conductive layer.)
- At the boundary between the emissive and the conductive layers, electrons find electron holes. When an electron finds an electron hole, the electron fills the hole (it falls into an energy level of the atom that's missing an electron). When this happens, the electron gives up energy in the form of a photon of light (see How Light Works).
- The OLED emits light.
- The color of the light depends on the type of organic molecule in the emissive layer. Manufacturers place several types of organic films on the same OLED to make color displays.
- The intensity or brightness of the light depends on the amount of electrical current applied: the more current, the brighter the light. |
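Because the photon's energy is fixed by the energy gap of the emissive molecule, the emitted color follows directly from the standard relation E = hc/λ. The sketch below illustrates this conversion; the 2.25 eV example value is hypothetical, not a measurement of any real OLED material, and the color bands are only coarse approximations:

```python
# Convert a photon's energy (in electron-volts) to its wavelength in
# nanometers using E = h*c / lambda, then bucket it into a rough color band.

HC_EV_NM = 1239.84  # h*c expressed in eV*nm, so lambda_nm = 1239.84 / E_eV

def photon_wavelength_nm(energy_ev):
    return HC_EV_NM / energy_ev

def rough_color(wavelength_nm):
    # Coarse visible-spectrum bands; the boundaries are approximate.
    if wavelength_nm < 450:
        return "violet/blue"
    if wavelength_nm < 500:
        return "blue/green"
    if wavelength_nm < 570:
        return "green/yellow"
    if wavelength_nm < 620:
        return "orange"
    return "red"

# A hypothetical emissive molecule releasing 2.25 eV photons:
wl = photon_wavelength_nm(2.25)   # ≈ 551 nm
print(rough_color(wl))            # → green/yellow
```

This is why manufacturers choose different organic films for different colors: a molecule with a larger energy gap emits shorter-wavelength (bluer) light, and one with a smaller gap emits redder light.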
Biologists have described the chain of biochemical reactions by which particles of pure gold are formed on the outside of the cell membrane of the bacterium Cupriavidus metallidurans.
Heavy metals and their compounds are toxic to most living creatures, but not to C. metallidurans, which has found a way to extract useful micronutrients from compounds of heavy metals such as copper and gold. In the course of this process, grains of gold are formed, among other things. A group of German and Australian biologists has described the biochemical processes that allow the bacteria to do this; the article is published in a journal of the Royal Society of Chemistry.
Rod-shaped C. metallidurans bacteria live in soils with a high concentration of heavy metal compounds. Setting aside this toxic neighborhood, these soils are suitable for life: they are rich in bacteria-friendly salts, and there is practically no competition for resources. To live in such conditions, one only needs to learn how to process copper and gold. That C. metallidurans can do this was proved by the same scientific team in 2009, and now the scientists are ready to explain how.
Everything begins with processing on the outside of the bacterial membrane: there, copper and gold salts decompose into compounds that can penetrate the cell wall. Copper is necessary for the life of the bacteria, but its excess is dangerous. When there is too much copper in the cell, it is excreted by the enzyme CupA.
Gold grains on the surface of living C. metallidurans Credit: American Society for Microbiology
This mechanism works until it is blocked by an excess of gold compounds, which inhibit the enzyme's reaction with copper and prevent the poison from being excreted from the bacterium. Then the enzyme CopA comes into play. It turns the compounds of copper and gold back into forms that are unable to penetrate the cell membrane. The flow of metals into the cell stops, and the bacterium, using the pause, rids itself of the toxic surplus. A side effect of this process is the formation of small particles of metallic gold, a few nanometers across, on the outside of the cell wall, completely harmless to the bacterium itself.
In nature, C. metallidurans plays a key role in the formation of gold nuggets from primary gold ore formed by geological processes. Other living organisms extract bound gold from the ore and turn it into toxic complex compounds, which, through the activity of C. metallidurans, are turned back into metallic gold that is harmless to all living organisms.
Argumentative writing is designed to pose a claim and support for the claim using persuasive arguments. Such writing becomes more effective when conjunctions make the points flow smoothly. Conjunctions connect words or phrases together, making a text easier to read. Transitions function the same way -- by unifying a whole piece of writing.
Conjunctions serve as a cue within a sentence, signaling the reader that another idea is coming. Coordinating conjunctions link ideas by showing how they relate. For example, a word like "and" indicates two ideas go together. A subordinating conjunction indicates that one idea depends on another. For instance, in this sentence the word "unless" depends on the action that follows it: We will be late unless we leave now. Correlative conjunctions join elements within a sentence, indicating the two are of equal importance. The words "neither" and "nor" work this way in this sentence: I like neither carrots nor celery.
Transitions serve the same purpose as conjunctions, but on a larger scale. They signal to the reader the relationship between ideas in a paragraph or even between paragraphs. By connecting larger ideas, they let readers know what to do with the information presented to them. Indicating these connections helps reinforce the argument within a paper. Phrases like "for example" let the reader know the information that follows is meant to support an idea. Thus, the use of transitions cues readers into the writer's thinking process.
Conjunctions improve the paper as a whole by giving the writing coherence, or flow. A conjunctive adverb such as "however" or "overall" joins two complete sentences, using either a semicolon or a period. These words and phrases serve different purposes: showing agreement, opposition, causality, support or emphasis, consequence and conclusion. They work like a bridge from one of the writer's points to another. For example, "however" lets the reader know the statement that follows is in opposition to the preceding; "overall" signals a conclusion. These signals guide readers to either reflect on what came before or anticipate what is coming next in the paragraph.
Conjunctions and conjunctive adverbs unite the elements of an argument. When the argument is unified and cohesive, readers are more likely to believe what the writer is saying. Readers need a guide; without this guide, they might get lost in the argument. Readers struggling to follow a writer's thought progression become frustrated and may even stop reading the paper. When a writer takes the time to make the argument more readable, this engenders faith and goodwill in the readers. As Aristotle pointed out, creating that goodwill, what he called ethos, makes people more open to persuasion. |
- Quote when something is said in a unique way or the person saying it has authority.
- Paraphrase when you want to include all the details but there is nothing special about the person you are quoting or the way they said it.
- Summarize when you want to give the general outline, or an overview of a lot of material.
Using ideas from other people can help you show readers that your ideas are valid. Quotes, paraphrases, and summaries can give you evidence, reasons and examples to prove your own points. Remember that your ideas need to be in your own words and that you use research as support. Topic sentences and thesis sentences should always be in your own words, not ideas borrowed from someone else.
Internet Writing: When you are writing on the web, you can mention the name of the source at the beginning of your quote, paraphrase or summary and then provide a link.
School Writing: In academic writing for school, you will do three things:
1. Title and Author: In the sentence where you first use a source, you can mention the title and author and/or use a parenthetical citation (MLA and APA style) or a footnote (Chicago style) at the end of the sentence:
Example: According to Brice Tyson in his book, Dogs Have More Fun, the important thing to know is….. (Tyson 32).
2. Author Tags: If you use more than one sentence to explain the ideas from that source, you can use author tags to let the reader know where those ideas come from.
Example (author tags in bold): Tyson disagrees with people who think that dogs do everything for the pleasure of their owners. Instead, his book states, the dog gets gratification by…. This fervent dog lover insists that…
3. Source List: You will also need a Works Cited or Bibliography list at the end of your paper.
Author Tags: An “author tag” is how you identify who said what you are quoting, summarizing or paraphrasing. To make your writing more interesting, you want to give the author name in different ways and this article has a chart for how to do that. It is also possible to add a comment about what the author is saying by using my “Other Words for Said” chart. For example, if you say “the author exaggerates…” you give a negative evaluation of what the author is saying which can help the reader to see the information from your point of view.
When you mention a source, you need to at least tell the name of the writer. Usually, it is also good to tell the title of what you are quoting from too. Additionally, you can strengthen your writing if you explain how that source is going to support your idea. You can do this by including the claim you are trying to support in the sentence, and also by explaining the authority of the person you are citing. Here are some examples:
Good: According to John Miller, “Many mentors feel lost when they encounter children with problems impossible for them to understand” (Miller 23).
Better (includes claim with quote): John Miller explains that it is important for mentors to be trained because, without training, “Many mentors feel lost when they encounter children with problems impossible for them to understand” (Miller 23).
Best (includes claim and reason for quoting this authority): Mentors need to be well trained to be effective. John Miller, international director of Big Brothers, Big Sisters, says that mentor training is essential because “Many mentors feel lost when they encounter children with problems impossible for them to understand” (Miller 23).
It is very important to give your reader clues that you are doing a long summary.
- You can do that by mentioning the author’s name at the start of the summary (first and last).
- Then as you continue, you can use author tags like “Jones says” or “she mentions” or “he explains” as you write (see author tags chart).
- You can also use the name of the book or article instead of the author to break up the monotony of your writing. Example:
John Miller, in his article, “How to Mentor,” suggests many ideas for effective mentoring programs. One idea he promotes is xxxxx. Miller also says xxxxx. A final suggestion in the article is that xxxxxx (Miller 34).
- Don’t Quote a Lot. When should you use it? When the author says something in a unique way which would lose impact if you paraphrased or summarized, or when the author is a unique authority on the subject and quoting them makes your argument stronger. In general, I wouldn’t use more than one quote per page or per about 250 words, or 3-4 times in the average Hub page or college essay.
- Short, Not Long Quotes. Most quotes should be only one or two lines of type. If it is longer than that, you should generally paraphrase or summarize.
- Use Quotation Marks Correctly! I have to include this one because so many of my college students do this incorrectly: quotes must be included INSIDE your own sentence and not as a sentence with quotation marks around it. Look this up if you aren’t sure.
Incorrect: “Training is important for mentors” (Miller 23).
Correct: According to John Miller, “Training is important for mentors” (Miller 23).
Paraphrasing is tricky because you don’t want to plagiarize the source by coming too close to it in your re-write. You must keep the original meaning but use different vocabulary and a different sentence structure. The best way to do a paraphrase is:
- Read Several Times: Read the passage carefully several times until you feel you understand what it is saying.
- Write Without Looking: Without looking at the passage, write your own version of it, using your own vocabulary and way of phrasing.
- Compare: Next, look back at the original and tweak your version to make sure it isn’t copying but does say the same thing.
- Style: Remember, the paraphrase should sound like your own writing, not the source you are quoting. The paraphrase should have the same tone and style as the rest of your paper.
- Use Turnitin Check: If your course uses Turnitin.com and your professor allows you to upload and look at your own papers, this is a wonderful way to see if your paper has too many words that are the same as your source.
No: if there are key words or special vocabulary for the subject, you can keep those in your paraphrase. Also, if there is one unique phrase you want to include, just put quotation marks around it.
Modern and medieval drama have more differences than similarities. Drama has evolved and expanded and is now much more complex than its medieval roots. The whole purpose of acting, drama, theater and entertainment has shifted significantly. Drama has become more advanced and versatile.
During the Middle Ages, plays were primarily religious in content. Passion plays, mystery plays, miracle plays and morality plays all depicted stories and themes from Christianity and, primarily, the Bible. Clergymen wrote plays, sometimes in Latin, with the intention that they be performed as part of religious instruction or religious celebrations. Humor crept into plays over time. Modern drama, by contrast, explores a diversity of themes, genres, cultures, experiences and issues.
Religious plays of medieval times had informative, realistic and melodramatic acting styles. Characters were stereotypically depicted in an informative storytelling fashion. Today, drama is primarily realistic in style but also symbolic, ritualistic and even abstract. Experimentation with style and presentation is standard in modern drama.
Actors in the Middle Ages were primarily male with the exception of some female actors permitted in France. Actors were poor and considered at the bottom end of society. Today, actresses fill countless roles and are some of the richest and most idolized members of society. Switching up gender and gender roles is part of the experimental process of modern theater and does affect the tension created in a dramatic piece.
Religious plays of medieval times were originally mounted in churches. As the plays' set designs expanded, the church buildings became too restrictive and the performers took their drama to the streets. Acting troupes formed and toured their plays in wagons. The influence of street performers, like traveling musicians and circus performers, became integrated into the religious plays. Today, some traveling acting troupes still exist, but most performances are either housed in theaters or captured on film and available on television and the Internet.
During the Middle Ages, passion, mystery, miracle and morality plays could hardly be called entertaining, because they began as vehicles to teach religion rather than amuse the masses. But as spectacle, humor and sensationalism became part of these religious plays, audiences responded with awe, laughter and approval. Today, drama has more subtle, intellectual and intricate forms. Technology still provides sensationalism, but sophistication has become part of dramatic entertainment.
The previous blog post focused on generalized number bases – an essential concept of electrical and computer engineering (ECE) that is often not covered well at the high school level. Another area where pre-college preparation often falls short of college ECE expectations is the relationship between frequency, period, and wavelength for electromagnetic waves. This is often covered briefly in high school physics and chemistry courses, but those courses typically use scientific notation instead of engineering units. As a result, students cannot mentally translate among engineering units of frequency, period, and wavelength at conversational speeds.
Relationship between frequency, period, and wavelength
As a reminder, this table summarizes the basic relationships between frequency, period, and wavelength of electromagnetic waves in a vacuum.
Frequency | Period (word / abbreviation) | Wavelength in vacuum
1 kHz     | 1 millisecond (ms)           | 300 km
1 MHz     | 1 microsecond (μs or us)     | 300 m
1 GHz     | 1 nanosecond (ns)            | 30 cm
Applications – RF, circuit boards, and design validation
Engineers discussing wireless signals may talk about a 2.4-GHz signal, and it is helpful to be able to think of the associated period as being a bit more than 400 picoseconds. Similarly, a 5-GHz signal has a 200-picosecond period. Given the increasing diversity of radio frequency (RF) applications, especially those associated with the growing Internet of Things (IoT), an intuitive understanding of frequencies and their various physical properties is helpful to ECE students.
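These conversions are easy to sketch in a few lines of Python (the function name here is illustrative, not from the post):

```python
def period_ps(freq_hz: float) -> float:
    """Return the period of a signal in picoseconds (T = 1 / f)."""
    return 1.0 / freq_hz * 1e12

# A 2.4 GHz Wi-Fi carrier: a bit more than 400 ps per cycle.
print(round(period_ps(2.4e9), 1))   # 416.7
# A 5 GHz carrier: exactly 200 ps per cycle.
print(round(period_ps(5e9), 1))     # 200.0
```

The point of working in picoseconds rather than scientific notation is exactly the mental fluency the post describes: "2.4 GHz" should immediately suggest "a bit over 400 ps".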
This concept is also critically important in designing circuit boards and interconnects. Electromagnetic waves propagate at approximately 3×10^8 meters per second (the “speed of light”), so the wavelength of a 1-GHz signal in a vacuum is 30 cm (about one foot). Depending on the physical material, the propagation speed of a signal in a circuit board or cable may be significantly slower, so seemingly minor changes in circuit layouts can cause significant propagation delays and timing challenges in high-frequency circuits.
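The wavelength arithmetic, including the slower propagation in board materials, can be sketched as follows. The velocity factor of 0.5 is an assumed, typical figure for a signal in FR-4 board material, not a value from the post:

```python
C = 3.0e8  # approximate speed of light in a vacuum, m/s

def wavelength_m(freq_hz: float, velocity_factor: float = 1.0) -> float:
    """Wavelength = propagation speed / frequency.
    velocity_factor < 1 models slower propagation in a board or cable."""
    return C * velocity_factor / freq_hz

print(wavelength_m(1e9))        # 0.3 m in a vacuum (about one foot)
print(wavelength_m(1e9, 0.5))   # 0.15 m with an assumed velocity factor of 0.5
```

Halving the propagation speed halves the wavelength, which is why a trace length that is negligible at audio frequencies can matter a great deal at gigahertz frequencies.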
Furthermore, understanding this concept is helpful in design validation. For example, suppose you need to measure a signal’s edge with a rise time of 15 nanoseconds. How fast must you sample that waveform to accurately catch that edge, including overshoot and any ringing that may occur? Facility with mental math will help you make correct test choices quickly.
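One common rule of thumb, not stated in the post, estimates the bandwidth of an edge as roughly 0.35 divided by its rise time, then samples several times faster than that bandwidth. A hedged sketch, with the oversampling factor of 5 as an assumption:

```python
def required_bandwidth_hz(rise_time_s: float) -> float:
    """Rule-of-thumb bandwidth for a first-order edge: BW ~= 0.35 / t_r."""
    return 0.35 / rise_time_s

def suggested_sample_rate_hz(rise_time_s: float, oversample: float = 5.0) -> float:
    """Sample several times faster than the edge bandwidth to catch ringing."""
    return oversample * required_bandwidth_hz(rise_time_s)

bw = required_bandwidth_hz(15e-9)      # ~23.3 MHz for a 15 ns edge
fs = suggested_sample_rate_hz(15e-9)   # ~117 MS/s with 5x oversampling
print(f"{bw / 1e6:.1f} MHz, {fs / 1e6:.0f} MS/s")
```

Doing this arithmetic mentally, 0.35 over 15 ns is "a bit over 20 MHz", is exactly the kind of conversational-speed fluency the post advocates.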
The fundamental arithmetic of electromagnetic waves occurs with great frequency in ECE coursework and in professional practice after college. The high school student who becomes adept with the various relationships in engineering units will start ECE studies with a significant advantage. |
The word umbrella derives from the Latin ‘umbra’, meaning the shade cast by an opaque object. In 1609 the English poet John Donne mentions an ‘ombrello’ in his letters. Under the influence of ‘umbra’, the diminutive ‘umbella’ was altered to ‘umbrella’: the word we recognise immediately today.
The umbrella began to be used in England around c.1700 as a shelter from the elements. However, umbrella-type sun shades had been used around the world in various forms for centuries. The Ancient Egyptians used a version of an umbrella, as did the ancient civilisations of China, Greece, Rome and the Aztecs. In Africa and the Orient an umbrella was seen as a symbol of dignity, which is something the Umbrella Workshop agree with!
The first Englishman thought to have carried a rain umbrella, around 1760, was Jonas Hanway, a noted traveller and philanthropist, and obviously a man of taste to be carrying an umbrella!
At The Umbrella Workshop we are proud of our Umbrellas and delighted to be creating a product with such an interesting heritage.
Umbrellas in the 18th Century
For a detailed account of the usage and documentation of umbrellas in 18th Century England, please see this great resource, from the Jane Austen Centre, just down the road from our Office. |
Comment for Sidereal Times, November 2012
by Freeman Dyson, Institute for Advanced Study, Princeton, New Jersey
Fifty years ago, when AAAP began, the piece of the universe that we had explored was tiny, just a little blob with us in the middle. We could see a lot of stars and galaxies, but the volume of space that we could see was only about a tenth of one percent of the universe. With our biggest telescopes we could measure distances of galaxies about a tenth of the way to the edge, if the universe had an edge. We did not imagine that within our lifetimes we would be able to see out all the way to the edge. Now, fifty years later, everything has changed. The whole shebang is pretty well explored. In any direction, if we look for faint objects, we can see almost all the way back to the beginning of time. Some huge gaps remain, but the entire universe is now within our field of view.
How has this change happened? It did not happen because we are smarter now than we were fifty years ago. It happened because we have better tools. The most important new tools were radio telescopes. Fifty years ago, we already had radio telescopes, and they had discovered large numbers of sources of radio waves in the sky, but only a few of them could be identified with visible objects. It seemed that the radio telescopes and the optical telescopes were looking at different universes. There was a curious lack of connection between the two universes. Amateur astronomers could only look at visible objects. They had not much reason to be interested in these radio sources that nobody understood. Nobody could tell how far away a radio source was if it did not have an optical identification. There was only one fact that suggested that radio sources might be at huge distances. If they were objects randomly distributed in space and time, we ought to have seen more faint sources. There were too few faint sources to be a random distribution. The radio astronomers explained the lack of faint sources by saying that they had hit the edge of the universe. If they were seeing sources all the way out to the edge of the universe, then the lack of faint sources made sense. The optical astronomers mostly did not buy this argument. The optical astronomers thought it was a weak argument for making such a big claim. If the sources were really out near the edge of the universe, they would have to be absurdly powerful.
The first big breakthrough happened in 1963, one year after AAAP began. Maarten Schmidt, an astronomer working at the Palomar observatory, photographed the spectrum of an optical object that coincided with a bright radio source called 3C273, and found emission lines of hydrogen with a red-shift of 0.16. The optical object is magnitude 13, bright enough to be seen by amateurs with a six-inch telescope, and the red-shift says that it is two billion light-years away. That means that the object really is absurdly powerful. Both in visible light and in radio waves, it is putting out about a hundred times the power of a big galaxy. So the radio astronomers were right. The radio sources are absurdly powerful, and a lot of them are close to the edge of the universe. This discovery had two major consequences. Optical and radio astronomers started to work together, finding optical identifications for radio sources and measuring their red-shifts, which soon confirmed that optical sources can be seen as far away as radio sources. Also, the only plausible theory to explain a super-powerful source was a super-massive black hole at the center of a galaxy, sucking in gas which radiates away prodigious amounts of energy as it falls into the hole. Black holes quickly jumped from being esoteric theoretical toys to being big players in the evolution of the universe.
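Schmidt's distance estimate can be reproduced with the low-red-shift form of Hubble's law, d ≈ cz/H₀. The sketch below assumes H₀ = 70 km/s/Mpc and the modern measured red-shift z ≈ 0.16 for 3C273; neither value comes from this essay:

```python
C_KM_S = 3.0e5          # speed of light, km/s
H0 = 70.0               # Hubble constant, km/s per Mpc (assumed value)
MPC_TO_GLY = 3.262e-3   # 1 Mpc is about 0.003262 billion light-years

def distance_gly(z: float) -> float:
    """Approximate low-z Hubble-law distance in billions of light-years."""
    d_mpc = C_KM_S * z / H0
    return d_mpc * MPC_TO_GLY

print(round(distance_gly(0.16), 1))  # roughly two billion light-years
```

The low-z approximation is crude for z near 0.2, but it shows why a red-shift of that size immediately implied a distance of billions of light-years.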
The second big breakthrough was the discovery of the cosmic microwave background radiation by Arno Penzias and Robert Wilson in 1965. It was another jump outward, with radio astronomers reaching further than optical astronomers could see. The background radiation gives us a picture of the universe about half a million years after the big bang, when the cooling matter first became transparent to its own radiation. We were lucky to have David Wilkinson, one of the world’s leading radio astronomers, here in Princeton. He organized the design and construction of two space missions, COBE, short for Cosmic Background Explorer, and MAP, short for Microwave Anisotropy Probe, to observe the fine details of the microwave radiation. To our great sorrow, David died soon after MAP was launched, and MAP became WMAP, short for Wilkinson Microwave Anisotropy Probe. The local variations in brightness observed by WMAP give us a direct view of the early stages of the evolution of everything in the universe.
The third big breakthrough happened in 1967, when Jocelyn Bell, a graduate student doing radio observations in England, discovered pulsars, the pulsating radio sources which turned out to be rapidly rotating neutron stars. Once more, radio and optical astronomers talking to each other could understand things much better than either could separately. One of the major mysteries in the old days was the Crab Nebula, an object that is probably familiar to most of the members of AAAP. It is number one on Messier’s famous list of fuzzy objects in the sky. The Crab nebula was known to be a supernova remnant, consisting of debris thrown out by a star that exploded in the year 1054. The supernova was seen by astronomers in Korea and China but not in Europe. The mysterious thing about the Crab Nebula was that it was too bright to be a passively expanding gas cloud. It must have an active source of energy causing it to radiate brightly. As soon as pulsars were discovered and identified as spinning neutron stars, it was obvious that the energy source keeping the Crab Nebula bright might be a pulsar. The pulsar would be unusually vigorous since it was only a thousand years old. As soon as people looked for it they found it, both as a radio source and as an optical source, spinning thirty times a second.
I was lucky then to be a friend of David Wilkinson. Just for fun, David invited me to spend a night observing with him at the Princeton campus observatory on Fitzrandolph Road, looking at the Crab pulsar. David made a shutter spinning thirty times a second and put it on the eyepiece of the one-meter telescope. We could see the pulsar appearing and disappearing as he varied the phase of the shutter. The pulses were so strong that we did not need to worry about all the background light from the town and the football stadium. We could make a quite accurate light-curve showing the shape of the double pulse with a period of thirty milliseconds. This spectacular object could have been discovered fifty years earlier if anyone had had the crazy idea of putting a spinning shutter onto a telescope. But until Jocelyn Bell found the pulsars, no astronomer in his right mind could imagine a star spinning thirty times a second.
Besides radio telescopes, we have many other wonderful new tools since AAAP began. One of the most important new tools is the digital camera, which collects light far more efficiently and measures it more accurately than the old-fashioned photographic plates. Megapixel cameras have now become standard equipment for amateurs as well as professionals. Here in Princeton, Jim Gunn designed and built the top-of-the-line digital camera that was used to do the Sloan Digital Sky Survey. The Sloan Survey was a project to photograph the entire Northern hemisphere sky with high resolution in four colors, and put the images into digital memory. It was a combined effort of a number of universities including Princeton. The Sloan Survey output is available to anyone with enough computer bandwidth to use it. If you have access to the output, you can study the sky in daytime or on cloudy nights, without the trouble and expense of traveling to a big telescope. The Sloan Survey is still going on. The original sweep of the northern sky took four years, and after that the camera came back to look more deeply at areas of sky that are particularly interesting for various reasons. The output is a huge gold-mine of astronomical information waiting to be excavated. With accurate four-color measurements of brightness, you can tell whether a point-like object is a nearby star or a distant galaxy, and you can measure its distance. You can do a rapid computer search and pick out large numbers of distant objects close to the edge of the universe. You have an unbiased view of everything that shines in the sky, from near-earth asteroids to remote clusters of galaxies.
Another set of great new tools are the telescopes in space. The most famous is the Hubble Space Telescope, which is still up there after 22 years, making important discoveries about once a week. Two Princeton astronomers, Lyman Spitzer at the University and John Bahcall at the Institute for Advanced Study, were the most effective promoters of the Hubble telescope and persuaded the politicians in Washington to pay for it. As soon as it was launched and operating, John Bahcall used it to observe the bright objects that were believed to be super-massive black holes and to prove that they are really at the center of galaxies. The central objects are much brighter than the galaxies, so he needed the superior resolution of Hubble to see the galaxies. Hubble has ten times better resolution than any telescope on the ground, and as a result it can see objects that are about a hundred times fainter. Because Hubble can see ultra-faint objects, it was given the job of taking long exposure pictures of a small patch of sky known as the Hubble Deep Field. The Deep Field pictures give us our clearest view of the universe as it was in the remote past near to the beginning.
Besides Hubble, there are many other telescopes in space that are not so famous or so expensive but equally successful. The most recent is Kepler, which went up three years ago and discovered huge numbers of planets orbiting around other stars. This small telescope totally transformed our view of extra-solar planets. We had imagined that extra-solar planetary systems would be like our own Solar System, but they turn out to be quite different. Other space telescopes have looked at the sky in wave-lengths invisible from the ground, Chandra looking at X-rays, Spitzer looking at infra-red radiation, IUE, short for International Ultraviolet Explorer, looking at ultra-violet. Each of them found new kinds of objects and unexpected behavior of old objects. Together, they gave us a far more complete picture of the complicated ways in which the universe evolves.
One of my most vivid memories is a visit to the Goddard Space Flight Center in Maryland, the day after Hubble was launched. There were two buildings side by side, one containing the command center for Hubble, the other containing the command center for IUE. In the command center for IUE, there were only two people, both of them graduate students, calmly controlling the telescope and observing one ultra-violet object after another without any waste of time. The telescope was in geosynchronous orbit over the Atlantic, so it was working twenty-four hours a day. Observers at Goddard were taking turns with observers in Europe to use it. It was easy to use and produced a steady output of good science. In the command center for Hubble there were three hundred people in a state of total confusion. Nobody knew what had happened to the telescope. It was in a low orbit, spending only a few minutes within range of Goddard each time it passed by. After communications had broken down, it was difficult to regain contact. Three hundred people were all talking at once and nobody seemed to be in charge. After I left, the muddle was gradually sorted out and Hubble started to produce good science too. But the low orbit is still a big problem for anyone using Hubble. The earth is constantly interrupting the observations, and the telescope is actually observing less than a third of the time. A big bureaucratic organization is needed to schedule the operations. In the end, both Hubble and IUE did marvelously well. The low orbit of Hubble made it possible for astronauts in the Shuttle to go up to repair and replace instruments. Pictures from Hubble gave the public spectacular views of the universe. But if you measure cost-effectiveness by the output of scientific papers per dollar of input, then IUE comes out far ahead.
The most recent revolution in astronomy is the discovery that only three percent of all the mass in the universe is visible. All the stuff that we see, stars and planets and gas-clouds and dust-clouds, is only three percent. The remaining 97 percent is invisible. It consists of two separate components, dark matter which is about 27 percent and dark energy which is about 70 percent of the mass. Dark matter was first discovered by Fritz Zwicky in the 1930s, when he did the first sky survey with his little 18-inch telescope on Palomar Mountain. In those days the astronomers did not take Zwicky seriously because he was a physicist and not a member of their club. Now we take dark matter seriously because we can measure its gravitational effects accurately, and we find it to be distributed through the universe in roughly the same way as the visible galaxies. The dark energy was discovered more recently by measuring accurately the rate of expansion of the universe at various times in its history. Quite unexpectedly, the rate of expansion was found to be accelerating with time. The measurement is done by observing large numbers of supernovas exploding at various times, going back billions of years into the past. Zwicky was also the first person to observe supernovas systematically, but in the 1930s he could not observe enough of them to see the acceleration. Our knowledge of the invisible universe comes from our modern tools, big sky surveys and big computers. These tools collect vast amounts of accurate information and process it rapidly, picking out the evidence for gravitational effects of invisible mass from the behavior of the stuff that we can see.
Fifty years after AAAP began, our new tools have given us a new view of the universe. The new universe is full of violent events such as gamma-ray bursts and supernova explosions. It is full of invisible stuff which we do not understand. It is as full of mysteries as it was fifty years ago. But the new mysteries are not the same as the old mysteries. The old mysteries were mostly solved, and the new mysteries mostly discovered, as a result of new ways of observing. In the future, we can be confident that new tools will continue to solve old mysteries and discover new ones. Luckily for AAAP, the new tools are narrowing the gap between amateur and professional astronomers. Amateurs will play an even more important role in the future than they have in the past. Serious amateurs now have wide-field electronic cameras and computers that can produce data of professional quality. They also have one resource that the professionals lack, plenty of observing time. |
Caroline Fraser’s book Rewilding the World is a call to retrofit more than a century of nature conservation in the United States and around the world. Why, at this late date, is it so important that we redesign the global conservation system? Conservationists are rightly proud of their collective accomplishment in bringing some 12 percent of the earth’s land under protection so that future generations may know and enjoy nature.
Why should this success now be in question?
The answer lies in the fact that our zeal for conserving nature far outran the science of how to do it. The modern conservation movement dates to the founding of the World Wildlife Fund in Europe and the Nature Conservancy in the United States after World War II.
Both organizations hired scientists to advise them, but the scientists found themselves having to invent programs and priorities out of thin air. Conservation did not have a solid scientific basis until a conference in San Diego in September 1977. Before that, scattered articles presented results of studies that could, by inference or extension, suggest conservation strategies, but as often happens in science, controversy erupted over the interpretation of the results, and when scientists disagree among themselves, everyone else stops listening. The San Diego conference was brilliantly conceived to bring the scientific community together in a consensus that would move the field forward.
Conservation biology had failed to develop earlier because it confronted a methodological impasse: the difficulty of studying the process of extinction of species. Indeed, one definition of conservation biology is that it is the science of why extinctions occur and how to prevent them.
Until the 1970s, nearly all scientists who studied extinction were paleontologists who studied fossils. Extinctions are abundantly registered in the fossil record, but the great majority of ancient organisms appeared and disappeared without apparent cause. Exceptions occurred in
rare global mass extinctions, of which there have been only five since the origin of multicellular life, one of which was the meteorite impact that ended the age of dinosaurs.
Between mass extinctions, which have occurred at intervals of roughly 100 million years, there were countless extinctions of individual species taking place in the “background.” But the rate was so slow, approximately one in a million species per year, that it didn’t appear relevant to a
world in which native habitats were disappearing at an alarming rate and countless animals and plants were being exploited for commercial purposes. If humans were going to “manage” nature so as to prevent extinctions, an entirely new branch of science was needed to address the problem.
The need to retrofit the current conservation system arises out of science that developed during the latter half of the twentieth century. Fraser provides an introduction to this science as the rationale behind her call for “rewilding the world.” What, exactly, she means by “rewilding”
will emerge from the following condensed account of the relevant science.
Even today, when extinctions are occurring with unprecedented frequency, they are extremely difficult to verify. The ivory-billed woodpecker is a prime example. The last substantiated photographs and sound recordings were made in the late 1940s, yet rumors about the bird’s existence and fervent claims of sightings continue to emerge from the Southeast. Is the ivory-billed woodpecker extinct? No one can say. Hence the extraordinary difficulty of studying extinction as a process.
As often happens in science, a solution to this impasse came from unexpected quarters. Two brilliant young biologists, Robert MacArthur, then of the University of Pennsylvania, and Edward O. Wilson of Harvard, published The Theory of Island Biogeography in 1967. The theory is based on a relatively simple finding: the number of species of birds, lizards, or other animals that occupy an island can be quite accurately predicted if one knows nothing more about the island than its area and its distance from a source of colonizing species. Nearby islands are more frequently colonized by new species than distant islands, and large islands support more species, on average, than smaller islands.
MacArthur and Wilson proposed that the number of species of a given type (birds, lizards, etc.) that occupies an island is maintained by a dynamic equilibrium between colonization of the island by new species and extinction of those already present. Colonization and extinction are thus represented as normal, ongoing processes that interact to regulate the number of species present at any time. Over long periods (decades, centuries, or millennia, depending on the size of the island), an island’s complement of species was posited to change while the total number
of species remained more or less constant.
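MacArthur and Wilson’s prediction has a well-known quantitative cousin, the species-area relationship S = cA^z. As a rough sketch of how area alone drives the predicted species count (the constants c and z below are illustrative placeholders, not values fitted to any real survey):

```python
# Species-area power law, S = c * A^z, commonly used alongside the
# MacArthur-Wilson equilibrium model. c and z here are placeholders.

def predicted_species(area_km2, c=10.0, z=0.25):
    """Predicted species richness for an island of the given area."""
    return c * area_km2 ** z

# Larger islands are predicted to support more species, on average.
print(predicted_species(16))    # 20.0
print(predicted_species(1000))  # roughly 56
```

The exponent z controls how steeply richness falls as a habitat fragment shrinks, which is why a contraction in a park’s effective area is expected to trigger local extinctions.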
Soon after publication of this theory, biologists began to realize that it could be applied to conservation if one assumed that shrinking remnants of natural habitat, or parks surrounded by agricultural lands, were analogous to islands. There followed a rush to search for historical records listing which species had been found on a given island or in a particular suburban park decades or even a century earlier. Such efforts quickly yielded support for the theory by showing that, indeed, the species complements of some small islands and habitat fragments had changed over the time elapsed between surveys, whereas the faunas of large islands and habitat fragments appeared to remain constant.
Still, there were skeptics who maintained that appearances and disappearances of species from tiny islands and habitat patches were trivial because of the small numbers of individuals involved, and who asserted that there was no evidence that species went extinct in insular areas
large enough to be of relevance for conservation. So long as these dissenting voices were contesting the validity of the MacArthur- Wilson theory, conservation planners and managers remained gun-shy and conservation practice continued as the opportunistic process it had always been.
Any lingering doubts about whether island biogeography theory was relevant to conservation were dispelled in 1987 by a paper published in the prestigious British journal Nature, a study that Fraser calls “a bombshell, the kind of logical observation that seems obvious only in
retrospect.” Written by William Newmark while he was a graduate student at the University of Michigan, it presented startling evidence that numerous mammal species had disappeared from national parks in the western US and Canada. Newmark benefited from research conducted
decades earlier by the US Biological Survey, a precursor of today’s Fish and Wildlife Service.
The survey had systematically documented the mammal species present in each national park at the time it was established. Most of the parks Newmark studied dated to the early decades of the twentieth century and were between sixty and ninety-five years old at the time of his
research. Using a variety of approaches, Newmark compiled contemporary data on the mammals occupying each park and compared the lists to those assembled earlier by the Biological Survey.
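At its core, a comparison like Newmark’s reduces to set differences between the historical and contemporary species lists. A minimal sketch with hypothetical entries (not Newmark’s actual data):

```python
# Species recorded by the Biological Survey at a park's establishment
# versus species documented today. Names here are hypothetical examples.

historical = {"gray wolf", "red fox", "white-tailed jackrabbit", "ermine"}
contemporary = {"red fox", "ermine", "coyote"}

locally_extinct = historical - contemporary  # recorded then, absent now
newly_recorded = contemporary - historical   # absent then, present now

print(sorted(locally_extinct))  # ['gray wolf', 'white-tailed jackrabbit']
print(sorted(newly_recorded))   # ['coyote']
```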
The contrasts between the two sets of lists stunned the entire conservation world. Bryce Canyon, Lassen Volcanic, and Zion, the three smallest parks, had each lost more than a third of the mammals larger than rats and chipmunks known to be present at the parks’ establishment.
As expected from island biogeography theory, small parks had lost many species whereas large parks had lost few. The only parks in the study that retained all their species were Banff, Jasper, Yoho, and Kootenay, a back-to-back cluster of parks in the Canadian Rockies comprising an
area roughly the size of Massachusetts. All US parks in the study lost species although the largest, Yellowstone, lost only one, the gray wolf, and that was owing to a campaign of extermination initiated by the US government. Still, populations of all the species Newmark documented as disappearing from US parks survive today in other locations.
Mount Rainier, Rocky Mountain, Yosemite, Grand Canyon—these are among the crown jewels of conservation in the United States and they were demonstrably failing to retain their biodiversity. Only a few years later, Newmark and another American biologist, Justin Brashares, produced similarly disturbing data for national parks in East and West Africa, respectively. It seemed as if parks could not save biodiversity, and ideological opponents of parks seized upon the results to make the claim that parks didn’t work, so why have them? The conclusion was a simplistic overreaction by people who did not comprehend the underlying science. Island biogeography predicted that a contraction in area, such as that experienced by a park after its surroundings have been converted to human uses, would lead to local extinctions, but it could not predict which species would disappear or why. Again, conservation science desperately needed a new theory, and fortunately, intimations of one soon appeared in the form of an ecological phenomenon known as “mesopredator release.”
As Fraser notes, the landmark paper that established mesopredator release as a driver of local extinctions was written by Michael Soulé, a professor at the University of California–Santa Cruz, and his graduate student Kevin Crooks. The two of them studied bird communities in
twenty-eight canyons in San Diego County and found that some of the canyons resounded with the songs of a full complement of native birds, whereas others supported barely more than house sparrows, starlings, and park pigeons. The important difference between the two sets of
canyons turned out to be the existence of habitat corridors that allowed coyotes to enter the canyons that supported vibrant native bird communities. In contrast, the ornithologically dead canyons were embedded in an urban/suburban landscape that offered no access to coyotes
living outside the city.
Coyotes and birds? What was the connection? Coyotes don’t eat songbirds; they have nothing directly to do with them. The connection was cats, both domestic and feral. Coyotes can and do eat cats, and where coyotes are present, they impose a reign of terror on neighborhood cats,
deterring them from entering canyons and hunting birds. So, indirectly, coyotes are good for birds. Where there are no coyotes, cats have nothing to fear and dedicate themselves to hunting, with strongly negative consequences for the bird community.
Cats are considered a “mesopredator,” one of a group of medium-sized predators that are normally held to low abundance by top predators, in this case coyotes. Other mesopredators are raccoons, foxes, and opossums. Where top predators like wolves, coyotes, and mountain lions
are persecuted, their prey tend to increase, and can become up to ten times more abundant than where the top predators are present. Fox, feral cat, and raccoon populations have exploded all over the United States following eradication of top predators in the nineteenth (in the East) or
early twentieth century (in the West) and, as Fraser aptly puts it, “ran wild in an orgy of predation.” Another consequence of predator elimination is “herbivore release,” familiar to many dwellers of suburbia as an overabundance of deer, beavers, and woodchucks.
Population explosions of mesopredators and herbivores have dire consequences for biodiversity. Mesopredators not only prey upon songbirds, but seek myriad other small prey as well, including frogs, lizards, snakes, salamanders, and small mammals. Many once-common species, Fraser observes, are now scarce or locally absent as a consequence. Overabundant herbivores have equally dire effects on vegetation, completely suppressing forest regeneration in the worst cases and eliminating many wildflowers.
These chain reactions, called a “trophic cascade,” can have major economic implications beyond loss of biodiversity. Oak forests, once dominant throughout most of the eastern United States, are gradually being replaced by red maple, tulip poplar, and other low-value species through a
combination of two factors: the suppression of fires, which oaks survive more easily than other species, and deer browsing on acorns and oak seedlings. As these degenerative processes run their course, the landscape and its biotic communities are being transformed into what
ecologists term an “alternative state,” a topsy-turvy ecosystem in which formerly abundant species (oaks) become rare and formerly uncommon species (maples) abound. The diversity of such an alternative ecosystem is likely to be far less than that of the original.
Wholesale transformation of familiar ecosystems into alternative states is an unsettling prospect. The good news is that conservation science can prescribe a remedy, even if it is one some will deplore. The remedy is to restore the function of predators to natural communities by
reintroducing them. Interior Secretary Bruce Babbitt knew this when he released wolves into Yellowstone National Park in 1995 after they had been absent for seventy-five years. In the subsequent fifteen years, wolves have transformed the Yellowstone ecosystem. Thickets along
rivers and lakes that had been browsed out of existence by elk and moose have recovered dramatically, providing habitat for beavers. Beavers have obligingly recolonized after a long absence. By building dams, beavers create wetlands that provide habitat for fish, frogs, and
nesting waterfowl. Revegetated stream banks reduce erosion and lower water temperature, favoring trout over less desired species, and invite the return of formerly common birds. The Yellowstone ecosystem is on the way to recovery. This is what Caroline Fraser means by
“rewilding”—the restoration of natural function to ecosystems, thereby stabilizing their biodiversity.
Rewilding is the current conservation fad, but is it the real thing? So many conservation fads have come and gone in the past; why isn’t rewilding just another? My own assessment is that this time, the situation is different. Rewilding didn’t just emerge as a fund-raising gimmick; it is
the application of a now mature science of conservation biology, theoretically grounded in the concepts of island biogeography, trophic cascades, and alternative states.
Rewilding seeks to stabilize native ecological communities by encouraging and restoring the processes that, in unperturbed nature, prevent mesopredator release, herbivore (deer) overabundance, and transitions to alternative states. Foremost among these natural processes is
predation, a biological process, but physical processes such as fire and flooding are also important. For more than a century our government has aggressively implemented predator control programs over much of the West to favor livestock growers and, to support other constituencies, has systematically suppressed wildfires and dammed rivers. Mesopredator release, deer overabundance, dangerous accumulations of fuel for fires in western forests, and blocked migrations of salmon, sturgeon, and other fish are the direct consequences of these policies. In other words, we have deliberately, if unwittingly, created the problems; rewilding is a prescription for fixing them.
The emotive ring of the word “rewilding” conceals the fact that it is much more than simply a hazy ideal; lying behind it are the proudest achievements of conservation science. Rewilding is shorthand for the three C’s: carnivores, cores, and connectivity, practical applications that
follow directly from established theory. Island biogeography showed that species disappear from small preserves, affirming the need for large, strictly protected core areas, such as national parks. But island biogeography offers no hint about why species disappear so quickly from them.
Research on mesopredator release and herbivore overabundance has revealed that powerful drivers of extinction are unleashed by the elimination of top predators. Thus the way to restore full ecological function is by establishing core protected areas that can sustain breeding
populations of top carnivores like wolves and mountain lions and then connecting them via habitat corridors that permit movements of individuals between cores. Connectivity across the landscape is crucial because small populations of wolves, bears, or other species trapped in
isolated core areas are vulnerable to inbreeding and accidental extinction, as Newmark so powerfully demonstrated. Maintaining connections between protected areas, in other words, expands effective population size and allows the interconnected whole to resist forces of extinction.
At the end of the first decade of the twenty-first century we find ourselves with an (albeit magnificent) set of national parks established long before anyone ever imagined such a thing as conservation science. Consequently, in light of what we now understand about conserving nature, our parks are too few, too small, and not in the best places; hence the need for retrofitting. Such a belated realization was succinctly articulated by Álvaro Ugalde, former director of national parks of Costa Rica:
Out of ignorance, we created a park that is too small. At the time, we thought it was gigantic. What the years have proven, though, is that Corcovado [a Costa Rican national park] is very small,…especially when we talk about critical species such as the jaguar, peccary, and harpy eagle.
This sums up the global picture.
Caroline Fraser, a host of scientists, and a swelling throng of conservationists in all corners of the globe are calling for rewilding as the best hope for restoring something resembling primordial nature on this overstressed planet. The science they rely on is sound and now thoroughly
vetted, though not yet entirely free of controversy, for old ideas always die hard. Some conservation organizations and government planning agencies have already accepted the idea of rewilding and are running with it. Rarely has new science found such quick and enthusiastic acceptance.
Fraser dedicates much of her book to telling the varied stories of how the three C’s are being applied around the world, in the Americas, Europe, Africa, Asia, and Australia. Each story explores a different setting, distinct in its economic, cultural, and biological features. The goals
range from modest (making a greenbelt out of the former death zone that marked the iron curtain) to grandiose (a four-nation transboundary megareserve in southern Africa), but the purpose is always the same, that of restoring connectivity to a fractured landscape.
The movement to rewild the earth benefits from the virtue of common sense—we should restore the processes that sustained nature before humans disrupted them. It is big-picture conservation. It offers a vision of a more nature-friendly world, with something for everyone, a
clean and spacious environment available for hunters, fishers, campers, hikers, and adventurers, encompassing private lands and managed commercial activity in addition to strictly protected core areas. But how do we achieve it? This is the big question that has yet to be resolved, and it is the question that Fraser addresses in nearly every chapter.
Science can prescribe the three C’s, but science can’t save nature; only people can save nature. People represent both the hope and the challenge, for the goal of rewilding does not sit well with everyone. The first attempt at reintroducing the Mexican wolf to the Southwest is a case in
point. Following the success of gray wolf reintroduction into Yellowstone, in 1998 Secretary Babbitt opened gates that released eleven Mexican wolves into Catron County, New Mexico.
Shortly afterward, Fraser writes, the alpha female was shot “along with her mate and several pups”; rumors swirled that local ranchers were offering a bounty of $10,000 for a dead wolf. Within months the program had failed; all the wolves had been killed or removed by federal authorities.
In contrast, in my state, North Carolina, red wolves were reintroduced in 1987 with little incident and have thrived, spreading from the release site into several nearby counties. Local communities now proudly advertise the animal’s presence with road signs cautioning “red wolf
crossing.” The difference between the two situations is that North Carolina suffers from an overabundance of deer and lacks an outdoor livestock industry in the part of the state where red wolves now roam.
Of the three C’s, carnivores top the list, for without them all else will fail. And as the example of Catron County, New Mexico, demonstrates, carnivore reintroduction is fraught with challenges. Wolf reintroduction is hotly opposed in the West, in part because wolves have been absent for a
hundred years and ranchers have not had to accommodate to them. Wolves are much less controversial in Canada, where they were never systematically eliminated. Tolerance is the key to successful carnivore reestablishment, but tolerance of big, dangerous animals is in short
supply in the US. Contrast, for example, our attitudes with those prevalent in India, where tigers kill many citizens every year yet benefit from widespread public support.
Let me be clear about one point. No one, not even the most ardent proponent of rewilding, is proposing to restore wolves and grizzly bears to suburban backyards. In part, it is the very impossibility of doing this that creates the need for rewilding. Rewilding is necessary to save
nature in all its beauty and diversity, but where wild nature includes animals like elephants, lions, tigers, or grizzly bears, conflicts between them and local populations are almost inevitable. Where big, dangerous animals are involved, rewilding can best be accomplished in expansive landscapes supporting low human densities. The romantic notion that people and nature can coexist in harmony is fanciful and came out of Europe after large, threatening animals had long been eliminated. The reality is that most people simply do not tolerate large carnivores in the vicinity of children or domestic animals. Segregation via remoteness or elephant-proof fences, as in South Africa, offers the best practical solution.
Fraser is keenly aware of this and plows straight furrows through the ideological minefields of conservation politics, applying a clear-eyed empiricism to allaying the fears of ranchers and tempering the idealism of politically correct big-city conservationists. She keeps her eye on the
larger goal while dispassionately analyzing the human dimension of each situation. The basic goals of rewilding are universal—the construction of ecological corridors and the piecing together of megareserves linking cores, corridors, and buffer zones. How to achieve these goals in an overcrowded, resource-hungry world is the central question of the book.
“We are realizing,” Fraser writes, “that conservation is not about managing wildlife as much as it is about managing ourselves—our appetites, expectations, fears, our fundamental avariciousness. If we do not succeed at that, other forces assuredly will.”
Conservation is indeed not so much the management of nature, but the management of people.
And wherever one goes, people have distinct traditions, outlooks, and economies. Every project requires deep insights into the psychology, aspirations, and circumstances of the local residents.
Huge amounts of money—billions—have been wasted because international donors did not take such nuances into account—just as our military made serious miscalculations in Vietnam, Iraq, and Afghanistan by failing to comprehend the historical and cultural setting of its engagements.
Fraser examines the tensions between conservation and development, between people and wildlife, and between top-down and bottom-up approaches to conservation. She describes a bottom-up (or “grassroots”) process that is working well in southwestern Australia because
farmers, ranchers, and aboriginals, even when tensions exist among them, all see positive benefits in rewilding. Aboriginals have seen the native vegetation and the game it supports disappear, whereas farmers and ranchers, through clearing the bush on huge scales, have created a dust bowl that threatens their livelihoods. Unfortunately, such happy convergences of interests are more often the exception than the rule.
A top-down, government-mandated approach can work in wilderness zones where there are no people to be affected by imposed land-use restrictions—indeed, there can be no other approach. But promoting conservation in regions that already support a human population requires close attention to the needs and aspirations of the people. In many parts of the world, but especially in Africa and India, wild animals are viewed as competitors of livestock and threats to agriculture and human safety. People do not want to share their habitat with wildlife unless
there are ways of mitigating the effects of doing so. How to provide mitigations in a way that promotes both conservation and economic progress is a theme that Fraser pursues with sensitivity and realism.
Obviously, top-down and bottom-up approaches are not alternatives; they are complements. The forging of national or international commitments necessarily involves top-down processes, whereas land tenure issues, usufruct rights, and tourism development must be addressed at the
local level. Neither approach, in itself, is likely to be sufficient. An appropriate balance between the two is optimal, but achieving such a balance requires a sophisticated knowledge of the politics at both levels, something that is rarely achieved in short-term projects financed by international donors.
Many will dismiss the notion of rewilding as romantic fantasizing, and indeed it may so prove. But before dismissing the idea, one should consider the alternative—an entirely utilitarian world, scientifically managed perhaps, but lacking in the aesthetic rewards nature provides. The beauty and splendid solitude of wild nature will become memories recalled in schoolbooks, no more relevant to the here and now than stories of dinosaurs and woolly mammoths. If this is the world you want to pass on to your offspring, then go ahead and ignore Caroline Fraser’s book. But how could a world without nature, however many its material comforts, be a better world for anyone? Rewilding offers a more balanced vision of the future world, and what could be a more powerful inspiration than that?
Why We Must Bring Back the Wolf
by John Terborgh
The New York Review of Books JULY 15, 2010
Rewilding the World: Dispatches from the Conservation Revolution
by Caroline Fraser
Metropolitan, 400 pp., $28.50
IQ tests are conducted to gauge the cognitive ability of an individual. The IQ scale range is the spectrum of intelligence on which the person falls after taking the test.
Throughout history, there have been many attempts to quantify the intellectual prowess of human beings. Like other things we do to categorize people (tall/short, religious/atheist, white collar/blue collar), this metric of intelligence is intended to classify people on the basis of their cognitive abilities.
The most famous tool used today to gauge a person’s intellect is the ‘Intelligence Quotient’ (IQ); the way this method categorizes various levels of intelligence is known as the ‘IQ scale range’.
This range tells us where a person’s cognitive abilities lie in the total possible human intelligence spectrum and their intellectual abilities relative to other individuals. However, this is not a static range, meaning that different kinds of IQ tests have different ranges, and even within a single type of IQ test, the ranges keep being updated.
Why is this the case? Let’s find out!
IQ and our intelligence
Sir Francis Galton was the first person to derive a modern test for intelligence in 1882. He did this by measuring visual acuity and hearing in his lab. His method was further explored by James McKeen Cattell, who studied the accuracy and speed of children’s perception by measuring their academic abilities.
Cattell’s method proved insufficient, and a better test was designed by Alfred Binet in 1905. His test is considered the first modern-day IQ test, which saw him taking a different approach than his predecessors. He gauged the aptitude of children by giving them both knowledge-based and simple reasoning-based questions. Furthermore, he mapped out the questions based on the age of the children and evaluated their intelligence on the basis of their knowledge as compared to others of their age.
For example, if a ten-year-old child could answer the questions that children who are 12 years old would be able to answer, the younger child was considered to have the mental age of a 12-year-old. The child’s intelligence was then assessed by subtracting the physical age from the mental age, which is 12 - 10 = 2, meaning that the child was two years ahead of the average intelligence for his or her age.
This was further iterated by William Stern, who divided the two ages, rather than subtracting them, thus giving us the ‘intelligence quotient’. For the case above, this would be 12/10 = 1.2. Lewis Terman added the final piece to this quotient by multiplying the output by 100, yielding the familiar numbers we are accustomed to today when we talk about IQ; thus, 1.2 becomes (1.2 x 100) = 120.
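The ratio formula described above is simple arithmetic. A minimal sketch:

```python
# Ratio IQ: (mental age / chronological age) * 100.

def ratio_iq(mental_age, chronological_age):
    """Compute the Stern-Terman ratio IQ from the two ages."""
    return (mental_age / chronological_age) * 100

print(ratio_iq(12, 10))  # 120.0, the example from the text
print(ratio_iq(10, 10))  # 100.0, mental age matches physical age
```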
This test was adapted for adults by David Wechsler, who studied the cognitive performance of many adults and the distribution of their scores. The mean score came out to be 100, much like the average for a child of a given age. The test was later adapted to gauge the intelligence of groups of people, an approach first tried by the US Army.
What is the IQ Scale?
IQ is a measure of your intelligence: a gauge of your cognitive abilities relative to the average IQ of the general populace. Your score above or below the average dictates where you fall on the spectrum, and that spectrum of human intelligence is denoted by the IQ scale.
If a person’s mental age matches their physical age, the IQ will always be 100, which is considered the average IQ. If an individual’s mental age is above the physical age (> 100), they have above-average IQ, and if their mental age is below the physical age ( < 100 ), they have below-average IQ.
Although there are multiple IQ tests in use today, they give similar insights about the intelligence of people collectively. When the IQ scores of a sizable group of people are plotted on a graph, it comes out much like a bell curve. This indicates that the majority of people fall into the average bracket of about 86 – 116 IQ. The graph seems to diminish towards both ends with below- and above-average scores on either side.
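That bell-curve claim can be checked directly. Assuming the common deviation-IQ convention of mean 100 and standard deviation 15 (some tests use 16), the share of people scoring within one standard deviation of the mean is:

```python
import math

# Cumulative distribution function of a normal distribution,
# here with the usual deviation-IQ parameters: mean 100, SD 15.
def normal_cdf(x, mean=100.0, sd=15.0):
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

share = normal_cdf(115) - normal_cdf(85)
print(round(share, 3))  # 0.683, roughly two thirds fall in the average band
```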
The Stanford-Binet test and the Wechsler Intelligence Scales are two of the most widely used IQ tests today. Let’s take a look at them and their subsequent IQ scales.
The Stanford-Binet Test

As described earlier, this test was devised to understand the abilities of students and why some students tend to lag behind their peer group. According to the 5th edition of the test, the IQ scale range is as follows:
- IQ score of 176 – 225: Profoundly gifted or Profoundly advanced
- IQ score of 161 – 175: Extremely gifted or Extremely advanced
- IQ score of 145 – 160: Very gifted or Highly advanced
- IQ score of 130 – 144: Gifted or very advanced
- IQ score of 120 – 129: Superior
- IQ score of 110 – 119: High Average
- IQ score of 90 – 109: Average
- IQ score of 80 – 89: Low Average
- IQ score of 70 – 79: Borderline impaired or Delayed
- IQ score of 55 – 69: Mildly impaired or Delayed
- IQ score of 40 – 54: Moderately impaired or Delayed
The Wechsler Intelligence Scales
As noted earlier, David Wechsler made a working IQ test for adults in 1939 through his Wechsler-Bellevue Intelligence Scale. Since then, the scales have been updated three times, as the criteria for gauging intelligence through this method are repeatedly refined.
The test was initially published in 1955 and went through its first revision in 1981. The third version, called the WAIS-III, was released in 1997 and is used to test the intelligence of adults aged 16 and above. The IQ scale range of this version is as follows:
- IQ score of 130 and above: Gifted
- IQ score of 120-129: Very High
- IQ score of 110-119: Bright Normal
- IQ score of 90-109: Average
- IQ score of 85-89: Low Average
- IQ score of 70-84: Borderline Mental Functioning
- IQ score of 50-69: Mild Mental Retardation
- IQ score of 35-49: Moderate Retardation
- IQ score of 20-34: Severe Retardation
- IQ score of below 20: Profound Retardation
The test was revised again and published in 2008 as the WAIS-IV. The IQ scale was updated as well and is as follows:
- IQ score of 130 and above: Very Superior
- IQ score of 120-129: Superior
- IQ score of 110-119: High Average
- IQ score of 90-109: Average
- IQ score of 80-89: Low Average
- IQ score of 70-79: Borderline
- IQ score of 69 and below: Extremely low
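A score can be mapped to these 2008 labels with a simple threshold lookup; the bands below mirror the list above:

```python
# Classify an IQ score using the 2008 revision's labels listed above.

def classify_2008(iq):
    bands = [
        (130, "Very Superior"),
        (120, "Superior"),
        (110, "High Average"),
        (90, "Average"),
        (80, "Low Average"),
        (70, "Borderline"),
    ]
    for cutoff, label in bands:
        if iq >= cutoff:
            return label
    return "Extremely low"

print(classify_2008(104))  # Average
print(classify_2008(67))   # Extremely low
```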
Although IQ tests are a good way to test someone’s intelligence and see where they lie in the spectrum of human intelligence, they are not the ultimate metric of human cognition.
There are many factors these tests don’t include, including a person’s culture, the environment they were brought up in, the level of their physical fitness, and whether they are free of crippling diseases, among many other factors. All these points should also be considered when a person is being categorized.
I would suggest taking an IQ test for yourself. It’s a fun way to see your abilities, and it makes you exercise your logical and visual faculties.
Technology has become part of many aspects of our lives and, as expected, education is no exception. Through constant evolution, the influence of technology on this field has led to the creation of what is known today as educational technology.
Educational technology refers to the incorporation of ICTs, or Information and Communication Technologies, into educational settings to support the teaching and learning process. According to UNESCO, it consists of the systematic way of conceiving, applying, and evaluating the teaching and learning process as a whole, taking into account the technical and human resources and the interactions between them.
Currently, many schools develop educational programs based on technology, due in large measure to its rapid advances and the great benefits it can bring. Among them are the following:
- Access to technology: Despite the existence of new mobile devices and various types of computers, not all people have access to technology. The implementation of educational technology allows all students, regardless of their economic or social status, to come into contact with everything they have to offer, to manage equipment such as computers and to learn about the use of the Internet.
- Greater scope for information: The Internet has opened the possibility of finding a larger amount of information more immediately. In this sense, students should not spend hours searching in encyclopedias to find what they need to know since only one click can locate thousands of websites with useful content for their study.
- Computer-assisted instruction (CAI): Advances in educational technology have allowed the development of CAI software, a type of educational program specifically designed to assist the teaching process. Its functionality is notable because it offers a set of tools adapted to different learning methodologies. These programs use gradable exercises, structured content and dynamic activities aimed at checking whether the student is understanding the class.
In addition, the use of ICTs in classrooms can be beneficial in certain areas, for example…
- A computer is patient. Students feel more confident towards a machine that waits patiently for their answers and is unable to judge them.
- New technologies save time and money in education. Online schools are more economically accessible than a normal school.
- Online content and virtual classrooms promote collaborative learning. Through tutoring and the student-student relationship, each student is aware of the importance of working as a team to achieve an understanding of a topic.
- Educational technologies allow each person to learn at their own pace and according to their own abilities.
However, despite all the advantages that can be obtained from them, educational technologies face many challenges, including:
- Anti-pedagogical use: It may happen that, in a classroom, students do not pay attention to what the teacher is explaining and use the Internet or computers to perform activities not related to the learning process. They can play online, watch videos, read articles on blogs and much more.
- Little understanding of the educational benefits of technology: This point is a bit related to the previous one. Many people see technology as something that is there to entertain and fail to understand how important it is on a professional or educational level.
- Limited economic resources: There are schools that do not have the necessary resources to implement the use of new technologies and develop strategies to take advantage of them.
- Rejection due to unfamiliarity: Some people do not accept technology as part of everyday life, including education, simply because they do not know it and are not familiar with it.
Nonetheless, more and more governments and authorities are paying attention to this sector, which is why campaigns have been created to publicize the advantages of educational technology and, at the same time, to address the challenges that may arise. In this way, such a useful tool can become part of the basis of each human being's intellectual and personal growth.
Introduction Each of the Channel Islands is home to an endemic subspecies of deer mouse that is found nowhere else on earth. In some cases, island deer mice are the only terrestrial mammal occurring on the island. They thrive in the islands' varied habitats, with population densities that are higher than anywhere else in the world. The island deer mouse population is an important part of the ecosystem, as food for predators and as consumers of seeds, including those of non-native plants.
Quick and Cool Facts
Deer mice are widespread across North America; however, each of the five Channel Islands has its own distinct subspecies.
Deer mice populations at Channel Islands National Park have been monitored since 1992.
Island deer mouse population densities are higher, and fluctuate more, than anywhere else in the world.
Deer mice exhibit population dynamics that differ markedly from mainland populations of the same species.
Climate and predation (by island foxes on some islands, and owls on others) are the most important factors determining island deer mouse population fluctuations.
Generally, more rain provides more food for the species, yet abundant rain with cold winters may in fact decrease the deer mouse population.
Appearance Deer mice from the Channel Islands have dark brownish-black tipped hairs on their back, a buffy band on their sides and then grey at the base. In general, island mice are darker than mainland mice. Deer mice are the prototype for "field mice" with large, bulging eyes, big ears, a bi-colored pattern and a long tail.
Range The deer mouse Peromyscus maniculatus is the most widespread North American rodent. It is found throughout southern Canada, the United States, and north and central Mexico, including Baja California. It is most commonly called the deer mouse, although that name is common to most species of Peromyscus. It is absent from the Atlantic and Gulf of Mexico coastal plains of the United States, but its range does extend to the coast in east Texas. However, each of the Channel Islands is home to a different subspecies of deer mouse that is found nowhere else on earth.
Habitat Due to the diverse habitat of the Channel Islands, nesting is found in the natural cover of the landscape. The deer mouse will nest alone on most occasions but will sometimes nest with a deer mouse of the opposite sex. The deer mouse is generally a nocturnal creature.
Feeding Food selection is dependent on both habitat and season. Deer mice feed heavily on larvae from lepidopterans (includes moths and butterflies) and other insects in the spring. They can eat large volumes and are capable of ridding an area of many insects that may be detrimental to trees. In the fall, seeds become a major food source and are stored in caches for use during the winter.
Island studies have shown that deer mice occasionally prey on eggs and nestlings of Scripps's murrelets on Santa Barbara Island. However, the annual reproductive success of murrelets was not related to deer mouse densities. Likewise, research on San Miguel Island showed that as seed predators, deer mice had limited impacts on giant coreopsis populations, especially when compared to the negative competitive effects of non-native annual grasses.
Reproduction Deer mice on the Channel Islands have been found to show definite breeding seasons, with the majority of reproduction occurring during the spring and summer months and generally two litters produced. The normal gestation period ranges from 22-35 days, with an average of 26 days. Mice are considered in their juvenile pelage from birth to 11 weeks, while subadults are 11-21 weeks old.
Park research has shown that island deer mouse population densities are higher than anywhere else in the world. However, population dynamics on different islands vary in response to numerous factors, including predator diversity, vegetation community structure, and climate.
For example, monitoring data show that deer mouse densities on San Miguel Island are strongly limited by the endangered island fox (Urocyon littoralis littoralis), whereas on Santa Barbara Island, where there are no foxes, mouse densities are much more variable. Predation by barn owls on Santa Barbara Island can drive the mouse population to extreme lows. Unlike the generalist island fox, barn owls are more specialized predators and do not switch to other prey species when their primary prey (mice) declines.
In addition, research has revealed that rainfall is a strong driver of deer mouse population dynamics. High winter rainfall encourages plant growth which provides food resources, while drought reduces plant growth and limits mouse productivity. However, abundant winter rain combined with cold temperatures may actually increase winter mortality and reduce the number of mice that survive from fall to spring.
Conservation Status The island deer mouse, Peromyscus maniculatus, is listed by the IUCN Red List of Threatened Species as Least Concern in view of its wide distribution, presumed large population, tolerance of a broad range of habitats, occurrence in a number of protected areas, and because it is unlikely to be in decline.
Deer mice populations at Channel Islands National Park have been monitored since 1992. Nine permanent grids have been established in various habitats on Anacapa, San Miguel and Santa Barbara Islands. These grids are sampled twice each year during the spring and fall seasons and mark/recapture methods are used to determine population densities.
The objective of the monitoring is to identify trends in the deer mouse population, to evaluate the general health of the population using weight, age, sex, and reproductive information, and to increase our understanding of how island ecosystems respond to changing environmental conditions.
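The mark/recapture approach mentioned above can be illustrated with the classic Lincoln-Petersen estimator. This is a simplified sketch with made-up trapping numbers; the park's actual monitoring program may use more sophisticated density models:

```python
def lincoln_petersen(marked, captured, recaptured):
    """Classic mark/recapture population estimate: N ~ (marked * captured) / recaptured.

    marked:     animals marked and released in the first trapping session
    captured:   animals caught in the second session
    recaptured: animals in the second session found to carry a mark
    """
    if recaptured == 0:
        raise ValueError("need at least one recapture to estimate population size")
    return marked * captured / recaptured

# Hypothetical grid numbers: 50 mice marked, 60 caught later, 15 of them marked
print(lincoln_petersen(50, 60, 15))  # 200.0 mice estimated on the grid
```

The intuition is that the fraction of marked animals in the second catch should match the fraction of the whole population that was marked, which is why the estimate degrades when recaptures are rare.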
Atoms are most stable in the ground state. An atom is considered to be in the ground state when every electron in the outermost shell has a complementary electron that spins in the opposite direction. By definition, a free radical is any atom (e.g., oxygen, nitrogen) with at least one unpaired electron in the outermost shell that is capable of independent existence (13). A free radical is easily formed when a covalent bond between entities is broken and one electron remains with each newly formed atom (13).
Free radicals are highly reactive due to the presence of unpaired electron(s). The following literature review addresses only radicals with an oxygen center. Any free radical involving oxygen can be referred to as a reactive oxygen species (ROS). Oxygen-centered free radicals contain two unpaired electrons in the outer shell. When a free radical steals an electron from a surrounding compound or molecule, a new free radical is formed in its place. In turn, the newly formed radical then looks to return to its ground state by stealing electrons with antiparallel spins from cellular structures or molecules. Thus the chain reaction continues and can be "thousands of events long" (7). The electron transport chain (ETC), which is found in the inner mitochondrial membrane, utilizes oxygen to generate energy in the form of adenosine triphosphate (ATP). Oxygen acts as the terminal electron acceptor within the ETC. The literature suggests that anywhere from 2 to 5% (14) of the total oxygen intake during both rest and exercise has the ability to form the highly damaging superoxide radical via electron escape. During exercise, oxygen consumption increases 10 to 20 fold to 35-70 ml/kg/min. In turn, electron escape from the ETC is further enhanced. Thus, when calculated, 0.6 to 3.5 ml/kg/min of the total oxygen intake during exercise has the ability to form free radicals (4). Electrons appear to escape from the ETC at the ubiquinone-cytochrome c level (14).
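The arithmetic behind that estimate can be sketched directly (an illustrative calculation only, using the 2-5% electron-escape figure and the exercise VO2 range cited in the text):

```python
def radical_forming_oxygen(vo2_ml_kg_min, escape_fraction):
    """Portion of oxygen uptake (ml/kg/min) potentially forming superoxide
    via electron escape from the electron transport chain."""
    return vo2_ml_kg_min * escape_fraction

# Exercise VO2 of 35-70 ml/kg/min with 2-5% electron escape (figures from the text)
low = radical_forming_oxygen(35, 0.02)
high = radical_forming_oxygen(70, 0.05)
print(low, high)  # roughly the 0.6-3.5 ml/kg/min range the review cites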
Polyunsaturated fatty acids (PUFAs) are abundant in cellular membranes and in low-density lipoproteins (LDL) (4). The PUFAs allow for fluidity of cellular membranes. A free radical prefers to steal electrons from the lipid membrane of a cell, initiating a free radical attack on the cell known as lipid peroxidation. Reactive oxygen species target the carbon-carbon double bond of polyunsaturated fatty acids. The double bond on the carbon weakens the carbon-hydrogen bond, allowing for easy dissociation of the hydrogen by a free radical. A free radical will steal the single electron from the hydrogen associated with the carbon at the double bond. In turn, this leaves the carbon with an unpaired electron, and hence it becomes a free radical. In an effort to stabilize the carbon-centered free radical, molecular rearrangement occurs. The newly arranged molecule is called a conjugated diene (CD). The CD then very easily reacts with oxygen to form a peroxy radical. The peroxy radical steals an electron from another lipid molecule in a process called propagation. This process then continues in a chain reaction (9).
There are numerous types of free radicals that can be formed within the body. This review is concerned only with oxygen-centered free radicals, or ROS. The most common ROS include the superoxide anion (O2-), the hydroxyl radical (OH·), singlet oxygen (1O2), and hydrogen peroxide (H2O2). Superoxide anions are formed when oxygen (O2) acquires an additional electron, leaving the molecule with only one unpaired electron. Within the mitochondria, O2- is continuously being formed. The rate of formation depends on the amount of oxygen flowing through the mitochondria at any given time. Hydroxyl radicals are short-lived, but they are the most damaging radicals within the body. This type of free radical can be formed from O2- and H2O2 via the Haber-Weiss reaction. The interaction of copper or iron with H2O2 also produces OH·, as first observed by Fenton. These reactions are significant because the substrates are found within the body and could easily interact (9). Hydrogen peroxide is produced in vivo by many reactions. Hydrogen peroxide is unique in that it can be converted to the highly damaging hydroxyl radical or be catalyzed and excreted harmlessly as water. Glutathione peroxidase is essential for the conversion of glutathione to oxidized glutathione, during which H2O2 is converted to water (2). If H2O2 is not converted into water, 1O2 is formed. Singlet oxygen is not a free radical, but it can be formed during radical reactions and can in turn cause further reactions. Singlet oxygen violates Hund's rule of electron filling in that it has eight outer electrons existing in pairs, leaving one orbital of the same energy level empty. When oxygen is energetically excited, one of the electrons can jump to the empty orbital, creating unpaired electrons (13). Singlet oxygen can then transfer its energy to a new molecule and act as a catalyst for free radical formation. The molecule can also interact with other molecules, leading to the formation of a new free radical.
All transition metals, with the exception of copper, contain one electron in their outermost shell and can be considered free radicals. Copper has a full outer shell, but it loses and gains electrons very easily, making it a free radical (9). In addition, iron has the ability to gain and lose electrons (i.e., Fe2+ ↔ Fe3+) very easily. This property makes iron and copper two common catalysts of oxidation reactions. Iron is a major component of red blood cells (RBC). A possible hypothesis is that the stress encountered during exercise may break down RBC, releasing free iron. The release of iron can be detrimental to cellular membranes because of its pro-oxidant effects. Zinc exists in only one valence (Zn2+) and does not catalyze free radical formation. Zinc may actually act to stop radical formation by displacing those metals that have more than one valence.
Free radicals have a very short half-life, which makes them very hard to measure in the laboratory. Multiple methods of measurement are available today, each with its own benefits and limits. Radicals can be measured using electron spin resonance and spin trapping methods. These methods are both very sophisticated and can trap even the shortest-lived free radical. Exogenous compounds with a high affinity for free radicals (i.e., xenobiotics) are utilized in the spin techniques. The compound and radical together form a stable entity that can be easily measured. This indirect approach has been termed "fingerprinting" (12). However, this method is not 100% accurate, and spin-trapping collection techniques have poor sensitivity, which can skew results (1). A commonly used alternate approach measures markers of free radicals rather than the actual radical. These markers of oxidative stress are measured using a variety of different assays, described below. When a fatty acid is peroxidized, it is broken down into aldehydes, which are excreted. Aldehydes such as thiobarbituric acid reacting substances (TBARS) have been widely accepted as a general marker of free radical production (3). The most commonly measured TBARS is malondialdehyde (MDA) (13). The TBA test has been challenged because of its lack of specificity, sensitivity, and reproducibility. The use of liquid chromatography instead of spectrophotometric techniques helps reduce these errors (15). In addition, the test seems to work best when applied to membrane systems such as microsomes (8). Gases such as pentane and ethane are also created as lipid peroxidation occurs. These gases are expired and commonly measured during free radical research (13). Dillard et al. (6) were among the first to determine that expired pentane increased as VO2 max increased. Kanter et al. (11) reported that serum MDA levels correlated closely with blood levels of creatine kinase, an indicator of muscle damage.
Lastly, conjugated dienes (CD) are often measured as indicators of free radical production. Oxidation of unsaturated fatty acids results in the formation of CD. The CD formed are measured and provide a marker of the early stages of lipid peroxidation (9). A newly developed technique for measuring free radical production shows promise in producing more valid results. The technique uses monoclonal antibodies and may prove to be the most accurate measurement of free radicals. However, until further more reliable techniques are established, it is generally accepted that two or more assays be utilized whenever possible to enhance validity (9).
Under normal conditions (at rest), the antioxidant defense system within the body can easily handle free radicals that are produced. During times of increased oxygen flux (i.e. exercise) free radical production may exceed that of removal ultimately resulting in lipid peroxidation. Free radicals have been implicated as playing a role in the etiology of cardiovascular disease, cancer, Alzheimer's disease, and Parkinson's disease. While worthy of a discussion, these conditions are not the focus of the current literature review. This literature review will only examine the current literature addressing the relationship between free radicals and exercise, which is introduced below. The driving force behind these topics is lipid peroxidation. By preventing or controlling lipid peroxidation, the concomitant effects discussed below would be better controlled.
Oxygen consumption greatly increases during exercise, which leads to increased free radical production. The body counters the increase in free radical production through the antioxidant defense system. When free radical production exceeds clearance, oxidative damage occurs. Free radicals formed during chronic exercise may exceed the protective capacity of the antioxidant defense system, thereby making the body more susceptible to disease and injury. Therefore, the need for antioxidant supplementation is discussed.
A free radical attack on a membrane, usually damages a cell to the point that it must be removed by the immune system. If free radical formation and attack are not controlled within the muscle during exercise, a large quantity of muscle could easily be damaged. Damaged muscle could in turn inhibit performance by the induction of fatigue. The role individual antioxidants have in inhibiting this damage has been addressed within the review of the four antioxidants that follows.
One of the first steps in recovery from exercise induced muscle damage is an acute inflammatory response at the site of muscle damage. Free radicals are commonly associated with the inflammatory response and are hypothesized to be greatest twenty-four hours after completion of a strenuous exercise session. If this theory were valid, then antioxidants would play a major role in helping prevent this damage. However, if antioxidant defense systems are inadequate or not elevated during the post-exercise infiltration period, free radicals could further damage muscle beyond that acquired during exercise. This in turn would increase the time needed to recover from an exercise bout.
This section has focused only on the negatives associated with free radical production. However, free radicals are naturally produced by some systems within the body and have beneficial effects that cannot be overlooked. The immune system is the main body system that utilizes free radicals. Foreign invaders or damaged tissue are marked with free radicals by the immune system. This allows for determination of which tissues need to be removed from the body. Because of this, some question the need for antioxidant supplementation, as they believe supplementation can actually decrease the effectiveness of the immune system.
Antioxidant means "against oxidation." Antioxidants work to protect lipids from peroxidation by radicals. Antioxidants are effective because they are willing to give up their own electrons to free radicals. When a free radical gains the electron from an antioxidant, it no longer needs to attack the cell and the chain reaction of oxidation is broken (4). After donating an electron, an antioxidant becomes a free radical by definition. Antioxidants in this state are not harmful because they have the ability to accommodate the change in electrons without becoming reactive. The human body has an elaborate antioxidant defense system. Antioxidants are manufactured within the body and can also be extracted from foods such as fruits, vegetables, seeds, nuts, meats, and oils. There are two lines of antioxidant defense within the cell. The first line, found in the fat-soluble cellular membrane, consists of vitamin E, beta-carotene, and coenzyme Q (10). Of these, vitamin E is considered the most potent chain-breaking antioxidant within the membrane of the cell. Inside the cell, water-soluble antioxidant scavengers are present. These include vitamin C, glutathione peroxidase, superoxide dismutase (SOD), and catalase (4). Only those antioxidants that are commonly supplemented (vitamins A, C, E and the mineral selenium) are addressed in the literature review that follows.
- Acworth, I.N., and B. Bailey. Reactive Oxygen Species. In: The handbook of oxidative metabolism. Massachusetts: ESA Inc., 1997, p. 1-1 to 4-4.
- Alessio, H.M., and E.R. Blasi. Physical activity as a natural antioxidant booster and its effect on a healthy lifestyle. Res. Q. Exerc. Sport. 68 (4): 292-302, 1997. [Abstract]
- Clarkson P. M. Antioxidants and physical performance. Crit.Rev. Food Sci. Nutr. 35: 131-141, 1995. [Abstract]
- Dekkers, J. C., L. J. P. van Doornen, and Han C. G. Kemper. The Role of Antioxidant Vitamins and Enzymes in the Prevention of Exercise-Induced Muscle Damage. Sports Med 21: 213-238, 1996. [Abstract]
- Del Maestro, R.F. An approach to free radicals in medicine and biology. Acta Physiol. Scand. 492: 153-168, 1980.
- Dillard, C.J., R.E. Litov, W.M. Savin, E.E. Dumelin, and A.L. Tappel. Effects of exercise, vitamin E, and ozone on pulmonary function and lipid peroxidation. J. Appl. Physiol. 45: 927, 1978. [Abstract]
- Goldfarb, A. H. Nutritional antioxidants as therapeutic and preventive modalities in exercise-induced muscle damage. Can. J. Appl. Physiol. 24: 249-266, 1999. [Abstract]
- Halliwell, B., and S. Chirico. Lipid peroxidation: Its mechanism, measurement, and significance. Am. J. Clin. Nutr. 57: 715S-725S, 1993. [Abstract]
- Halliwell, B., and J.M.C. Gutteridge. The chemistry of oxygen radicals and other oxygen-derived species. In: Free Radicals in Biology and Medicine. New York: Oxford University Press, 1985, p. 20-64.
- Kaczmarski, M., J. Wojicicki, L. Samochowiee, T. Dutkiewicz, and Z. Sych. The influence of exogenous antioxidants and physical exercise on some parameters associated with production and removal of free radicals. Pharmazie 54: 303-306, 1999. [Abstract]
- Kanter, M.M., G.R. Lesmes, L.A. Kaminsky, J. LaHam-Saeger, and N.D. Nequin. Serum creatine kinase and lactate dehydrogenase changes following an eighty-kilometer race. Eur. J. Appl. Physiol. 57: 60-65, 1988. [Abstract]
- Karlsson J. Exercise, muscle metabolism and the antioxidant defense. World Rev Nutr Diet. 82:81-100, 1997. [Abstract]
- Karlsson, J. Introduction to Nutraology and Radical Formation. In: Antioxidants and Exercise. Illinois: Human Kinetics Press, 1997, p. 1-143.
- Sjodin, T., Y.H. Westing, and F.S. Apple. Biochemical mechanisms for oxygen free radical formation during exercise. Sports Med. 10: 236-254, 1990. [Abstract]
- Wong, S.H.Y., J.A. Knight, S.M. Hopfer, O. Zaharia, C.N. Leach, and F.W. Sunderman. Lipoperoxides in plasma as measured by liquid-chromatographic separation of malondialdehyde-thiobarbituric acid adduct. Clin. Chem. 33(2): 214-220, 1987. [Abstract]
Dr. David Rottinghaus, Chief Medical Officer/Vice President of Medical Affairs at Butler Health System, prepared the attached COVID-19 School FAQs sheet to give some insight.
What is the difference between Influenza (Flu) and COVID-19? COVID-19 has no established immunity in humans since it is new to the human race. Therefore, everyone can get it. Influenza (Flu) and COVID-19 are both contagious respiratory illnesses, but they are caused by different viruses. COVID-19 is caused by infection with a new coronavirus (called SARS-CoV-2) and flu is caused by infection with influenza viruses. Because some of the symptoms of flu and COVID-19 are similar, it may be hard to tell the difference between them based on symptoms alone, and testing may be needed to help confirm a diagnosis. Flu and COVID-19 share many characteristics, but there are some key differences between the two. While more is learned every day, there is still a lot that is unknown about COVID-19 and the virus that causes it.
For additional information: https://www.cdc.gov/flu/symptoms/flu-vs-covid19.htm
Why is Self-Monitoring Important? This will be vital to containing the spread of COVID-19. Reporting symptoms is important, and reporting positive results is equally important. There needs to be openness to reporting, with this information treated in a non-judgmental fashion. Complicating self-monitoring is the belief that up to 40% of people who get the virus may not have symptoms yet can still spread it.
When Should Students/Staff Stay Home? When they feel sick or have close contact with an individual that is ill or tests positive.
Clarification of quarantine needs:
• If tested positive, who stays home? The person that tested positive and any close contacts.
• Who returns and when? Per guidelines, a person testing positive may return after 10 days of quarantine, provided they have had no symptoms for at least 24 hours.
• If tested negative, when can one return? If they had a close contact, guidelines are to remain at home for a total of 14 days despite a negative test.
• If related to someone who has tested positive for COVID-19, what are recommended quarantine guidelines/restrictions? Same as a close contact - 14 days.
There are some exceptions: Essential workers can return if they follow the proper precautions for masking, distancing, hand hygiene and they are without symptoms. If symptoms develop, they must quarantine immediately.
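The 10- and 14-day windows described above amount to simple date arithmetic. A minimal sketch, using a hypothetical exposure date for illustration only (not medical guidance):

```python
from datetime import date, timedelta

def quarantine_end(start_date, days):
    """Last day of the stay-at-home period, counting from exposure or a positive test."""
    return start_date + timedelta(days=days)

exposure = date(2020, 9, 1)          # hypothetical date of exposure / positive test
print(quarantine_end(exposure, 14))  # 2020-09-15 for a close contact
print(quarantine_end(exposure, 10))  # 2020-09-11 for a positive case (if symptom-free)
```

A school tracking sheet could apply the same calculation per student, using 10 days for positive cases and 14 days for close contacts as the guidelines specify.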
Will the whole classroom have to quarantine if a student tests positive or a member of a student’s immediate family tests positive? Follow the close contact guidelines. See below:
At what point do you recommend the entire school quarantine? This will probably be evident when a large number of staff/students need to quarantine and attendance is so low that remaining open is not justified. Any options to remotely learn/attend class should be considered.
If there is a positive case in one of my children’s classrooms, will all of my other children need to be at home, or just that one child? If the child in the classroom is a close contact, they will need to stay home. The other children would be secondary contacts by definition and would not have to stay home.
Definition of Close Contact and Secondary Contact (for both school and home situations): Close contact is a person that spends 15 minutes or more within 6 feet of another person with COVID-19. A secondary contact would be a friend or family member of that close contact. In general, guidance for secondary contacts is to monitor for symptoms, but not self-quarantine.
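The close-contact definition reduces to two conditions, which can be sketched as follows (for illustration only, not public-health guidance; the thresholds are the 15 minutes and 6 feet stated above):

```python
def is_close_contact(minutes_together, distance_feet):
    """Close contact per the definition above: 15+ minutes within 6 feet."""
    return minutes_together >= 15 and distance_feet <= 6

print(is_close_contact(20, 4))   # True  - 20 minutes at 4 feet
print(is_close_contact(20, 10))  # False - never within 6 feet
```

In practice the 15 minutes would be cumulative over a day, and real contact tracing involves judgment beyond two numbers; the sketch only captures the headline rule.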
If traveling to Ohio or similar nearby state to visit a parent on a regular basis (divorced family type of situation with one parent being out of state, and student goes there every other weekend or once a month), does my child need to quarantine before returning to school each time? There are hot-spots and areas with higher community spread and high rates of positive cases. This should be considered on a case by case basis. What people do and how they behave both at home and away determines their risk. For example, frequenting restaurants and gathering with large groups of people anywhere is high-risk behavior.
If families go on vacation during the school year, do you recommend quarantine? If so, how long? See chart above; it depends on the location and situation.
If my child has just a sinus or ear infection, when can they return to school? This is going to prove very difficult as the year progresses and other seasonal viruses and illnesses begin to circulate. Assessment by a pediatrician or family physician will be valuable. Hopefully testing for COVID and other viruses will be more widely available in the fall and winter.
Are there longer recommended stay-at-home times for children that have illnesses that are not COVID than there may have been in previous years? If the illness can be determined to not be COVID and, instead, another diagnosis, then the stay at home times will be the same as those illnesses were in the past, such as seasonal influenza.
If all students and staff are required to mask but someone has a medical exemption, does this violate HIPAA regulations or protections? Keeping the diagnosis or reason for not being able to wear a mask confidential should protect any school against HIPAA violations. Very few, if any, individuals cannot wear a mask. An alternative to masks is the face shield. Face shields do not restrict breathing and are highly protective if the shield comes down over the person's mouth.
What information are we allowed and/or required to release to families and staff about persons who are exempted from masks? HIPAA would not allow you to release the specific reason why a mask cannot be worn. A general statement saying that those individuals must wear a face shield is probably good. Face shields may be a very attractive alternative to masks for staff and students.
Making Education More Interactive With Game Based Learning
In this age of advanced technology, children are constantly exposed to digital media for various purposes. Nowadays, children spend long hours playing games on smartphones, computers, tablets, or gaming consoles such as Xbox and PlayStation, which has rewired their thinking processes and attention spans. When children attend school, they find it difficult to respond to the traditional, instructor-led environment. Since these kids are so accustomed to interactive media, traditional teaching methods do not reap the desired results. Owing to this situation, it is only wise for educators to implement game-based learning to facilitate learning much more effectively.
So how can game-based learning help make education more interactive?
1. It is highly engaging
When a child is playing a game, his or her whole concentration is on the task at hand. When a game strikes the right balance between subject matter and gaming elements, the child is completely immersed in learning while playing. Most games have levels to clear, reward points to win, or a way to play against other virtual players, all of which automatically motivate the child to excel. A title by Abrams Learning, developed by Magic, is a great example of such learning.
2. It helps children make mistakes – and learn from those mistakes
In a classroom setting, a child may be scared to ask questions for fear of ridicule or rejection from the teacher or other students, which can allow doubts about a subject to accumulate in the child's mind. With game-based learning, the child is free from the kind of negative feedback that could dent his or her confidence, paving the way for learning by trial and error. Another product we worked on allows for such feedback: it delivers an entire curriculum online, tracking growth and learning along the way.
3. The learning pace is tailored to individual students
Every individual is unique, and children process and apply new information at different speeds. In a traditional setting, a child can be left behind if he or she does not learn fast enough. When the same child is playing a game, he or she can track his or her own performance and comfortably learn at his or her own pace.
4. Easy transfer of knowledge
Game-based learning uses relevant, comfortable, realistic situations to help students transfer knowledge – for example, learning about biology in a garden while interacting with characters who feel like their friends. This makes the learning process more interactive and relatable for students. The DLM content built for McGraw-Hill is an example of how merging learning styles can lead to more engagement.
Well-designed game-based learning programs ensure that education is more fun, challenging, and interactive for students, and they can deliver significant learning returns. The ultimate goal is to create an environment where learning is embraced rather than viewed as a burden.
A substance in young mice's tears makes female mice more likely to reject male sexual advances. This research is part of ongoing efforts to understand how animals communicate using chemicals called pheromones.
Direct connections between human and mouse behavior cannot be made because pheromones are highly species specific.
"If humans can detect anything in tears, we won't use the same pheromone signal or receptor as mice. But we are investigating if species share the basic neurocircuitry of how the brain processes an olfactory signal to affect behavior," said the leader of the research project.
Researchers hope to use the tear pheromone as a natural mouse birth control to reduce mouse populations in the future.
"It is unlikely that other animals would be affected because pheromones are so species specific. The sex-rejecting behavior is an innate instinct, so it's also unlikely that the mice will learn to change their behavior or ignore the artificial pheromone," said the author.
Only juvenile mice aged one to three weeks produce the pheromone, called exocrine gland-secreting peptide 22 (ESP22). ESP22 is not airborne and lacks a noticeable odor, but the pheromone spreads around the territory as mothers and young mice wipe tears while grooming.
Both mothers and virgin female mice reject male sexual advances after exposure to ESP22. Less female interest in sex would theoretically benefit juvenile mice by reducing the number of younger siblings competing for resources. ESP22 activates a dedicated vomeronasal receptor, V2Rp4, and V2Rp4 knockout eliminates ESP22 effects on sexual behavior.
"ESP22 is difficult to artificially synthesize, so we want to find a smaller portion of the pheromone molecule that could be added to mouse drinking water. This could prevent mice breeding in areas where they are pests," said the author.
ESP1 is an adult male pheromone that the research group has previously studied for its role in enhancing female acceptance of sex. In this new study, the researchers tracked how ESP22 and ESP1 are received and processed by the adult female mouse brain.
Pheromone signals from young mice overrode the signals from adult males. Virgin female mice rejected male advances when they were exposed to the sex-rejecting ESP22 even after being exposed to the sex-accepting ESP1. V2Rp4 counteracts a highly related vomeronasal receptor, V2Rp5, that detects the male sex pheromone ESP1.
Both sex-rejecting ESP22 and sex-accepting ESP1 pheromones are recognized by single, dedicated receptors in the nose. Specific neurons send the different pheromone signals to the brain. The presence of similar but specific ESP1 and ESP22 receptors helps reveal how animals evolved the ability to detect and interpret pheromone signals. Interestingly, V2Rp4 and V2Rp5 are encoded by adjacent genes, yet couple to distinct circuits and mediate opposing effects on female sexual behavior.
"The discovery of only one receptor for each pheromone shows us that single molecules can drastically affect animal behavior," said the senior author.
Pheromone signals are routed to the medial amygdala, a small group of neurons in the brain.
"The medial amygdala is like a hub to receive and reroute pheromone signals," said a co-first author of the research paper.
ESP22 and ESP1 signals travel separately but in parallel until reaching the medial amygdala. After that point, the pheromones affect different neurocircuitry in the brain to create different behaviors. Ongoing research in the laboratory will explore pheromone-related neurocircuitry beyond the medial amygdala hub.
Baby's tears and mom's libido
There are over 170 species of mosquitoes in North America alone. They belong to the same insect order as flies, although flies don’t bite.
Mosquitoes range from three to nine millimeters in length. Like flies, they have a single pair of wings. They have narrow, oval bodies and are pale brown with whitish stripes across the abdomen, six legs, and a proboscis.
Mosquitos live most often in moist soil or stagnant water sources, such as storm drains, old tires, kiddy pools, and birdbaths. They are typically an outdoor problem and are normally not found indoors. When trapped inside, they congregate in dark, hidden corners of the house and will come out at night.
Male mosquitoes seek out females using their feathery antennae. After mating, females seek out a meal to aid in egg production. They typically lay their eggs in standing pools of water, though birdbaths, buckets, and mud puddles will do in a pinch. Females can lay as many as 100 eggs at a time. The wormlike larvae, called wrigglers, eat mostly aquatic organisms but have been known to eat other mosquito larvae, feeding until they are ready to pupate. The pupae are called tumblers. After the adults emerge, they leave the water and their exoskeletons harden.
Photo Credit Wikipedia
It’s often said that mosquitoes bite, but strictly speaking they can’t: they use their proboscis to pierce the skin and suck blood, and only the females do it. Male mosquitoes feed entirely on plant nectar. Through their feeding, females spread diseases including Zika, West Nile virus, malaria, dengue fever, and several types of encephalitis.
Fleming Lawn and Pest Services’ mosquito control is designed to target all living and breeding areas on your property. We use the latest and most effective equipment to deliver maximum results.
Mosquitoes are the deadliest animals on the planet, killing at least 725,000 people every year by passing diseases through their bites, according to the Barcelona Institute for Public Health. The US Department of Defense is interested in a nonpesticide mosquito deterrent for its personnel, and a team of researchers from Brown University has possibly found one.

At the ACS national meeting in San Diego last week, Cintia J. Castilho reported that a 1 µm thick layer of graphene oxide (GO) physically and chemically deters Aedes aegypti—the mosquito that carries yellow fever. The researchers expected that mosquitoes could not physically bite a person through the dry GO layer, Castilho said. However, they were surprised that the mosquitoes didn’t land as much on dry GO patches placed on people’s arms. This finding suggests that the thin layer is impermeable to chemical attractants released by people, such as CO2, while still remaining breathable by letting water pass through.

Mosquitoes can’t physically bite through the dry GO layer, but its impenetrability breaks down as it gets wet. When the researchers reduce the GO, it remains bite-proof but is no longer breathable. The researchers think that a future version of GO could be incorporated into light fabrics to make mosquito-proof clothing without the need for pesticides such as DEET or permethrin.
Environmental Hazards and the El Niño climatic perturbation
How does an apparently local event in the Pacific affect the short-term climate and weather in other parts of the world? Julia Maxted examines the links between the 2015/16 El Niño and a variety of recent environmental hazard events across large areas of the globe.
El Niño is the term used for the period when sea surface temperatures are above normal off the South American coast along the equatorial Pacific. Every two to seven years, an unusually warm pool of water – sometimes two to three degrees Celsius higher than normal – develops across the eastern tropical Pacific Ocean to create a natural short-term climate change event. This warming of the ocean, known as El Niño, not only affects the local aquatic environment but also causes global disruption in the general circulation of the Pacific Ocean and atmosphere. In turn, this spurs extreme weather patterns around the world, from flooding in the Americas to droughts in Australia and Southern Africa. Particularly strong El Niño periods are also now being investigated in connection with recent events further away from the tropics, such as the flooding in Northern England and parts of Scotland in the winter of 2015.
The meteorologist Tom K. Priddy has commented that Peruvian fishermen have long witnessed periodic changes in the location of fish species, usually towards December (Scientific American, 20.10.97). The weather pattern emerges in the mid-Pacific, but its effects are felt across large areas of the globe. Normally it is warmer in the western Pacific and cooler in the east next to South America, and this pulls in air from the east, the so-called ‘trade winds’, which return in the upper atmosphere. The trade winds drive ocean surface currents toward the west along the equatorial Pacific. “Cold water, upwelling from deep ocean currents, provides nutrient-rich food for anchovy, the fishermen’s preferred catch.”
El Niño occurs when the trade winds along the equatorial Pacific become reduced or calm for many weeks. The upwelling of cold, deep ocean water then slows or stops, allowing sea surface temperatures to rise well above normal in the east and central Pacific. The warm water drives the fish into deeper waters or farther from the usual fishing locations. This happens every two to seven years and, as Prof. Adam Scaife of the UK Meteorological Office reports, can be identified in climate records going back to the late 19th century (BBC Inside Science, 7.1.16).
El Niño is Spanish for ‘the boy’, referring to the Christ child, because El Niño peaks around Christmas. The event peaks in mid-winter and then declines over the next six months back to zero; it takes several months for the ocean to release all the heat, so the impacts of an El Niño continue both in the tropics and elsewhere. While we have become quite good at predicting when an event will emerge in the mid-Pacific, it is harder to work out how strong it will be (as measured by the rise in ocean temperatures). In May 2015, Prof. Adam Scaife of the Meteorological Office predicted that we were heading for a strong El Niño event, and data reveal that the 2015/16 event is likely to have equalled the strongest since records began in the 1880s, that of 1997-8. Sea surface temperature anomalies can be of the order of 2 to 3 degrees Celsius above normal in many parts of the equatorial Pacific in a really big event, and the huge movement of warm water from the Philippines end of the Pacific Ocean across the middle and into the South American coast can raise the coastal sea level there by 40 centimetres.
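Anomalies like these are tracked operationally as running means; NOAA's Oceanic Niño Index, for instance, classifies El Niño conditions when a three-month running mean of sea surface temperature departures in the central equatorial Pacific reaches +0.5 °C. A minimal sketch of that bookkeeping (the monthly anomaly values below are invented for illustration):

```python
# Sketch of an ONI-style index: a three-month running mean of monthly sea
# surface temperature anomalies, classified against the conventional
# +/-0.5 degC thresholds. The anomaly values below are invented.

def running_mean(values, window=3):
    """Three-month running mean, as used for NOAA's Oceanic Nino Index."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def classify(index_value, threshold=0.5):
    """+0.5 degC and above is conventionally El Nino; -0.5 and below La Nina."""
    if index_value >= threshold:
        return "El Nino"
    if index_value <= -threshold:
        return "La Nina"
    return "neutral"

monthly_anomalies = [0.2, 0.6, 1.1, 1.8, 2.3, 2.5]  # hypothetical degC departures
oni = running_mean(monthly_anomalies)
print([round(v, 2) for v in oni])      # smoothed index values
print([classify(v) for v in oni])
```

With these hypothetical values every smoothed index value clears the +0.5 °C threshold, and the strongest events, as noted above, reach anomalies of 2 to 3 °C.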
Very warm waters in the equatorial Pacific pump more moisture into the air, causing an increase in showers, thunderstorms and tropical storms over a much larger area. The area affected can be so large and deep in the atmosphere that major upper air wind currents are affected. Since major wind currents steer the weather systems in the middle latitudes as well as the tropics, typical storm paths are shifted. Because of the shift of the rainfall pattern in the tropics, there was a very weak Indian monsoon and poor rainfall over the Ethiopian Highlands in 2015, reducing the flow of the Nile and increasing the possibility of drought. Places such as Australia, Indonesia, Brazil, India and Southern Africa can experience drought conditions because moisture-bearing storms are shifted away from these areas. “The risk of coral bleaching increases and populations of marine plants in the eastern tropical Pacific (and the animals that depend on them) sometimes crash”(NOAA: www.climate.gov/enso, accessed 14.4.16).
Boys play along the banks of the Zambezi river in Mozambique. The country has been hit by flooding in the north and drought in the south. Source: John Wessels/AFP/Getty Images
We often see an intensification of forest fires during El Niño – and there have been examples of these during the current event in the Philippines and Indonesia. Likewise, Argentina, South China, Brazil and Japan can receive an increase in moisture-bearing storms that cause long periods of heavy rains and flooding. The 2015/16 El Niño system generated intense flooding across South America, with Paraguay, Argentina, Uruguay and Brazil experiencing the worst flooding in 50 years, resulting in the evacuation of more than 150,000 people. Additionally, there is a decrease of tropical storms (hurricanes) in the Gulf of Mexico and Western Atlantic and an increase of tropical storms in the Pacific.
El Niño global climatic patterns. Source: http://www.climate.gov/enso
California is currently in the midst of a serious multi-year drought, and the state’s reservoirs are dramatically below the historical average. El Niño could offer some respite because it brings rain, but a flip from drought to floods and mudslides is a concern. Parched earth and areas that have experienced wildfires and burns are likely to be unstable or unable to absorb heavy rainfall. The bridges over the heavily channelized Los Angeles River provide shelter for many homeless people, and the channel can suddenly fill with very fast-moving water in a rainstorm (BBC Radio 4, Inside Science, 7.1.16).
Scientists are now increasingly turning to the question of whether it is possible to correlate El Niño events with what happens in Europe. The traditional view is that Europe is simply too remote for El Niño to have an effect. However, in the last five to ten years, by taking a more careful look and using more sophisticated computer models, this view has been overturned: scientists now know that there is a significant impact on Europe, in winter in particular. As Tim Stockdale of the European Centre for Medium-Range Weather Forecasting reported in the BBC Radio 4 Inside Science special on El Niño (7.1.16), the very strong heating over the pool of warm water in a strong El Niño event pushes atmospheric flow upwards into the stratosphere. This is carried around the globe by the normal circulation over the Atlantic in the Northern Hemisphere, then propagates down again and disrupts weather patterns. Though the dynamics are as yet poorly understood, the net effect is to disturb the jet stream. In the winter of 2015 the jet stream was pushed so that instead of coming down from the Arctic bringing cold weather with it, it brought up warm air from the subtropical Atlantic over an extended period, which is why December 2015 was a record month both for warmth and for rain.
It is not just where this moisture is going but also how much moisture there is. Atmospheric rivers combine strong winds and high amounts of moisture in ribbons that are typically 100-200 kilometres wide and thousands of kilometres long, and their impacts are often localised and prolonged. These are weather features not particularly associated with El Niño, but they are becoming more frequent, particularly in winter, because of global warming. The extreme amounts of precipitation that fell in Cumbria in December 2015 are related to this unusual weather pattern. Because the global climate is now warmer, there is more moisture in the air to fuel a greater intensity of rainfall. Atmospheric moisture is rising at the rate of 1% a decade, essentially making heavy rainfall events (which are part of weather) more severe.
Did Framers Fear Direct Democracy?
Many of the framers of the U.S. Constitution feared direct democracy. Shortly after the Revolution, political power was largely invested in a wealthy, landowning elite who saw direct democracy in the hands of the masses as dangerous. They understood that power could not simply rest in the hands of the majority. So the writers of the Constitution formed a republic where powers were separated and the minority possessed significant influence.
1 Direct Democracy
In a direct democracy, citizens vote for policies and laws directly, not through elected representatives. In such a system, the majority wins. However, most citizens are not legal experts that can competently draft legislation. Thus, a much more common form of democracy is a representative democracy in which citizens elect individuals to represent their interests and make decisions for them. The framers of the Constitution understood that simple majority rule could deprive the minority of important rights, and for this reason they did not form a direct democracy but rather a representative democracy, or a republic.
2 The Republic
A key component of a republic is that the head of the government is popularly elected, not a king or queen. Lawmakers and other officials are also elected by the citizens to represent their interests in government. The framers felt the government needed to be tolerant and protect the rights of minority groups, which, they believed, would be difficult under a direct democracy. Thus, the framers created a federal government in which powers were separated between the various branches of government and between the states to give minority factions opportunities for representation. The framers also established a Supreme Court to ensure that new laws are legal and protect citizen rights.
3 Madison and Democracy
James Madison is largely viewed as the "Father of the Constitution." In his "Federalist Papers," Madison made a strong case for a republic and the separation of powers. He ardently advocated for political power for the minority, which could not happen under a direct democracy. In Madison’s "Federalist No. 10," he promoted a representative democracy in which common citizens would elect statesmen to represent their interests. Madison likewise believed in federalism, or the separation of powers, so that factions, or minority interests, would have multiple avenues to power. He saw factions as protection against tyranny and encouraged a form of government that would encourage and bolster factions.
4 Franklin's Republic
Another important contributor to the Constitution, Benjamin Franklin, believed wholeheartedly in the republic he helped create. Prior to the Constitution, Franklin had helped draft Pennsylvania’s constitution in 1776, an exercise that influenced him during the 1787 Constitutional Convention in Philadelphia, where he functioned as an elder advisor. Franklin is reputed to have told a woman who asked what sort of government the convention had created, “A republic, Madam, if you can keep it.” Given the increased pluralism of American society, it is a testament to the founders that the republic created by the Constitutional Convention, rather than a direct democracy, has survived.
- 1 Internet Encyclopedia of Democracy: American Enlightenment Thought
- 2 U.S. Department of State's Bureau of International Information Programs: Defining Democracy
- 3 Harvard Political Review: The Dangers of Direct Democracy
- 4 Cornell University Law School: Federalism
- 5 Southeast Missouri State University: James Madison's Federalist No. 10 and the American Political System
- 6 Springfield Technical Community College Shays' Rebellion, From Revolution to Constitution: Benjamin Franklin
- 7 National Constitution Center: Perspectives on the Constitution: A Republic, If You Can Keep It
Dr. Neagoy combines her love for the arts and the sciences in the creation of innovative mathematics videos for teachers and students—about 60 in all, including Discovering Algebra with Graphing Calculators for Discovery Education and Mathematics: What’s the Big Idea? for the Annenberg Channel (now available on Annenberg Learner).
DISCOVERY EDUCATION: Ten 30-minute Videos on Discovering Algebra (Grades 7-10)
Discovering Algebra with Graphing Calculators
- Examining Probability and Random Number Generation
- Representing and Analyzing Data
- Graphing a Line
- Finding the Slope of a Line
- Exploring Quadratics
- Solving Systems of Equations
- Investigating Inequalities
- Understanding Exponential Functions
- Investigating Logarithmic Functions
- Plotting the Curve of Best Fit
ANNENBERG MEDIA: Eight 90-minute videos on the Big Ideas of Mathematics (Grades K-8)
Mathematics: What’s the Big Idea?
- Workshop 1: Patterns and Functions: What Comes Next? (90 min.) Mathematics is about patterns waiting to be found. This workshop demonstrates how students’ explorations of patterns can grow richer and more complex as they move through the grades.
- Workshop 2: Data: Posing Questions and Finding Answers (90 min.) From the earliest grades, students learn to connect situations, data, and graphs. This workshop shows data displays that can be developed through the grades.
- Workshop 3: Geometry: Castles and Shadows (90 min.) Shadows give two-dimensional representation to three-dimensional objects. Teachers discuss the intriguing relationship between two- and three-dimensional objects that are at the heart of geometry.
- Workshop 4: More Geometry: Quilts and Palaces (90 min.) Geometry appears in works of art, architectural wonders, and physical structures. This workshop explores geometrical figures, transformation, and connections to art and science.
- Workshop 5: Whole Numbers: Memory and Discovery (90 min.) What does it take to develop fluency with whole number calculations? This workshop compares algorithms and explores mental math strategies.
- Workshop 6: Ratio and Proportion: When Is a Third More Than a Half? (90 min.) Helps identify students’ misconceptions about fractions that hinder their understanding of later concepts. Program participants work with rational numbers and activities dealing with ratios, proportions, and equivalent fractions.
- Workshop 7: Algebra: It Begins in Kindergarten (90 min.) This workshop traces the fundamental concepts of algebra that students can develop through the grades.
- Workshop 8: The Future of Mathematics: Ferns and Galaxies (90 min.) The advent of new technologies allows for amazing mathematics that could not exist without computers. This program looks to future directions for mathematics in the 21st century.
This species was first identified during the 1970s, but it was not described until 1982 (4); it is believed that it evolved following hybridisation between E. helleborine and possibly E. leptochila var. dunensis (1). In South Wales a colony of plants that are identical in appearance to Young's helleborine has been discovered, but it is thought that they may have evolved from hybridisation between E. helleborine 'neerlandica' and a type of E. phyllanthes (1).
Originally, this species was found on clay soils in an oak wood, as well as on slightly acidic soils polluted with zinc and lead (4). More recently it has been found in Scotland growing on steep spoil heaps where deciduous trees are growing (4). It is typically found growing underneath regenerating trees, particularly birch, amongst light, patchy vegetation (1).
The main threats to this species are the destruction of spoil heaps, and the extraction of material from them. A further threat is the neglect of wooded areas where this orchid is found, leading to a dense closed canopy that creates too much shade for the orchid to thrive (3). Two colonies have been lost as a result of woodland clearance (4). It is thought that this species may be under-recorded (4).
Young's helleborine is of exceptional interest and requires further genetic research, as it appears to be a complex of hybrids that prosper in man-made habitats. The conservation of the few remaining sites is therefore of utmost importance (1). This orchid is a UK Biodiversity Action Plan priority species, and is listed under Plantlife's 'Back from the Brink' campaign (5).
A group of organisms living together, individuals in the group are not physiologically connected and may not be related, such as a colony of birds. Another meaning refers to organisms, such as bryozoans, which are composed of numerous genetically identical modules (also referred to as zooids or 'individuals'), which are produced by budding and remain physiologically connected.
A plant that sheds its leaves at the end of the growing season.
A species or taxonomic group that is only found in one particular country or geographic area.
Cross-breeding with a different species.
A floral leaf (collectively comprising the calyx of the flower) that forms the protective outer layer of a flower bud. (See <link>http://www.rbgkew.org.uk/ksheets/pdfs/flower.pdf</link> for a fact sheet on flower structure).
Chondrosarcoma is a type of cancer that resembles the cartilage that coats the ends of bones and forms joints.
- Chondrosarcoma occurs primarily in adults, is rarely encountered during the adolescent years, and almost never affects young children.
- Chondrosarcoma most commonly occurs in cartilage found in the femur, humerus, shoulder, ribs and pelvis.
- It can occur inside the bone or on the surface of the bone.
- It can be a rapidly growing invasive tumor or it can develop slowly, causing less severe symptoms and sometimes never spreading.
Cancer research at Boston Children's Hospital
Dana-Farber/Children's Hospital Cancer Care researchers are conducting numerous research studies that will help clinicians better understand and treat all kinds of tumors. Some types of treatment currently being studied include:
- Angiogenesis inhibitors - substances that may be able to prevent the growth of tumors by blocking the formation of new blood vessels that feed the tumors
- Biological therapies - a wide range of substances that may use the body's own immune system to fight cancer or lessen harmful side effects of some treatments
Climate scientists have already warned that carbon released by thawing permafrost will only accelerate the pace of climate change. A newly-published paper has added weight to those warnings, while another introduces a new concern: the release of large quantities of toxic mercury.
In a paper published in the academic journal Environmental Research Letters, Joshua Dean, postdoctoral researcher at Vrije Universiteit Amsterdam, and several colleagues examine the quantity of “old carbon” – carbon pre-dating the industrial age – in the headwaters of the western Canadian Arctic. This may be evidence of the destabilization of old carbon and the phenomenon scientists call ‘permafrost carbon feedback,’ whereby a warming climate releases stored carbon from permafrost, which in turn accelerates the pace of climate warming.
There’s a great deal of carbon stored in permafrost, perhaps the equivalent of half the carbon emitted by fossil fuels since the dawn of the Industrial Revolution. Previous studies had been hampered by their inability to distinguish ‘old’ from ‘new’ carbon.
This paper finds that 30% to 40% of carbon in headwaters predates the industrial era. The evidence does not yet show that permafrost carbon feedback is underway. However, the findings in this paper can serve as a baseline. Subsequent studies will be needed to assess whether permafrost carbon feedback is becoming an issue.
More conclusive are the findings in a paper published in Geophysical Research Letters, authored by Paul F. Schuster of the U.S. Geological Survey and several colleagues. According to that paper, permafrost soils in the Northern Hemisphere hold nearly twice as much mercury as all other soils, the ocean and the atmosphere combined.
The paper estimates that between 695,000 tonnes and 2.6 million tonnes of natural mercury are found in Northern Hemisphere permafrost. Roughly half of that tonnage is frozen in permafrost. Climate science estimates a reduction in the area of Northern Hemisphere permafrost by 2100 of anywhere from 30% to 99%, raising the possibility of a large release of mercury into the atmosphere, “with unknown consequences to the environment.”
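A back-of-envelope combination of the paper's figures gives a sense of scale; this is purely illustrative arithmetic (it assumes, unrealistically, that exposed mercury scales linearly with lost permafrost area), not a projection from the study:

```python
# Illustrative arithmetic only, combining the figures quoted above:
# 695,000-2,600,000 tonnes of natural mercury in Northern Hemisphere
# permafrost regions, roughly half of it frozen in permafrost, and a
# projected 30-99% reduction in permafrost area by 2100.

low_total, high_total = 695_000, 2_600_000   # tonnes, range from the paper
frozen_fraction = 0.5                        # "roughly half" is frozen
thaw_low, thaw_high = 0.30, 0.99             # projected permafrost loss range

# Crude assumption: exposed mercury scales with the fraction of permafrost lost.
low_exposed = low_total * frozen_fraction * thaw_low
high_exposed = high_total * frozen_fraction * thaw_high

print(f"{low_exposed:,.0f} to {high_exposed:,.0f} tonnes potentially exposed to thaw")
```

Even the low end of this crude range, on the order of a hundred thousand tonnes, dwarfs annual anthropogenic mercury emissions, which is why the authors flag the consequences as unknown.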
A bundle of laser beams creates five artificial stars in the night sky above Mount Hopkins in Southern Arizona. Laser light reflected by air molecules is analyzed by a computer that drives the actuators on the adaptive mirror.
Credit: Thomas Stalcup.
Scientists have successfully tested a new type of laser-corrected vision for telescopes that takes the widest starry-sky views ever seen from the ground while eliminating blur caused by the atmosphere.
Now astronomers can see an entire star cluster or many distant galaxies within the same field of view. That allows for more efficient use of expensive telescopes and observing time to tackle challenges such as examining thousands of early, distant galaxies.
"You need to look at large patches of sky within one shot, and you need to do it at high resolution," said study leader Michael Hart, an astronomer at the University of Arizona in Tucson.
Sharp wide views of space
The method developed by Hart's group cancels out atmospheric turbulence across a telescope view about one-fifteenth the diameter of a full moon. Its success will likely spread to the new class of 98-foot (30-meter) telescope giants such as the Giant Magellan Telescope planned for development in Chile.
Such work represents a major update of adaptive optics technology that has been around for decades. Ground telescopes use adaptive optics to adjust for the ever-changing blurry effect that comes from peering at space through Earth's atmosphere, but can erase the blurriness only in a tiny view of the sky.
In adaptive optics, computers analyze the light from a natural or artificial guide star as a baseline to figure out the blurriness. Hundreds of actuators can then warp the surface of the telescope mirrors thousands of times per second to cancel out the blurry effect.
The new ground-layer adaptive optics system used five lasers mounted on the 21-foot (6.5-meter) MMT telescope at Mount Hopkins in Arizona. Past systems on other telescopes have used just one laser to create a single artificial guide star.
Each laser points in a different direction so that they end up spread out in a pentagon pattern as they punch more than 15 miles (24 km) into the sky.
But the lasers are angled so that the light reflected back to the telescope aperture is just from the lowest layer of atmosphere, within one- third of a mile (500 meters) from the ground. Software can then pick out the common blurry signal from that part of the atmosphere and adjust for it.
"If you correct the deleterious effects of the first few hundred meters of atmosphere, you go an awfully long way to fixing everything," Hart told SPACE.com. "You also get a wide field of view, because the ground layer is close to the telescope."
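The "common blurry signal" idea can be illustrated with a toy calculation: model each beacon's measurement as a shared ground-layer term plus independent high-altitude noise, then average across beacons to keep mostly the shared part. All numbers here are invented:

```python
# Sketch of ground-layer extraction: five guide stars all see the same
# ground-layer phase (shared) plus high-altitude turbulence along their own
# separate directions (uncorrelated). Averaging the five measurements keeps
# the common ground component and suppresses the rest.
import random

random.seed(2)
ground = [random.gauss(0, 1) for _ in range(16)]   # shared low-altitude phase screen

def beacon_measurement(ground):
    # each beacon: shared ground layer + its own independent high-altitude term
    return [g + random.gauss(0, 1) for g in ground]

beacons = [beacon_measurement(ground) for _ in range(5)]
estimate = [sum(vals) / len(vals) for vals in zip(*beacons)]  # average over beacons

err = sum((e - g) ** 2 for e, g in zip(estimate, ground)) / len(ground)
single = sum((b - g) ** 2 for b, g in zip(beacons[0], ground)) / len(ground)
print(f"mean-square error: single beacon {single:.2f}, five-beacon average {err:.2f}")
```

Averaging N independent noise terms cuts their variance by a factor of N, which is roughly the benefit of using five beacons instead of one.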
The study is detailed in the August 4 issue of the journal Nature.
Wider view, lower resolution
That wide-view success came at a cost in resolution: the images, though much improved, don't appear quite as sharp as those seen with traditional adaptive optics.
But the tradeoff often becomes worthwhile. For instance, researchers want a wide enough field of view to see an entire star cluster, as well as enough resolution to pick out the motions of individual stars.
Next up, Hart's team wants to correct for a much greater layer of atmospheric interference by creating a 3-D model of the turbulence. Telescopes would also need a stack of several adjustable mirrors to fix the blurriness in 3-D and get back some of the higher resolution, an approach known as multi-conjugate adaptive optics.
"What we're doing is shining laser beams through the atmosphere in all directions, so we can build an instantaneous snapshot of the atmosphere millisecond by millisecond," Hart explained.
The first-ever use of multiple lasers to create many guide stars also becomes crucial for the new giant telescopes under development, including the Giant Magellan Telescope, the Thirty Meter Telescope and the European Extremely Large Telescope. Each instrument is expected to cost on the order of $1 billion.
"If you're going to spend that amount of money, you better be darn sure that the telescope is able to produce the best science it can, or it's not worth the money," Hart said.
Dunne and her colleagues analyzed tiny fragments of pottery taken from the Takarkori rock shelter, a prehistoric dwelling in the Libyan Sahara. They ground up small pieces of the pottery, conducting chemical analyses to investigate the proteins and fats embedded in the shards. By doing so, the researchers could see what the pots once held.
They found evidence of a varied diet, with signs found for plant oils and animal fat. The most common fats were of animal origin, Dunne said, with some deriving from flesh and others from milk. The most dairy-fat rich pottery shards came from the same time periods when more cattle bones are found in the cave layers, the researchers reported today (June 20) in the journal Nature.
By looking at variations in the carbon molecules in these preserved fats, the researchers were able to get an idea of what kind of plants the cattle were eating. They found their diets varied between so-called C3, or woody plants, and C4 plants, which include grasses, grains, and dry-weather plants. (C3 and C4 refer to the type of photosynthesis these plants use.)
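To illustrate how such carbon variations translate into diet estimates, a simple two-endmember mixing model can be used. The delta-13C endmember values below are typical literature figures for C3 and C4 plants, not numbers from this study, and the linear model ignores complications such as trophic offsets:

```python
# Two-endmember mixing: given a measured carbon isotope ratio (delta-13C, in
# per mil) in a preserved fat, estimate the fraction of C4 plants in the diet.
# Endmember values are typical literature figures, used here only as a sketch.

D13C_C3 = -27.0   # per mil, typical for woody / C3 plants
D13C_C4 = -12.0   # per mil, typical for grasses / C4 plants

def c4_fraction(measured):
    """Linear mixing between the two endmembers."""
    return (measured - D13C_C3) / (D13C_C4 - D13C_C3)

print(c4_fraction(-19.5))  # midway between the endmembers
```

A measurement sitting halfway between the endmembers implies roughly an even C3/C4 mix, which is the kind of inference behind the seasonal-grazing interpretation.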
That fits with the archaeological understanding of this early herding civilization as moving between seasonal camps, Dunne said. |
Chemosynthesis is the process by which some organisms, such as certain bacteria, use chemical energy instead of light energy to make their own food. In biochemistry, chemosynthesis is defined as the biological conversion of one or more carbon-containing molecules into organic matter, using the oxidation of inorganic compounds such as hydrogen sulfide as the energy source.
Photosynthesis, by contrast, is the process by which green plants and certain other organisms transform light energy into chemical energy. Plants absorb energy from sunlight, take in carbon dioxide from the air through their leaves, take up water through their roots, and produce sugar (glucose) and oxygen. In plant cells, photosynthesis occurs in an organelle called the chloroplast, and gases pass in and out of the leaf through pores called stomata.
Both processes allow producers (autotrophs) to make the food on which consumers (heterotrophs) ultimately depend, and in both cases the stored energy is later released by cellular respiration. The key difference is the energy source: photosynthesis is powered by sunlight, while chemosynthesis is powered by the oxidation of chemicals. Bacteria in the deep ocean, for example, use chemicals found around hydrothermal vents to produce food, and the animals that live around the vents depend on these chemosynthetic bacteria. |
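The contrast can be summarized with representative overall reactions. The photosynthesis equation is the standard one; the chemosynthesis reaction shown is the common sulfur-oxidizing form used by bacteria at hydrothermal vents:

```latex
% Photosynthesis: light energy drives carbon fixation
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}

% Chemosynthesis (sulfur-oxidizing form): energy from oxidizing hydrogen sulfide
\mathrm{CO_2} + 4\,\mathrm{H_2S} + \mathrm{O_2} \longrightarrow \mathrm{CH_2O} + 4\,\mathrm{S} + 3\,\mathrm{H_2O}
```

Both reactions fix carbon dioxide into carbohydrate; they differ only in where the energy comes from.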
Think Through Math is a Web-based solution that provides adaptive math instruction for students in grades 3 through Algebra 1. Developed by teachers and technologists to help students prepare for rigorous Common Core and TEKS standards and assessments, Think Through Math supports math achievement in unprecedented ways.
Holding the Rubik's Cube, twisting and turning the parts, can help children of all ages grasp important math concepts including area, perimeter, volume, angles, algorithms and enumeration, among many other geometry and algebraic topics. Some teachers are even using the Rubik's Cube to teach life lessons and 21st century skills such as focus, following directions, memorization, sequencing, problem solving, critical thinking, and perseverance.
90,000 of the best questions from NY Regents, State Assessments, Academic Competitions, and more. Search by topic or exam. Select, arrange, and format questions the way you like. Create beautiful classroom materials in just minutes!
We Seed Learn is where you can, well, learn everything you need to know about investing and the stock market. Okay, well maybe not everything you need to know, but definitely enough to get started.
When you're ready, start with Level 1 and work your way up to Level 3. See, how easy this is? At the end of each level, there's a quick quiz just to make sure you were paying attention. Get a perfect score on each quiz and it'll go straight into your Report Card so the whole world will know how brilliant you are.
Led by Jo Boaler, the NRICH project aims to enrich the mathematical experiences of students by providing professional development, activities, and lesson planning. Check out the “For Teachers” section on the right-hand side for tasks, games, interactive tools, activity sets, curriculum mapping, and great ideas for group work.
The Writing Workshop, similar to the Reading Workshop, is a method of teaching writing using a workshop model. Students are given opportunities to write in a variety of genres, which helps foster a love of writing. The Writing Workshop allows teachers to meet the needs of their students by differentiating their instruction and gearing instruction based on information gathered throughout the workshop.
The Mathematics area of study is designed to build a strong foundation in mathematical understanding and procedural skills, as well as to prepare students to meet the standards for 21st Century critical thinking and problem solving. The Mathematics curriculum includes the areas of ratios and proportional relationships, the number system, expressions and equations, geometry, and statistics and probability. The courses are aligned with the Common Core State Standards and designed to cover the equivalent of a year-long, traditional school curriculum. |
Scientists have detected the largest molecules ever seen in space, in a cloud of cosmic dust surrounding a distant star. The football-shaped carbon molecules are known as buckyballs, and were only discovered on Earth 25 years ago when they were made in a laboratory.
These molecules are the "third type of carbon" - with the first two types being graphite and diamond.
The researchers report their findings in the journal Science. Buckyballs consist of 60 carbon atoms arranged in a three-dimensional sphere. The atoms are linked together in alternating patterns of hexagons and pentagons that, on the molecular scale, look exactly like a football.
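The pentagon-and-hexagon count is not arbitrary: Euler's polyhedron formula forces any closed cage of three-bonded carbon atoms built from pentagons and hexagons to contain exactly 12 pentagons. A quick check for C60:

```python
# Counting C60's faces with Euler's polyhedron formula V - E + F = 2.
# Each of the 60 carbon atoms bonds to 3 neighbours; each face is a pentagon
# or a hexagon, and each edge borders exactly two faces.

V = 60                 # vertices: carbon atoms
E = 3 * V // 2         # edges: 3 bonds per atom, each bond shared by 2 atoms
F = 2 - V + E          # Euler's formula for a sphere-like polyhedron

# With p pentagons and h hexagons: p + h = F and 5p + 6h = 2E.
# Subtracting gives p = 6F - 2E.
p = 6 * F - 2 * E
h = F - p
print(f"vertices={V}, edges={E}, faces={F}: {p} pentagons, {h} hexagons")
```

The same algebra shows that any fullerene, whatever its size, has exactly 12 pentagonal faces; only the hexagon count grows.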
They belong to a class of molecules called buckminsterfullerenes - named after the architect Richard Buckminster Fuller, who developed the geodesic dome design that they so closely resemble.
The research group, led by Jan Cami from the University of Western Ontario in Canada, made its discovery using Nasa's Spitzer infrared telescope. |
flat, after all
South Pole balloon flight confirms a Euclidean law
Ainsworth, Public Affairs
The first detailed images of the universe in its infancy, obtained by an 800,000-cubic-meter balloon carrying microwave detectors, have settled a longstanding debate over the shape of the universe.
Data from 3 percent of the sky, taken during a 5,000-mile journey around the Antarctic, provided Andrew Jaffe, an astrophysicist at Berkeley, and an international team of scientists with tens of thousands of pixels and close to one billion measurements of cosmic microwave background radiation, which filled the universe shortly after the Big Bang. The data support the widely held view that the universe is, indeed, flat, and not curved.
"Wow, wow and wow," said Michael Turner, an astrophysicist at the University of Chicago. "Wow number one: Euclid was right, the universe is flat. Wow number two: inflation, our boldest and most promising theory of the earliest moments of creation, passed its first very important test. And wow number three: this is just the beginning. We are on our way to a better understanding of the universe back through time, when the largest structures in the universe were protons."
The project, called BOOMERANG (Balloon Observations of Millimetric Extragalactic Radiation and Geophysics), obtained images from an extremely sensitive microwave telescope that was suspended from a balloon circumnavigating the South Pole in late 1998. A map published in the April 27 issue of Nature, and results in the May 8 issue of Astrophysical Journal Letters show the most detailed glimpse yet of the primordial universe, revealing the shape of the cosmos and the distribution of matter shortly after its birth.
Today, the universe is filled with galaxies and clusters of galaxies. But 12 to 15 billion years ago, just after the Big Bang, it was very smooth, incredibly hot and dense. The intense heat that filled the embryonic universe is still detectable today as a faint glow of microwave radiation that is visible in all directions. That radiation, known as the cosmic microwave background, was the centerpiece of NASA's Cosmic Background Explorer, which discovered the first evidence of structures, or spatial variations, in this background radiation.
BOOMERANG, so named because it circled and returned to its original departure site, mapped tiny temperature differences in the cosmic background radiation that was present when the universe was smooth and filled only with protons, electrons and other charged particles. From a map of these temperature fluctuations, the researchers were able to derive a "power spectrum," a curve that registers strength of the fluctuations on different angular scales, and which contains information on such traits of the universe as its geometry and how much matter and energy it contains.
"The data are remarkably clear," said Jaffe, a member of the university's Center for Particle Astrophysics and Space Sciences Laboratory. "You can write down everything you know about the data and then calculate the most likely power spectrum, a task that is conceptually simple but computationally challenging."
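In spirit, the computation Jaffe describes reduces a map of temperatures to fluctuation power per scale. A heavily simplified one-dimensional analogue using a discrete Fourier transform (real CMB analysis works on the sphere, with full noise and beam modelling):

```python
# Simplified 1-D analogue of deriving a "power spectrum" from a temperature
# map: Fourier-transform the fluctuations and record the squared amplitude at
# each spatial frequency. This is only the core idea, not the real pipeline.
import cmath, math

def power_spectrum(temps):
    n = len(temps)
    mean = sum(temps) / n
    fluct = [t - mean for t in temps]          # temperature fluctuations
    spectrum = []
    for k in range(n // 2 + 1):                # discrete Fourier transform
        coeff = sum(f * cmath.exp(-2j * math.pi * k * i / n)
                    for i, f in enumerate(fluct))
        spectrum.append(abs(coeff) ** 2 / n)   # power at frequency k
    return spectrum

# A map dominated by one angular scale shows a single strong peak.
temps = [math.cos(2 * math.pi * 4 * i / 64) for i in range(64)]
spec = power_spectrum(temps)
print("dominant mode:", max(range(len(spec)), key=spec.__getitem__))
```

The location of peaks in the real angular power spectrum is what encodes the geometry of the universe; the position of the first peak is the flatness test the article describes.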
Background radiation in the early universe, about 300,000 years after a violent subatomic explosion burst from the vacuum of space and spewed a primordial soup of boiling particles outward, cooled to about minus 450 degrees Fahrenheit and began to form clusters of matter. BOOMERANG was able to detect minute temperature variations of less than 1/1000th of a degree in what was once a hot plasma sheet of almost perfectly uniform temperature.
"We were able to see temperature variations that differed by only hundreds of millionths of a degree," said Turner. "The pattern of hot and cold spots on the universe showed us that the shape of the universe is undeniably flat."
"This is good confirmation of that standard cosmology, and a large triumph for science," said Paul Richards, a Berkeley professor of physics on the MAXIMA experiment, "because we are talking about predictions made well before the experiment, about something as hard to know as the very early universe."
The findings bring astrophysicists closer to a theory of space and time in which inflation caused the universe to stretch rather than balloon into a sphere.
New snapshots of the cosmic microwave background will help solve the riddle of how much light matter and dark matter exists in today's universe. |
Product #: TCR2919
DIFFERENTIATED NONFICTION READING
Here's a way to teach the same grade-level content to students with varying reading skills! The same information is written at three different levels: below grade level, at grade level, and above grade level. All the students in your class can read the passage and have the information they need to respond to the same six questions that evaluate their comprehension of the subject matter. The curriculum topics for science, geography, history, and language arts are correlated to the McREL standards and benchmarks. The reading levels of the passages are calculated according to the Flesch-Kincaid Readability Formula.
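For reference, the Flesch-Kincaid grade-level formula mentioned above is standard: 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59. A minimal sketch follows; the syllable counter is a crude vowel-group heuristic, so its output is approximate:

```python
# The Flesch-Kincaid grade-level formula:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# The formula itself is standard; the syllable counter below is a rough
# vowel-group heuristic, so treat the result as an approximation.
import re

def count_syllables(word):
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(fk_grade("The cat sat on the mat. The dog ran to the cat."))
```

Short sentences of one-syllable words score below first grade, while long sentences of polysyllabic words push the grade level up, which is how one passage can be rewritten at three reading levels.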
Sore throat is a painful inflammation of the mucous membranes lining the pharynx.
Sore throat is also called pharyngitis. It is a symptom of many conditions, but is most often associated with colds or influenza. Sore throat may be caused by either viral or bacterial infections or environmental conditions. Most sore throats heal without complications, but they should not be ignored, as some develop into serious illnesses.
Sore throats can be either acute or chronic. Acute sore throats are more common than chronic sore throats. They appear suddenly and last from three to about seven days. A chronic sore throat lasts much longer and is a symptom of an unresolved underlying condition or disease, such as a sinus infection.
The way in which a sore throat is transmitted depends on the agent causing the sore throat. Viral and bacterial sore throats are usually passed in the same way as the common cold: sneezing, coughing, sharing drinking glasses or silverware, or in any other way germ particles can easily move from one person to another. Some sore throats are caused by environmental factors or allergies. These sore throats cannot be passed from one person to another.
Almost everyone gets a sore throat at one time or another, although children in child care or grade school have them more often than adolescents and adults. Sore throats are most common during the winter months when upper respiratory infections (colds) are more frequent.
About 10 percent of children who go to the doctor each year have pharyngitis. Forty percent of the time that children are taken to the doctor with a sore throat, the sore throat is diagnosed as viral. An antibiotic cannot help to cure a virus; a virus has to be left to run its course.
In about 30 percent of the cases for which children are taken to the doctor, bacteria are found to be responsible for the sore throat. Many of these bacterial sore throats are cases of strep throat. Sore throats caused by bacteria can be successfully treated with antibiotics. In the remaining cases of pharyngitis, it is never clear what caused the sore throat. In these cases it is possible that the virus or bacteria was not identified, or that other factors such as environment or post-nasal drip may have been responsible.
Causes and symptoms
Sore throats have many different causes, and may or may not be accompanied by cold symptoms, fever, or swollen lymph glands. Proper treatment depends on understanding the cause of the sore throat.
Viral sore throat
Viruses cause most sore throats. Cold and flu viruses are the main culprits. These viruses cause an inflammation in the throat and occasionally the tonsils (tonsillitis). Cold symptoms usually accompany a viral sore throat. These can include a runny nose, cough, congestion, hoarseness, conjunctivitis, and fever. The level of throat pain varies from uncomfortable to excruciating, when it is painful for the patient to eat, breathe, swallow, or speak.
Another group of viruses that causes sore throat are the adenoviruses. These may also cause infections of the lungs and ears. In addition to a sore throat, symptoms that accompany an adenovirus infection include cough, runny nose, white bumps on the tonsils and throat, mild diarrhea, vomiting, and a rash. The sore throat lasts about one week.
A third type of virus that can cause severe sore throat is the coxsackie virus. It can cause a disease called herpangina. Although anyone can get herpangina, it is most common in children up to age 10 and is more prevalent in the summer or early autumn. Herpangina is sometimes called summer sore throat.
Three to six days after being exposed to the coxsackie virus, an infected person develops a sudden sore throat that is accompanied by a substantial fever, usually between 102–104°F (38.9–40°C). Tiny grayish-white blisters form on the throat and in the mouth. These fester and become small ulcers. Throat pain is often severe, interfering with swallowing. Children may become dehydrated if they are reluctant to eat or drink because of the pain. In addition, children with herpangina may vomit, have abdominal pain, and generally feel very ill.
One other common cause of a viral sore throat is mononucleosis. Mononucleosis occurs when the Epstein-Barr virus infects one specific type of lymphocyte. The infection spreads to the lymphatic system, respiratory system, liver, spleen, and throat. Symptoms appear 30–50 days after exposure.
Mononucleosis, sometimes called the kissing disease, is extremely common. It is estimated that by the age of 35–40, 80–95 percent of Americans will have had mononucleosis. Often, symptoms are mild, especially in young children, and are diagnosed as a cold. Since symptoms are more severe in adolescents and adults, more cases are diagnosed as mononucleosis in this age group. One of the main symptoms of mononucleosis is a severe sore throat.
Although a runny nose and cough are much more likely to accompany a sore throat caused by a virus than one caused by bacteria, there is no absolute way to tell what is causing the sore throat without a laboratory test.
Bacterial sore throat
Fewer sore throats are caused by bacteria than are caused by viruses. The most common bacterial sore throat results from an infection by group A Streptococcus. This type of infection is commonly called strep throat. Anyone can get strep throat, but it is most common in school age children.
Noninfectious sore throat
Not all sore throats are caused by infection. Postnasal drip can irritate the throat and make it sore. It can be caused by hay fever and other allergies that irritate the sinuses. Environmental and other conditions, such as breathing secondhand smoke, breathing polluted air or chemical fumes, or swallowing substances that burn or scratch the throat can also cause pharyngitis. Dry air, like that in airplanes or from forced hot air furnaces, can make the throat sore. Children who breathe through their mouths at night because of nasal congestion often get sore throats that improve as the day progresses. Sore throat caused by environmental conditions is not contagious.
When to call the doctor
If the child has had a sore throat and fever for more than 24 hours, a doctor should be contacted so a strep test can be performed. Identifying and treating strep throat within about a week is vital to preventing rheumatic fever. If the child has had a sore throat, even without fever, for more than 48 hours, the doctor should be consulted. If the child has trouble swallowing or breathing, or is drooling excessively (in small children), emergency medical attention should be sought immediately.
It is easy for people to tell if they have a sore throat, but difficult to know what has caused it without laboratory tests. Most sore throats are minor and heal without any complications. A small number of bacterial sore throats do develop into serious diseases. Because of this, it is advisable to see a doctor if a sore throat lasts more than a few days or is accompanied by fever, nausea, or abdominal pain.
Diagnosis of a sore throat by a doctor begins with a physical examination of the throat and chest. The doctor will also look for signs of other illness, such as a sinus infection or bronchitis. Since both bacterial and viral sore throat are contagious and pass easily from person to person, the doctor will seek information about whether the patient has been around other people with flu, sore throat, colds, or strep throat. If it appears that the patient may have strep throat, the doctor will do laboratory tests.
If mononucleosis is suspected, the doctor may do a mono spot test to look for antibodies indicating the presence of the Epstein-Barr virus. The strep test is inexpensive, takes only a few minutes, and can be done in a physician's office. An inexpensive blood test can also determine the presence of antibodies to the mononucleosis virus.
Effective treatment varies depending on the cause of the sore throat. Viral sore throats are best left to run their course without drug treatment, because antibiotics have no effect on a viral sore throat. They do not shorten the length of the illness, nor do they lessen the symptoms.
Sore throat caused by streptococci or other bacteria must be treated with antibiotics. Penicillin is the preferred medication, although other antibiotics are also effective if the child is allergic to penicillin. Oral penicillin must be taken for 10 days. Patients need to take the entire amount of antibiotic prescribed, even after symptoms of the sore throat improve. If it is unlikely that the parent will be able to ensure that the child will take the full course of antibiotics, a one-time injection of antibiotics can be administered instead. Stopping the antibiotic early can lead to a return of the sore throat.
Because a virus causes mononucleosis, there is no specific drug treatment available. Rest, a healthy diet, plenty of fluids, limiting heavy exercise and competitive sports, and treatment of aches with acetaminophen (Datril, Tylenol, Panadol) or ibuprofen (Advil, Nuprin, Motrin, Medipren) will help the illness pass. Nearly 90 percent of mononucleosis infections are mild. The infected person does not normally get the disease again.
In the case of chronic sore throat, it is necessary to treat the underlying disease to heal the sore throat. If a sore throat is caused by environmental factors, the aggravating stimulus should be eliminated from the sufferer's environment.
Home care for sore throat
Regardless of the cause of a sore throat, there are some home care steps that people can take to ease their discomfort. These include:
- taking acetaminophen or ibuprofen for pain (aspirin should not be given to children because of its association with increased risk for Reye's syndrome, a serious disease)
- gargling with warm double strength tea or warm salt water made by adding 1 tsp of salt to 8 oz (237 ml) of water
- drinking plenty of fluids, but avoiding acid juices such as orange juice, which can irritate the throat (sucking on popsicles is a good way to get fluids into children)
- eating soft, nutritious foods like noodle soup and avoiding spicy foods
- resting until the fever is gone, then resuming strenuous activities gradually
- using a room humidifier to make sore throat sufferers more comfortable
- using antiseptic lozenges and sprays with caution, as they may aggravate the sore throat rather than improve it
Alternative treatment focuses on easing the symptoms of sore throat using herbs and botanical medicines.
- Aromatherapists recommend inhaling the fragrances of the essential oils of lavender (Lavandula officinalis), thyme (Thymus vulgaris), eucalyptus (Eucalyptus globulus), sage (Salvia officinalis), and sandalwood.
- Ayurvedic practitioners suggest gargling with a mixture of water, salt, and turmeric (Curcuma longa) powder or astringents such as alum, sumac, sage, and bayberry (Myrica spp.).
- Herbalists recommend taking osha root (Ligusticum porteri) internally for infection or drinking ginger (Zingiber officinale) or slippery elm (Ulmus fulva) tea for pain.
- Homeopaths may treat sore throats with superdilute solutions of Lachesis, Belladonna, Phytolacca, or yellow jasmine (Gelsemium).
Nutritional recommendations include zinc lozenges every two hours along with vitamin C with bioflavonoids, vitamin A, and beta-carotene supplements. Although it may hurt to swallow, it is very important that the patient keep taking fluids to prevent dehydration.
Sore throat caused by a viral infection generally clears up on its own within one week with no complications. The exception is mononucleosis. Ninety percent of cases of mononucleosis clear up without medical intervention or complications, so long as dehydration does not occur. In young children, the symptoms may last only a week, but in adolescents the symptoms usually last longer. In all age groups, fatigue and weakness may continue for up to six weeks after other symptoms disappear.
In rare cases of mononucleosis, breathing may be obstructed because of swollen tonsils, adenoids, and lymph glands. If this happens, the individual should seek emergency medical care immediately.
Patients with bacterial sore throat begin feeling better about 24 hours after starting antibiotics. Untreated strep throat has the potential to cause scarlet fever , kidney damage, or rheumatic fever. Scarlet fever causes a rash and can cause high fever and convulsions. Rheumatic fever causes inflammation of the heart and damage to the heart valves. Taking antibiotics within the first week of a strep infection will prevent these complications. People with strep throat remain contagious until they have taken antibiotics for 24 hours.
There is no way to prevent a sore throat; however, the risk of getting one or passing one on to another person can be minimized by:
- washing hands well and frequently
- avoiding close contact with someone who has a sore throat
- not sharing food and eating utensils with anyone
- staying out of polluted air
Viral sore throats usually resolve themselves fairly quickly although they may be very uncomfortable. If the child has a fever and sore throat for more than 24 hours it may be a sign of a bacterial infection and the child should be taken to the doctor. Prompt treatment with antibiotics for strep throat is important because it can prevent rheumatic fever, a serious disease that can cause damage to the heart.
Antigen —A substance (usually a protein) identified as foreign by the body's immune system, triggering the release of antibodies as part of the body's immune response.
Lymphocyte —A type of white blood cell that participates in the immune response. The two main groups are the B cells that have antibody molecules on their surface and T cells that destroy antigens.
Pharynx —The throat, a tubular structure that lies between the mouth and the esophagus.
See also Common cold.
Tish Davidson, A.M. |
What does the SLP do in my child's school? The SLP may:
- Screen students to find out if they need further speech and language testing.
- Evaluate speech and language skills.
- Decide, with the team, whether the child is eligible for services.
- Work with the team to develop an individualized education program, or IEP. IEPs are written for students who qualify for services under federal and state law, and list goals for the student.
- Work with children who are at risk for communication and learning problems.
- Determine if children need specialized instruction, called response to intervention or RTI.
- Make sure that communication goals support the student's learning and social skills.
- Keep track of progress on speech-language goals.
- Research ways to help children do their best in school.
- Give resources and information to students, staff, and parents to help them understand communication.
SLPs help with communication and swallowing problems that include:
- Articulation – how we say sounds and put them together in words. Children may say one sound for another, leave out a sound, or have problems saying certain sounds clearly.
- Children who are not able to speak at all and need help learning other ways to communicate.
- Language – vocabulary, concepts, and grammar, including how well words are used and understood. Language problems can lead to reading and writing problems too.
- Social communication – how to take turns, how close to stand to someone when talking, how to start and stop a conversation, and following the rules of conversation.
- Voice – how we sound when we speak. The voice may sound hoarse or nasal. A child may lose his or her voice easily, or may speak in a voice that's too loud, too soft, too high, or too low. This is often evaluated by an ENT as well.
- Stuttering – also called a fluency disorder, this is how well our speech flows. Children may have trouble starting to speak or may repeat sounds, syllables, words, or phrases.
- Cognitive communication (thinking and memory) – includes problems with long-term or short-term memory, attention, problem solving, or staying organized.
- Feeding and swallowing, also called dysphagia – how well we chew and swallow food and liquid. Swallowing problems can make it hard for your child to do well in school and may lead to other health problems.
Pteranodons were among the greatest flying reptiles that ruled the skies of the late Cretaceous and are thought to have gone extinct around 65 million years ago. Yet, in a sense, they are still alive in our world today!
Here are a few facts about these flying reptiles:
- These reptilian ancestors could fly, thanks to their huge leathery wings, which were extensions of their forelegs.
- Their wingspan (from the tip of the left wing to the tip of the right wing) reached about 8 meters in a few species.
- The hind legs of these graceful fliers were not very strong, making these animals appear quite awkward on land.
- They had bones filled with air to make sure that they got enough lift.
- Even with their strong and huge wings, they could not lift off into the sky from a standing position.
- They normally nested and roosted on high cliffs and launched into flight by jumping off, making them more like gliders than modern birds, which take off more like helicopters.
- Since the leathery wings were more than enough for taking turns in mid-air, most species got rid of their tails or retained only short, stumpy tails in order to reduce flying baggage.
In Picture: Artist’s rendering of a Pteranodon
- On cold mornings, these magnificent reptiles had to wait for warm air currents to rise toward the sky so that they could use the thermals to lift themselves aloft, even from the cliffs.
- They ate fish and tiny water creatures such as crustaceans.
- The pteranodons used a pair of long, beak-like structures to catch fish and anything else that swam near the surface of the water.
- The mouth had a long and narrow upper beak made of a single piece, supported by a lower beak of two bones fused together at the front end. This allowed the reptiles to grab large fish.
- Moreover, the bottom jaw had an extendible skin pouch that could hold as many fish as possible, ensuring that parenting pteranodons made the fewest possible trips back and forth to the nest!
If they were so huge, how could anyone miss them flying around in today's world? Even at smaller dimensions, these reptiles would be hard to overlook.
- These pteranodons did shrink in size and retained their bills while losing their crests.
- However, they did not lose their feeding behaviors or their dominating bullying attitudes.
- They still possess a single-piece upper bill and a two-piece lower bill, with the pouch hanging from below it.
In Picture: The skull of a living pteranodon and its egg.
- They still lay eggs and develop chicks by feeding them with the fish and creatures of water.
- They transformed their leathery wings to feathery wings, which means that their tail tufts also gained feathers.
- Even their name has changed, but it has retained the same starting and ending letters, “P” and “N”.
So, the living “Pteranodon” is currently known to us as a “Pelican”.
The Ankole cattle of Uganda boast long, curved horns. The breed has thrived in this eastern African country for millennia thanks to its ability to subsist on poor forage and limited water. But facing growing consumer demand for milk, these native cattle are increasingly being replaced by European Holstein-Friesian cows—known for their distinctive black patches and their ability to produce prodigious quantities of milk—and the U.N. Food and Agriculture Organization (FAO) warns that Ankole could become extinct within 20 years.
But during a recent drought, herds of such European imports were wiped out, while farmers who still relied upon Ankole found their cattle were sturdy enough to walk the extra mile to water. Such hardiness in the face of locally tough conditions is a hallmark of the regional versions of various livestock breeds, such as worm-resistant Red Maasai sheep and Sheko cattle with their immunity to sleeping sickness. Given the threats posed by climate change and evolving disease, researchers say that holding on to that diversity may prove key to ensuring steady supplies of fresh milk and other animal products.
Agricultural economist Carlos Sere, director general of the International Livestock Research Institute (ILRI), and his colleagues recently completed a global inventory of the 7,000 existing livestock breeds for the FAO. Of these, 40—such as the Holstein-Friesian cow, White Leghorn chicken, and Large White pig—form the basis of industrial agriculture in the developed world; in fact, 90 percent of cattle in the industrialized world come from just six breeds.
Thanks to the ability of those 40 breeds to produce massive amounts of milk, eggs and meat, there has been a worldwide explosion in their use to meet the needs of both farming communities and consumers in the developed and, increasingly, developing world. For example, Vietnam boasted local breeds for 72 percent of its pigs in 1994, but by 2002 saw them drop to 26 percent of pigs, replaced in large part by the faster-growing Large White pig.
"In pursuit of quick wins to increase productivity to meet demand in developing countries, a strategy adopted by these nations over the last half century has been importation of specialized, high-producing breeds," Sere says. This happened "in the absence of adequate information on the robustness, hardiness [and] appropriateness of native breeds versus imports."
As a result, 696 breeds have become extinct since 1900 and some 1,487 breeds are at risk, including 579 in imminent danger of disappearing. This will cause a loss of genetic diversity that might enable resistance to future disease outbreak (such as avian flu) or adverse environmental conditions. "The unpredictability of the nature and scope of these changes,'' Sere argues, "demand that these genetic options be safeguarded."
Lost traits would be difficult to re-create, he says, because modern livestock lack living wild relatives—a reserve of potentially useful genes that has saved agricultural crops in the past. "We do not even know the traits we will need in the future and which of the present breeds possess the requisite genes," he says.
One billion people, largely in the developing world, raise livestock and 70 percent of those living in extreme poverty rely on this practice to survive. And the wealthier people are, the greater their appetite for meat, eggs and milk grows. "We expect that the developing world will double their consumption of animal products in the next 20 years," Sere notes. "This has to happen from basically the same or shrinking resources," given the environmental problems associated with intensive animal husbandry.
As a result of these findings—and looming demand—the ILRI is urging action to preserve native breeds. Among its recommendations: encourage local farmers to maintain local types of livestock; share breeds across borders (particularly when breeds are likely to be suited to specific regions due to certain adaptive traits, such as a need for less water); and establish a gene bank—similar to the regional seed banks that preserve crop diversity—to maintain this genetic heritage.
The greatest part of genetic heritage—and the largest numbers of livestock in general—is concentrated in the developing world. While some gene banks exist in developed regions, Sere says that more—and broader—banks will also be needed in the rest of the world. "It is not good enough for southern countries to depend on the North to be custodians of their livestock genetic material," he says. "The fastest and most effective route through which the North can make a contribution [to diversity] is to assist developing nations to establish capacity to save endangered breeds in these countries."
Upon reaching physical maturity in adulthood, this tiny spider grows to between 0.59 and 0.86 inches (15 and 22 millimeters), and can jump over distances up to three times its own body length. This impressive jumping prowess is a hunting technique that these spiders have developed through the ages. In order to achieve such phenomenal leaps and bounds, a jumping spider undergoes a sudden change in blood pressure, which helps propel it in the direction it wishes to go. What’s especially notable about this spider is not necessarily its jumping technique, but rather the fact that it is a member of the largest known family of spiders, Salticidae, an arachnid family containing 13% of all extant spider species. Jumping spiders have a total of eight legs and eight eyes, along with a fuzzy outer coat, which may exhibit any of a wide range of colors.
Though primarily carnivores dining upon other small arthropods, jumping spiders may at times consume plant matter as well, especially from nectar-producing flora. As stated above, the jumping spider uses surprise to its advantage by pouncing on its prey from a distance. The spider also uses its eyes to locate its prey, even at night; these eyes are likewise responsible for the spider's peripheral vision and motion detection. Some jumping spiders also feed on blood-fed mosquitoes, which has earned them the nickname of “Vampire Spider”.
Habitat and Range
The jumping spider family (Salticidae) contains more than 500 known genera and almost ten times as many known species. Certain species of these spiders range across the United States, Canada, and Mexico, usually in wooded areas. In fact, their range extends over most of the terrestrial globe, excluding the Arctic and Antarctic regions, from hot, arid deserts to the highest Himalayan peaks. That being said, and for no apparent reason, the jumping spider will opt not to live in mature hardwood forests. Some researchers theorize that the trees therein could emit a smell the spider perceives as unpleasant.
Like all spiders, they creep slowly unless hunting or under threat. In such times of perceived danger, these spiders can jump over distances up to three times their own body length, one of their most distinctive abilities. Unlike most spiders, they don't weave webs, and some even produce milk-like secretions. They do still secrete silk, however, but not for webs. Instead, silk is trailed from the initial jumping spot as a sort of "safety harness" in case a jump does not go as planned.
Jumping spiders of both sexes utilize a variety of bodily vibrations and shake-like dancing, in addition to producing a wide-ranging array of auditory buzzes and high-pitched noises, in order to attract potential breeding partners. As males often fight one another for mating rights, larger male jumping spiders are more likely to achieve copulation than their smaller rivals. The silk of a jumping spider is not only for "safety harnesses" but is also used to protect their eggs. One female spider can lay up to 50 eggs per clutch, although it is undetermined how many of these will make it to adulthood.
XRF for Today’s Quality Assurance
X-ray Fluorescence (XRF) is an attractive analytical technique for quality assurance.
Modern XRF has become a common analytical instrument technique for the quality assurance of manufactured products, including out-going product verification and production quality control as well as incoming inspection. Functional characteristics that make XRF so attractive to these quality applications are: the broad application to many kinds of materials, the ease of operation of the equipment, the short analysis time (often less than 30 seconds), and the noncontact and nondestructive nature of the technique.
Overview of XRF
XRF is an atomic spectroscopy technique. The analysis is based on the interaction between incident X-rays, today mostly generated by an X-ray tube, and the material under inspection. British physicist Henry Moseley demonstrated that the square root of the frequency of the characteristic X-ray emissions from an ionized chemical element is proportional to the element's atomic number1. This is the basis of modern XRF spectrometry. The wavelength or energy of the fluoresced emissions acquired from a material is characteristic of the elemental composition of the sample material. The energy of a fluoresced peak identifies the element, while the number of photons emitted at those specific energies represents the number of atoms (mass) of the emitting element present in the material.
Applying this basic physics to an analytical instrument requires a source of X-rays (as noted above, typically an X-ray tube), a means of determining the energy of the emitted X-rays, and a means of counting the fluoresced photons. In earlier instruments, and still in use today, natural crystals (now often synthetic layered devices as well) were placed between the fluorescing sample and a counter, originally a Geiger counter and now a gas-filled or gas-flow proportional counter. The crystal or layered device disperses the emitted X-rays, which can then be collected (counted) at specific angles that identify the wavelength, or equivalently the energy (E ~ 1/λ, where E is energy and λ is wavelength), based on the Bragg equation nλ = 2d sin θ, where d is the crystal lattice spacing and θ is the angle of diffraction. XRF spectrometers using dispersing devices are known as Wavelength-Dispersive XRF Spectrometers (WDXRF).
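As a rough numerical illustration of the Bragg relation, the sketch below converts a fluoresced energy to a wavelength and then to the diffraction angle a WDXRF goniometer would be driven to. The function name is ours, and the Cu Kα / LiF(200) example values are standard reference figures rather than numbers from this article.

```python
import math

# hc ≈ 12.398 keV·Å, so λ (Å) = 12.398 / E (keV)
HC_KEV_ANGSTROM = 12.398

def bragg_angle_deg(energy_kev, d_spacing_angstrom, order=1):
    """Solve nλ = 2·d·sin θ for θ, returning degrees."""
    wavelength = HC_KEV_ANGSTROM / energy_kev
    sin_theta = order * wavelength / (2.0 * d_spacing_angstrom)
    if sin_theta > 1.0:
        raise ValueError("wavelength too long for this crystal spacing")
    return math.degrees(math.asin(sin_theta))

# Example: Cu Kα emission (≈ 8.05 keV) dispersed by a LiF(200) crystal (d ≈ 2.014 Å)
theta = bragg_angle_deg(8.048, 2.014)  # ≈ 22.5°
```

In a sequential WDXRF system, counting this line would mean positioning the detector at roughly twice this angle (2θ ≈ 45°) on the goniometer circle.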
Evolution of XRF spectrometers for QA/QC Applications
As described in the previous section, the first XRF spectrometers were what we now call WDXRF systems. WDXRF systems are now manufactured with a single detector (or tandem detectors, one to capture long wavelengths and the other shorter wavelengths) and switchable dispersing devices, where the detector is driven to various Bragg angles θ around a goniometer circle. These are known as Sequential WDXRF systems. Alternatively, they are configured with multiple fixed dispersion devices and detectors specific to the desired wavelengths (elements) to be determined. These are known as Simultaneous WDXRF systems because multiple elemental spectra are acquired simultaneously.
Simultaneous systems are commonly used in quality control applications where high sample throughput is required. Common markets and applications for these kinds of spectrometers are steel manufacturing and cement manufacturing, where a complete and specific suite of elements can be determined simultaneously, and with very good precision, in literally seconds per sample. Because of the optical benches of dispersion devices and detectors that make them up, these instruments are floor standing and can be quite large.
The remainder of this article focuses on Energy Dispersive XRF Spectrometers (EDXRF). EDXRF systems are typically smaller than WDXRF systems and can often be installed and used near production, shipping or receiving areas as opposed to in a laboratory. There are also portable handheld EDXRF systems available today.
The name is a bit of a misnomer in that there is no dispersion device: energies are determined electronically (by pulse-height discrimination). The level of discrimination, or resolution, is a function of the type of detector and pulse processor used in the spectrometer. Today, three types of detectors are commonly used in EDXRF systems: gas-filled proportional counters and two kinds of solid-state silicon detectors (Si-PIN and Silicon Drift Detectors – SDD). The choice of detector is driven by the application(s). From an X-ray spectrometry perspective, this means the material matrix being analyzed, i.e., the elemental composition of the sample material and the nominal concentrations, and, for layered systems (thin-film applications), the thickness of the layer(s) as well.
Resolution is expressed in terms of Full-Width-Half-Max (FWHM) at 5.95 keV, the energy of the Mn Kα emission. Typical resolutions for the detectors listed above are 900 eV for proportional counters, 200 eV for Si-PIN detectors and 140 eV for SDDs. If the sample matrix is complicated—many element emissions and/or heavily overlapping emissions (adjacent atomic numbers)—then Si detectors are preferred or even necessary. If the emissions from the sample are well separated, then the proportional counter will do a very good job, e.g., a Sn coating on a copper substrate. There is a significant cost benefit to the proportional counter if it can do the job. The concentration of an analyte element, or a very thin layer, will impact detector selection as well. The better-resolution Si detectors provide much better peak-to-background (signal-to-noise) response than proportional counters do, and therefore much better detection limits, whether in ppm concentration or nanometers of thickness.
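A crude rule of thumb follows from these FWHM figures: two emission lines can only be treated as separable if their energy gap exceeds the detector's FWHM. The sketch below applies that simplified criterion (the one-FWHM cutoff is our simplification; the line energies are standard reference values):

```python
def peaks_resolved(e1_kev, e2_kev, fwhm_ev):
    """Rule of thumb: peaks separated by more than one FWHM are resolvable."""
    return abs(e1_kev - e2_kev) * 1000.0 > fwhm_ev

# The article's Sn-on-Cu case: Sn Kα ≈ 25.27 keV vs Cu Kα ≈ 8.05 keV,
# separated far beyond a 900 eV proportional-counter FWHM.
prop_ok = peaks_resolved(25.27, 8.048, 900)   # True — a prop counter suffices

# An adjacent-Z overlap: Mn Kα ≈ 5.895 keV vs Cr Kβ ≈ 5.947 keV (52 eV apart)
sdd_ok = peaks_resolved(5.895, 5.947, 140)    # False — even an SDD cannot split them
```

In the second case an analyst would rely on spectral deconvolution software rather than raw detector resolution.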
The SDD is one of the most recent technology advances commercialized in EDXRF. As noted above, the SDD provides the best resolution in today's commercial spectrometers, and therefore the best detection limits. Another enabling capability of the SDD is its sensitivity to lower-energy emissions, which, as demonstrated by Moseley (where we began), are associated with lower-atomic-number elements. Two demanding quality assurance applications for functional electronics coatings that SDDs have improved on, and in one case enabled, are RoHS compliance and ENEPIG (Electroless Ni-Electroless Pd-Immersion Au) coatings on PCBs and chip connectors.
The RoHS (or anti-RoHS, i.e. exempted defense contractors) applications can range from a very thin Sn-Pb solder finish to Cd-based pigments in polymers. Plating companies doing quality control on outgoing parts and contract manufacturers or end-users doing quality assurance for incoming inspection of components want to assure that the parts are RoHS compliant. SDD-configured spectrometers manufactured for small areas (chip leads) can reliably and nondestructively measure Pb content down to 120 ppm levels2, even in thin finishes. Detection limits can also be very important to the defense contractor that needs to provide components with greater than 3% Pb content. This may seem like a relatively high concentration until one is measuring thin finishes, where the total areal mass of Pb (µg/cm2) is quite low. Figures 1 and 2 show spectra from a prop counter configured system and SDD configured system respectively. The spectra were acquired from the same sample, which is a 371 micro-inch thick Sn coating on Cu substrate having RoHS action Pb content of 1,000 ppm. The Pb is marginally detectable with the prop counter, but easily detected and measured with the SDD configured instrument.
RoHS compliant levels of Cd in a plastic or polymer is a challenge, even to Si-PIN configured spectrometers. Low average atomic number matrices have high scatter cross sections to the incident beam, the major contributor to XRF spectrum background. Often primary beam filters are used to remove incident X-rays in the energy region of the analyte peak, thereby, improving the analyte peak-to-background response. Beam filters capable of removing background in the energy region of Cd K emissions are not practical because they would have to be so thick that they would greatly reduce excitation efficiency and analysis times would become impractical. The better resolution of an SDD configured instrument is sufficient to enable Cd measurement at the 100 ppm RoHS compliance level in 300 seconds with 5% precision.
As with many functional film applications, ENi plating thickness control and incoming inspection are important. Linear thickness measurements in XRF are based on converting the measured mass thickness (mass per unit area) to linear units (microns, microinches). This is achieved by dividing the mass thickness by the material density, which requires knowledge of the material composition. The phosphorus content of the ENi (NiP) layer has historically been assumed, i.e., 6%, 8% or 12% phosphorus, because P (Z=15) emits at a relatively low energy (2.02 keV), where air absorbs much of the signal and the noise characteristics of Si-PIN detectors prohibit the direct measurement of P unless the sample is measured in a vacuum chamber, which takes time, limits the sample size, and adds cost. SDD detectors in a closely coupled X-ray tube-to-sample-to-detector design have sufficient sensitivity to measure P content even in air, so that P content, and therefore density, are no longer assumptions and thickness measurement accuracy is improved.
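The mass-to-linear conversion described here is simple arithmetic and can be sketched as follows. The linear NiP density interpolation and its endpoint values are illustrative assumptions of ours, not data from the article, but they show why an assumed P content shifts the reported linear thickness:

```python
def linear_thickness_um(mass_thickness_mg_cm2, density_g_cm3):
    """Convert areal mass (mg/cm²) to linear thickness (µm): t = m / ρ.
    1 mg/cm² divided by 1 g/cm³ equals 10 µm."""
    return 10.0 * mass_thickness_mg_cm2 / density_g_cm3

def nip_density(p_weight_fraction):
    """Illustrative linear interpolation between pure Ni (≈8.9 g/cm³)
    and a high-phosphorus alloy (assumed ≈7.75 g/cm³ at 13 wt% P)."""
    return 8.9 - (8.9 - 7.75) * (p_weight_fraction / 0.13)

# Same measured areal mass, two assumed P contents:
m = 4.0  # mg/cm²
t_low_p = linear_thickness_um(m, nip_density(0.06))   # ~6 wt% P
t_high_p = linear_thickness_um(m, nip_density(0.12))  # ~12 wt% P
```

With these assumed densities, the same measured areal mass reads several percent thicker under the high-phosphorus assumption, which is exactly the error a direct P measurement removes.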
ENi layers are often covered by other layers. Some are at thicknesses that will completely absorb the phosphorus signal, so if the P content is to be determined, it must be done prior to adding the final layer(s). ENEPIG coatings, which are becoming common in the PCB industry to improve board shelf life and solderability, have a thin Pd or PdP layer on the ENi and then a final, very thin immersion Au layer. These layers are often thin enough to pass the P signal to an SDD-configured instrument, so that all three layers can be measured simultaneously with the accuracy and precision required by the recently released IPC-4556 standard3 (Association Connecting Electronics Industries).
Computing, Software Algorithms & Summary
The technological basis for XRF has been around for a century, but it wasn’t until the introduction of compact and affordable computing (mini computers and PCs) in the 1970s and 80s that XRF spectrometry was able to become widely accepted and the preferred method for compositional analysis of many materials, including the characterization of functional and decorative thin-films—single layer and multi-layer. As a nondestructive technique, measurements are sample matrix dependent. There are physical X-ray matrix effects that can be complicated to deal with. Historically, correcting for these effects required many standards. Today, most spectrometers are supplied with what is termed fundamental parameters (FP) software. FP algorithms use well established X-ray physical constants to correct for matrix effects often without any calibration standards. With fast processing and FP software even complicated sample matrices like the ENEPIG functional coating system described above can be computed in seconds.
Today’s EDXRF spectrometers are compact and fast. FP software and current detector technology have enabled simple solutions to demanding applications, often a single button operation—ideal for many quality measurement requirements.
1 “The High-Frequency Spectra of the Elements,” Philosophical Magazine 26 (1913): 1024-34; 27 (1914): 703-13. 1 plate.
2 Fischer Technology Application Note (AN004en), “Determination of Harmful Substances in Very Small Concentrations – RoHS”
3 IPC Standard 4556, “Specification for Electroless Nickel / Electroless Palladium / Immersion Gold (ENEPIG) Plating for Printed Circuit Boards”
Orville Wright (1871–1948), American aeronautical engineer famous for his role in the first controlled, powered flight in a heavier-than-air machine and for his participation in the design of the aircraft's control system. Wright worked closely with his brother, Wilbur Wright, in designing and flying the Wright airplanes. See Airplane.
Orville Wright was born in Dayton, Ohio. He and Wilbur attended high school in Dayton, but neither boy formally graduated from high school. While in high school the brothers developed an interest in mechanical things, taught themselves mathematics, and read as much as they could about current developments in engineering. They also made some attempts at editing and printing small local newspapers. In 1892 the brothers formed the Wright Cycle Company; for the next ten years they designed, built, and sold bicycles.
The exploits of Otto Lilienthal, the German pioneer of gliders, inspired the Wrights to begin exploring the possibilities of powered flight in the 1890s. Lilienthal's death in an 1896 glider crash convinced the brothers that they not only must build successful airplanes, but must also learn to fly them correctly. During the next few years, they focused on controlling the direction and stability of an airborne object. In August 1899 they flew a kite with a wingspan of about 1.5 m (about 4.9 ft) and with controls for warping (twisting) the wings to control direction and stability. Their wing-warping method was the forerunner of the later idea of ailerons, flaps that can move independently of airplane wings to steer and stabilize the airplane (see Airplane: Control Components).
In 1900 the Wrights built a larger kite with a 5-m (17-ft) wingspan that could carry a pilot. They chose to test their craft near Kitty Hawk, North Carolina, because the site had suitable steady winds and sandy banks, which would minimize the impact of the craft and pilot upon landing. The kite flew well and Wilbur achieved a few seconds of piloted flight. The following July they returned to Kitty Hawk and built a wooden winged sled at Kill Devil Hills, where there were large sand dunes. Their new machine was longer and had a different wing shape than the previous model. It also had a hand-operated elevator attached to the horizontal tail stabilizer. Again they achieved encouraging results, particularly after further alterations to the wing arch, but there were still problems with stability and control.
During the following winter Orville Wright designed and built a small wind tunnel and tested various wing designs and arches. In the course of these tests the Wrights compiled the first accurate tables of lift and drag, the important parameters that govern flight and stability. By winter’s end the brothers had built a new glider that had a 10-m (32-ft) wingspan and had, at first, a double vertical fin mounted behind the wings. Turning was still difficult, however, and they converted the fin to a single movable rudder operated by the wing-warping controls. This configuration proved so successful that they decided to attempt powered flight the following summer. During the winter of 1902 they searched in vain for a suitable engine for their craft and for information about propeller design. They eventually constructed their own 8.9-kilowatt (12-horsepower) motor and made their own efficient propeller. After some initial trouble with the propeller shafts, the so-called Wright biplane took to the air and made a successful flight on December 17, 1903, at Kill Devil Hills near Kitty Hawk. The airplane had a wingspan of 12 m (39 ft) and weighed 340 kg (750 lb), including the pilot. The two brothers took turns flying the plane. Orville made the first successful flight, which lasted 12 seconds.
The following year the Wrights incorporated a 12-kilowatt (16-horsepower) engine and separated the wing-warping controls from the rudder controls. They flew this new aircraft at their home town of Dayton, learning to make longer flights and tighter turns.
In 1905 the Wrights had enough confidence in their design to offer it to the United States War Department. The following year they patented their control system of elevator, rudder, and wing-warping. Although they spent time patenting and finding markets for their machines during the next few years, they did not exhibit them publicly until 1908. That year Orville demonstrated the airplanes in the United States, setting several records when he kept the plane aloft for more than an hour on September 9. In 1909 the Wrights demonstrated their airplanes in Europe. The United States and European governments put in many orders for Wright airplanes, and the Wrights needed a manufacturing plant. In 1909 they formed the Wright Company to manufacture their airplanes.
Orville became president of the Wright Company after Wilbur’s death in 1912, but in 1915 he sold his interest in the company to pursue aviation research. He eventually became a member of the National Advisory Committee on Aeronautics. By the time of his death Wright had received many awards and honors for the momentous achievement of the Wright brothers.
In this article for teachers, Bernard gives an example of taking an initial activity and getting questions going that lead to other
Bernard Bagnall looks at what 'problem solving' might really mean in the context of primary classrooms.
Many natural systems appear to be in equilibrium until suddenly a critical point is reached, setting up a mudslide or an avalanche or an earthquake. In this project, students will use a simple. . . .
Explore the different tunes you can make with these five gourds. What are the similarities and differences between the two tunes you
Explore Alex's number plumber. What questions would you like to ask? Don't forget to keep visiting NRICH projects site for the latest developments and questions.
Use the interactivity to investigate what kinds of triangles can be drawn on peg boards with different numbers of pegs.
Sort the houses in my street into different groups. Can you do it in any other ways?
Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour.
Use your mouse to move the red and green parts of this disc. Can you make images which show the turnings described?
Bernard Bagnall describes how to get more out of some favourite
In this challenge, you will work in a group to investigate circular fences enclosing trees that are planted in square or triangular
Have a go at this 3D extension to the Pebbles problem.
Why does the tower look a different size in each of these pictures?
This problem is intended to get children to look really hard at something they will see many times in the next few months.
This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares.
This challenging activity involves finding different ways to distribute fifteen items among four sets, when the sets must include three, four, five and six items.
This challenge extends the Plants investigation so now four or more children are involved.
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
We think this 3x3 version of the game is often harder than the 5x5 version. Do you agree? If so, why do you think that might be?
A challenging activity focusing on finding all possible ways of stacking rods.
Use the interactivity to find all the different right-angled triangles you can make by just moving one corner of the starting triangle.
An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
Here is your chance to investigate the number 28 using shapes, cubes ... in fact anything at all.
Using different numbers of sticks, how many different triangles are you able to make? Can you make any rules about the numbers of sticks that make the most triangles?
Can you find ways of joining cubes together so that 28 faces are
How many different ways can you find of fitting five hexagons together? How will you know you have found all the ways?
48 is called an abundant number because it is less than the sum of its factors (without itself). Can you find some more abundant numbers?
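The abundant-number definition in this activity is easy to explore with a short script (the helper name is our own):

```python
def is_abundant(n):
    """True if n is less than the sum of its proper divisors (factors without itself)."""
    divisor_sum = sum(d for d in range(1, n // 2 + 1) if n % d == 0)
    return n < divisor_sum

# Every abundant number below 50:
abundant = [n for n in range(1, 50) if is_abundant(n)]
# → [12, 18, 20, 24, 30, 36, 40, 42, 48]
```

Note that 28 is not on the list: its proper divisors sum to exactly 28, making it perfect rather than abundant.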
Is there a best way to stack cans? What do different supermarkets do? How high can you safely stack the cans?
If the answer's 2010, what could the question be?
Follow the directions for circling numbers in the matrix. Add all the circled numbers together. Note your answer. Try again with a different starting number. What do you notice?
Make new patterns from simple turning instructions. You can have a go using pencil and paper or with a floor robot.
What is the largest number of circles we can fit into the frame without them overlapping? How do you know? What will happen if you try the other shapes?
Complete these two jigsaws then put one on top of the other. What happens when you add the 'touching' numbers? What happens when you change the position of the jigsaws?
Start with four numbers at the corners of a square and put the total of two corners in the middle of that side. Keep going... Can you estimate what the size of the last four numbers will be?
I cut this square into two different shapes. What can you say about
the relationship between them?
Compare the numbers of particular tiles in one or all of these
three designs, inspired by the floor tiles of a church.
This article for teachers suggests ideas for activities built around 10 and 2010.
An investigation that gives you the opportunity to make and justify predictions.
What is the largest cuboid you can wrap in an A3 sheet of paper?
The challenge here is to find as many routes as you can for a fence
to go so that this town is divided up into two halves, each with 8
In this investigation, you are challenged to make mobile phone
numbers which are easy to remember. What happens if you make a
sequence adding 2 each time?
What is the smallest cuboid that you can put in this box so that
you cannot fit another that's the same into it?
Can you find out how the 6-triangle shape is transformed in these
tessellations? Will the tessellations go on for ever? Why or why not?
The red ring is inside the blue ring in this picture. Can you rearrange the rings in different ways? Perhaps you can overlap them or put one outside another?
What do these two triangles have in common? How are they related?
All types of mathematical problems serve a useful purpose in
mathematics teaching, but different types of problem will achieve
different learning objectives. In general, more open-ended problems
have...
This activity asks you to collect information about the birds you
see in the garden. Are there patterns in the data or do the birds
seem to visit randomly?
Investigate the number of faces you can see when you arrange three cubes in different ways.
These caterpillars have 16 parts. What different shapes do they make if each part lies in the small squares of a 4 by 4 square?
Let's suppose that you are going to have a magazine which has 16
pages of A5 size. Can you find some different ways to make these
pages? Investigate the pattern for each if you number the pages.
Life has evolved for several billion years with a reliable cycle of bright light from the Sun during the day, and darkness at night. This has led to the development of an innate circadian rhythm in our physiology; that circadian rhythm depends on the solar cycle of night and day to maintain its precision. During the night, beginning at about sunset, body temperature drops, metabolism slows, hunger abates, sleepiness increases, and the hormone melatonin rises dramatically in the blood. This natural physiological transition to night is of ancient origin, and melatonin is crucial for the transition to proceed as it should.
Evidence suggests that circadian disruption from over-lighting the night could be related to risk of obesity and depression as well. In fact, it might be that virtually all aspects of health and wellbeing are dependent to one extent or another on a synchronised circadian rhythmicity, with a natural cycle of bright days and dark nights.
LED technology is not the problem, per se. In fact, LED will probably be a large part of the solution because of its versatility. The issue in street lighting is that the particular products being pushed by utility companies are very strong in the blue – and they don’t have to be. Different LED products can be marketed that are much more friendly to the environment and our circadian health. This is of paramount importance when lighting the inside of buildings where we live and work.
Main Difference – PCR vs QPCR
PCR (polymerase chain reaction) and qPCR (quantitative PCR) are two techniques used in biotechnology to amplify DNA for various purposes. PCR is a relatively simple technique. qPCR is also known as real-time PCR. The main difference between PCR and qPCR is that PCR is a qualitative technique whereas qPCR is a quantitative technique. PCR reads out as a "presence or absence" result, but in qPCR the amount of DNA amplified in each cycle is quantified. If RNA is used in PCR, the technique is known as RT-PCR (reverse transcription PCR), and if RNA is used in qPCR, the technique is known as RT-qPCR.
Key Areas Covered
1. What is PCR
– Definition, Processes, Uses
2. What is QPCR
– Definition, Processes, Uses
3. What are the Similarities Between PCR and QPCR
– Outline of Common Features
4. What is the Difference Between PCR and QPCR
– Comparison of Key Differences
Key Terms: Agarose Gel Electrophoresis, Amplicons, DNA polymerase, Fluorescent Dye, PCR, Probes, qPCR, RT-qPCR
What is PCR
PCR refers to a technique in biotechnology that allows the analysis of a short sequence of DNA by amplifying a selected segment of DNA. It is a comparatively sensitive method, as only very small sample volumes are required for a single reaction. The technique is based on the ability of DNA polymerase to synthesize new strands of DNA complementary to the offered template strand. The reaction mixture of PCR is composed of DNA polymerase, DNA nucleotides, primers, the DNA template to be amplified, and magnesium. The amplification is carried out inside a thermocycler. The DNA polymerase must be heat resistant, as high temperatures are used in the reaction. The two types of DNA polymerase used in PCR are Taq DNA polymerase and Pfu DNA polymerase; Taq DNA polymerase is the more widely used.
DNA polymerase requires a pre-existing strand with a free 3′ end to begin synthesizing a new strand. Hence, oligonucleotide primers are added to the reaction mixture to initiate DNA synthesis. The requirement for primers is what restricts amplification to a specific region of the template: the target sequence is flanked by forward and reverse primers. By the end of a PCR, billions of new copies of the specific DNA sequence, called amplicons, have accumulated. The components of the PCR should be optimized to improve performance while minimizing failure. The standard PCR reaction is shown in figure 1.
Steps of PCR
The three steps of a PCR are described below.
- Denaturation – The double-stranded DNA template is separated into two single strands by heating to 94-95 °C.
- Annealing – Forward and reverse primers bind to the complementary sequences in the template. The temperature depends on the melting temperature of the primer combination.
- Primer extension – DNA polymerase extends each primer at its 3′ end by adding complementary bases to the growing strand. The optimum temperature of Taq polymerase, 72 °C, is used for the extension step. The extension time depends on the number of base pairs in the target sequence.
The three steps are repeated 28–35 times. Agarose gel electrophoresis is used for size fractionation of the PCR products. The product is stained with ethidium bromide and observed under UV light. The PCR product, or amplified DNA, can then be used in cloning, sequencing, or genotyping.
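The cycling described above produces exponential growth: in the ideal case every cycle doubles each template molecule, so n cycles turn one molecule into 2^n amplicons. The sketch below is illustrative only; the `amplicons` helper and the 0.9 efficiency figure are assumptions, not measured values.

```python
# Sketch of ideal PCR amplification: each cycle doubles every template,
# so `cycles` rounds yield initial_copies * 2**cycles amplicons.
# Real reactions run below 100% efficiency, modelled here by a
# per-cycle efficiency factor (1.0 = perfect doubling).

def amplicons(initial_copies, cycles, efficiency=1.0):
    """Expected copy number after `cycles` rounds of PCR."""
    return initial_copies * (1 + efficiency) ** cycles

# one template molecule after 30 ideal cycles: 2**30, over a billion
print(amplicons(1, 30))
# the same reaction at 90% per-cycle efficiency grows far more slowly
print(amplicons(1, 30, efficiency=0.9))
```

This is why 28–35 cycles suffice to take a trace amount of DNA to a detectable quantity.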
What is QPCR
QPCR refers to a technique in biotechnology that allows the detection, characterization, and quantification of nucleic acids for various applications. Hence, it is a quantitative form of PCR. Both DNA and RNA can serve as the template in qPCR. If RNA is used as the template, it must first be reverse transcribed into cDNA; this variant of qPCR is known as RT-qPCR. The traditional PCR reaction is then carried out on the cDNA or a regular DNA sample. In qPCR, however, fluorescent dyes are used to label the PCR product in each cycle. This enables data collection as the PCR progresses, allowing quantification of the amplicons during the exponential phase of the reaction. The main dye used in qPCR is SYBR Green, which binds to double-stranded DNA. Because the fluorescence increases in proportion to the amount of amplified DNA, quantification can be done in "real time". The main disadvantage of the dye is that it binds to any double-stranded DNA, so it cannot distinguish the specific product from nonspecific products in the sample. In addition to dyes, probes can be used in the quantification process. TaqMan probes are one of the main types of oligonucleotide probes used in qPCR, and the process of fluorescence emission is shown in figure 2.
Probes can be designed to detect several PCR products within the same sample. The TaqMan probe is one of the main types of hydrolysis probes; degradation of the probe during primer extension releases the fluorophore, which then emits fluorescence. Probes are more specific to the PCR product than intercalating dyes; hence, they are used in most diagnostic assays to detect the PCR product.
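The real-time readout described above can be sketched with a toy model. Fluorescence is proportional to the amplified DNA, and the cycle at which it crosses a fixed threshold (the "Ct" value used in practice) comes earlier when more template was present at the start. Ideal doubling and the threshold value below are illustrative assumptions.

```python
# Toy model of qPCR quantification: count the cycle at which the copy
# number (a stand-in for fluorescence) crosses a fixed threshold.
# More starting template -> threshold crossed earlier -> lower Ct.

def ct_value(initial_copies, threshold=1e9, efficiency=1.0):
    """First cycle at which the copy number exceeds `threshold`."""
    copies, cycle = initial_copies, 0
    while copies < threshold:
        copies *= 1 + efficiency   # ideal doubling per cycle
        cycle += 1
    return cycle

# 1000x more starting template crosses the threshold about 10 cycles
# earlier, since 2**10 ~= 1000
print(ct_value(1_000_000))   # → 10
print(ct_value(1_000))       # → 20
```

This is the relationship that lets qPCR read back the starting amount of template from the amplification curve.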
Similarities Between PCR and QPCR
- PCR and qPCR are two types of techniques used in biotechnology to amplify DNA for various purposes.
- The traditional polymerase chain reaction is performed as the core technique in both PCR and qPCR.
- RNA can be used in both PCR and qPCR by using reverse transcription as the first reaction.
Difference Between PCR and QPCR
PCR: PCR is a technique in biotechnology that allows the analysis of a short sequence of DNA by amplifying a selected segment of DNA.
QPCR: QPCR is a technique in biotechnology that allows the detection, characterization, and quantification of nucleic acids for various applications.
PCR: PCR is a qualitative technique.
QPCR: QPCR is a quantitative technique.
Detection of the Product
PCR: The product is detected by agarose gel electrophoresis in PCR.
QPCR: The product can be detected in each amplification cycle in qPCR.
Collection of Data
PCR: The data is collected at the end of the reaction in PCR.
QPCR: The data is collected during the exponential phase of the reaction in qPCR.
PCR: PCR has a very poor resolution.
QPCR: QPCR has a very high resolution.
PCR: PCR uses ethidium bromide to stain the product for detection after amplification.
QPCR: QPCR uses fluorescent dyes to detect the product.
PCR: PCR is a more time-consuming method.
QPCR: QPCR is less time-consuming.
PCR: RT-PCR is the type of PCR that uses RNA as the template.
QPCR: RT-qPCR is the type of qPCR that uses RNA as the template.
PCR: PCR is used to detect the presence or absence of certain genomic fragments.
QPCR: QPCR is used to quantify a particular fragment in a sample.
PCR and qPCR are two techniques used in biotechnology to amplify DNA for various purposes. PCR is the traditional amplification method, used to identify the presence or absence of a DNA fragment, whereas qPCR is used to quantify a particular fragment in a sample. Thus, PCR is a qualitative technique whereas qPCR is a quantitative technique. This is the main difference between PCR and qPCR.
One day soon, researchers hope to be able to sprinkle hundreds or thousands of sensor nodes liberally across buildings or landscapes and have them organize themselves into powerful wireless networks. But before this vision can be realized, sensor nodes must be pared down to a smaller, simpler, and less costly form. "Cheap components are not as reliable, but if one dies you have others, and you get a better total picture," notes Jan Rabaey, a professor of electrical engineering whose specialty is building tiny, low-power radios. "Lots of cheap components can do a better job than a few, very expensive ones."
EECS Professor Jan Rabaey: A recent version of Rabaey's PicoRadio operates on a scant 60 microwatts of power, making it the lowest-power radio in existence.(Photo by Bart Nagel)
Having large numbers of cheap nodes also poses barriers to networking and wireless communication, however. Cheap nodes lack precise controls and, as the number of nodes increases, so does interference.
At Berkeley, several faculty members are devising smart wireless networking technology to overcome these hurdles and others. For example, Rabaey is devising algorithms to enable cheap radios to communicate, while David Tse, a professor of electrical engineering, has figured out a way that nodes can work cooperatively to increase the wireless communication capacity of large-scale, ad hoc networks.
The most expensive component of a wireless sensor node tends to be the radio. One way of lowering a radio's cost and power use, notes Rabaey, is to leave out the crystal—a very precise frequency element. Without a crystal, however, a radio cannot broadcast and receive at a predetermined broadcast frequency. In order to communicate, crystal-less nodes must somehow find a common channel.
Similar communication problems arise in the natural world. For example, for reasons that aren't completely understood, crickets have been observed to adjust their chirps to the surrounding ones until they sing in unison. Also, tropical fireflies synchronize the timing of their luminescent flashes by continually shifting to the average of their nearest neighbors.
Rabaey and colleagues are implementing a similar algorithm for clusters of wireless radios. Each radio pulses every second at some frequency. When a radio is not transmitting, it listens and centers its pulse relative to those of the other nodes. There is a tradeoff between how quickly the radios achieve synchrony and how accurately. To balance both, the researchers use a modal approach: first they achieve a rough synchrony quickly, and then they adjust to the desired level of accuracy.
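The coarse-then-fine scheme can be sketched in a few lines. This is a toy model, not Rabaey's actual algorithm: it assumes every node can hear every other node (real radios hear only neighbors) and uses a simple move-toward-the-mean update, with a large gain first for fast, rough synchrony and a small gain afterwards for accuracy.

```python
import statistics

# Toy model of firefly-style pulse synchronization: every node nudges
# its pulse phase toward the mean of the phases it observes. A large
# gain gives quick, rough synchrony; a small gain then refines it,
# mirroring the two-mode approach described above.

def synchronize(phases, rounds=50, coarse_gain=0.5, fine_gain=0.05):
    phases = list(phases)
    for r in range(rounds):
        gain = coarse_gain if r < rounds // 2 else fine_gain
        mean = statistics.mean(phases)
        # each node moves part-way toward the group mean
        phases = [p + gain * (mean - p) for p in phases]
    return phases

nodes = [0.0, 0.31, 0.58, 0.12, 0.93]   # initial pulse times (seconds)
synced = synchronize(nodes)
print(f"residual spread: {max(synced) - min(synced):.2e} s")  # near zero
```

Each round shrinks the spread of phases by a factor of (1 − gain), which is why a brief coarse phase followed by a fine phase converges quickly yet accurately.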
Meanwhile, Tse is working on the information-theoretic challenges of building large-scale wireless networks. A famous paper by Piyush Gupta and P. R. Kumar identified a fundamental limit to the communication capacity of such networks: As the number of nodes increases, so does interference, pushing the information throughput per node down to zero.
Tse's recent work provides a possible workaround. Together with Ayfer Özgür and Olivier Lévêque at the Ecole Polytechnique Fédérale de Lausanne in Switzerland, Tse showed that if groups of nodes work together, the throughput scales linearly, which means that having many point-to-point conversations is just as easy as having only a few.
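The contrast between the two scaling laws can be illustrated numerically. The functions below are illustrative stand-ins (constants arbitrary, logarithmic factors ignored): under the Gupta–Kumar multihop bound per-node throughput decays roughly like 1/√n, while under hierarchical cooperation it stays roughly constant.

```python
import math

# Illustrative per-node throughput trends as the network grows.
# Only the scaling behaviour matters, not the absolute numbers.

def per_node_multihop(n):
    # Gupta-Kumar: total capacity grows ~ sqrt(n),
    # so throughput per node shrinks ~ 1/sqrt(n) -> 0
    return 1.0 / math.sqrt(n)

def per_node_cooperative(n):
    # hierarchical cooperation: total capacity ~ n,
    # so throughput per node stays roughly constant
    return 1.0

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}: multihop {per_node_multihop(n):.4f}, "
          f"cooperative {per_node_cooperative(n):.4f}")
```

Growing the network by a factor of 10,000 cuts each multihop node's share a hundredfold, while the cooperative scheme leaves it unchanged.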
The Gupta and Kumar paper focused on the task of enabling multiple, simultaneous, point-to-point conversations in a bounded area, a problem that comes up in cell phone networks designed for emergencies and remote locales in which each cell phone relays calls to other phones. Tse's approach to the problem made use of MIMO, which stands for "Multiple Input, Multiple Output." MIMO provides a way to significantly increase data throughput, without additional bandwidth or transmit power, by using multiple antennas to send and receive the data.
Tse and his collaborators assumed that each node of the network has one antenna but can cooperate with nearby nodes. They subdivided the network into hierarchical clusters of nodes with each cluster at a given level sub-divided into smaller clusters at the next one. At the lowest level, nodes within each cluster take turns transmitting a message to all the others. The messages are then combined into a larger group message, which is exchanged among multiple clusters via MIMO. This enables cooperation within clusters at the next higher level of the hierarchy. Continuing this way, the researchers achieve cooperation across the entire network with minimal overhead. "Cooperation mitigates interference," Tse says.
Measles is a highly contagious disease caused by a paramyxovirus. It mostly affects children but occasionally strikes adults. Measles is mainly a respiratory infection.
Vaccination is the only prevention against measles. The child should be vaccinated at around one year of age. This vaccine is often combined with two other vaccines, against rubella and mumps.
It should be noted that once a person has had measles, he or she develops lifelong immunity to the disease.
Is Measles contagious?
Measles is highly contagious. The virus spreads through the air by coughing and sneezing; a non-immune person can be infected simply by breathing the contaminated air.
Measles is contagious from about four days before the rash appears until about five days after. Parents must therefore be vigilant, and a child with measles must be isolated from other family members immediately.
Measles incubation period:
People infected with the measles virus show no signs of illness for about the first ten days; this is the incubation period of the virus. After this stage, a high fever appears, often accompanied by a cough, runny nose, and red eyes. Tiny red spots appear on the face and gradually spread to other parts of the body. In a few cases the appearance of the spots is accompanied by itching and vomiting.
Measles is caused by a virus and is easily transmitted in the early stages through droplets of moisture discharged through coughs and sneezing by the patient. The virus attacks people with weak immunity which could result from malnutrition, vitamin A deficiency, unhygienic living conditions and bad food habits.
Symptoms of measles:
The first symptoms of measles may appear seven to fourteen days after infection. The rash consists of small round spots on the skin, which may be pink initially and grow dark red as time passes. The rash initially appears on the sides of the face and the neck, gradually spreading all over the body.
Other symptoms of measles include high fever during this period, which may rise to 104 degrees in acute cases; runny nose; sore throat; continuous dry cough; redness and watering of the eyes; loss of appetite; and, in severe cases, diarrhea, vomiting, and delirium.
Steps to follow for treatment and prevention of measles
If the characteristic signs of measles occur in your child, the first thing to do is see your doctor. Consulting a physician helps to alleviate symptoms of the disease and to avoid potential complications.
Measles is a disease that the body will fight on its own, and it will be cured in time. There are no drugs that can fight the virus, so the only useful treatments are those that relieve symptoms. Doctors generally prescribe medicines for sore throat or cough, antibiotics to curb secondary bacterial infections, analgesics to reduce fever, and vitamin A supplements to reduce the risk of complications.
Failure to take preventive measures can result in serious complications like dehydration, respiratory infections (like pneumonia), ear infection and even encephalitis in worst cases.
Bed rest: Rest is highly recommended for a patient with measles. The child must also drink plenty of water and eat properly.
Remedies for Measles
The following are effective natural cure for measles:
Liquorice: Liquorice is a very effective natural cure for measles. Powder liquorice root, mix it with honey, and take half a teaspoon daily for better results.
Tamarind seeds and turmeric: Tamarind seeds and turmeric are a very effective natural cure for measles. Powder tamarind seeds and mix equal portions of the powder with turmeric powder. This mix should be given in doses of 350 gm to 425 gm, three times daily, to the person suffering from measles.
Margosa leaves: Margosa (neem) leaves have antiviral and antiseptic properties that are effective in treating measles. Margosa leaves can be added to hot bathing water to relieve the itching that comes with the rash. The patient should soak in the water for at least 20 minutes for better results.
Garlic: Garlic can also be an effective natural cure for measles. Powder cloves of garlic, mix with honey, and take daily for better results.
Lemon juice: Lemon juice is also a very effective natural cure for measles. About 15–25 ml of lemon juice mixed with water can be an effective remedy.
Bitter gourd leaves and turmeric root: Powder turmeric root and mix with honey and add the juice of bitter gourd leaves. This is a very effective natural cure for measles.
Coconut water and flesh: Coconut water contains nutrients and natural sugar, which help cleanse the body of toxic elements, making it a very effective drink. Coconut flesh is rich in antioxidants, and a liberal intake of coconut water helps in a speedy recovery from measles.
Indian gooseberry (Amla): Indian gooseberry powder mixed with water is very effective in getting rid of itching and burning sensation during measles. Amla powder mixed with water can also be used to wash the body.
Barley: Barley water is a very effective natural cure to treat the coughs, in measles. The patient should drink barley water flavored with a few drops of sweetened almond oil as frequently as possible.
Calendula flowers: Calendula flowers contain essential minerals that help speed the healing process of measles. Boil three cups of water and add one tablespoon of the powdered flowers. Drain and drink twice a day as long as the symptoms of measles persist. Peppermint oil and/or sugar can be added to make it more effective and palatable.
Eggplant seeds: Eggplant seeds help in developing immunity against measles. The patient can be given about half to one gram of the seeds daily for three days, for better results.
Orange juice: Orange is an excellent natural cure for measles. As the patient experiences a loss of appetite and a lack of saliva on the tongue, he or she often does not feel thirsty or hungry. Orange juice makes up for this loss of appetite.
Butter: If measles is accompanied by fever, mix butter and sugar candy in equal quantity and lick 2 tsp. in the morning.
Precautions in handling a measles patient
- A measles patient must be kept in isolation, as the disease is infectious, and should be given complete rest, which will facilitate a speedy recovery.
- The patient can be given lots of warm water to drink, as this flushes toxins out of the system. For better results, drink it in the morning on an empty stomach and in the evening.
- Initially, as the patient experiences a loss of appetite, juices of fruits such as orange and lemon will be sufficient.
- Gradually, the patient can start eating fruits and well balanced diet, which includes lots of green vegetables and fruits, to boost the immune system.
- Measles patient should be kept in well ventilated room.
- Avoid sunlight, as measles can damage eye tissue, resulting in watery eyes. The room should have subdued light so as not to cause further damage to the eyes.
- Avoid milk, milk products, as they may worsen the condition. If the child is heavily dependent on milk, limit the intake.
- Ensure that the room and the surroundings are clean and tidy.
Prof. Blaskó Lajos (2008)
University of Debrecen, within the framework of the TÁMOP 4.1.2 project
The main source of charge on clay minerals is isomorphous substitution which confers permanent charge on the surface of most layer silicates.
Ionization of hydroxyl groups on the surfaces of other soil colloids and organic matter can result in what is described as pH-dependent charge, so called because it depends on the pH of the soil environment. Unlike the permanent charge developed by isomorphous substitution, pH-dependent charges are variable and increase with increasing pH.
The presence of surface and broken-edge -OH groups gives kaolinite clay particles their negative charge and their capacity to adsorb cations. In most soils there is a combination of constant and variable charge.

Cation: a positively charged ion. There are two types of cations: acidic or acid-forming cations, and basic or alkaline-forming cations. The hydrogen cation (H+) and the aluminum cation (Al3+) are acid-forming.
The positively charged nutrients that we are mainly concerned with here are calcium, magnesium, potassium and sodium. These are all alkaline cations, also called basic cations or bases. Both types of cations may be adsorbed onto either a clay particle or soil organic matter (SOM). All of the nutrients in the soil need to be held there somehow, or they will just wash away when you water the garden or get a good rainstorm. Clay particles almost always have a negative (-) charge, so they attract and hold positively (+) charged nutrients and non-nutrients. Soil organic matter (SOM) has both positive and negative charges, so it can hold on to both cations and anions. (http://www.soilminerals.com/Cation_Exchange_Simplified.htm)
Anion: a negatively charged ion (NO3-, PO43-, SO42-, etc.)
Soil particles and organic matter have negative charges on their surfaces. Mineral cations can adsorb to the negative surface charges of the inorganic and organic soil particles. Once adsorbed, these minerals are not easily lost when the soil is leached by water, and they also provide a nutrient reserve available to plant roots.
These minerals can then be replaced or exchanged by other cations (i.e., cation exchange).
The exchange processes (Figure 23) are reversible (unless something precipitates, volatilizes, or is strongly adsorbed).
CEC is highly dependent upon soil texture and organic matter content (Tables 3 and 4). In general, the more clay and organic matter in the soil, the higher the CEC. Clay content is important because these small particles have a high ratio of surface area to volume. Different types of clay also vary in CEC: smectites have the highest CEC (80–100 milliequivalents per 100 g), followed by illites (15–40 meq/100 g) and kaolinites (3–15 meq/100 g).
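The per-mineral figures quoted above can be combined into a rough back-of-envelope estimate of a soil's clay-derived CEC. The midpoint values, the sample composition, and the simple linear mixing rule below are assumptions for illustration, not a laboratory method.

```python
# Back-of-envelope estimate: clay-derived CEC as a weighted average of
# the per-mineral values quoted above (midpoints of the ranges).

CEC_MEQ_PER_100G = {
    "smectite": 90,      # midpoint of 80-100 meq/100 g
    "illite": 27.5,      # midpoint of 15-40 meq/100 g
    "kaolinite": 9,      # midpoint of 3-15 meq/100 g
}

def clay_cec(fractions):
    """Weighted CEC (meq/100 g) for a clay mix given mass fractions."""
    return sum(CEC_MEQ_PER_100G[m] * f for m, f in fractions.items())

sample = {"smectite": 0.2, "illite": 0.5, "kaolinite": 0.3}
print(clay_cec(sample))   # 0.2*90 + 0.5*27.5 + 0.3*9 ≈ 34.45
```

A real CEC measurement would also include the contribution of organic matter, which the sketch ignores.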
A green sea slug appears to be part animal, part plant. It's the first critter discovered to produce the plant pigment chlorophyll.
The sneaky slugs seem to have stolen the genes that enable this skill from algae that they've eaten. With their contraband genes, the slugs can carry out photosynthesis — the process plants use to convert sunlight into energy.
"They can make their energy-containing molecules without having to eat anything," said Sidney Pierce, a biologist at the University of South Florida in Tampa.
Pierce has been studying the unique creatures, officially called Elysia chlorotica, for about 20 years. He presented his most recent findings Jan. 7 at the annual meeting of the Society for Integrative and Comparative Biology in Seattle. The finding was first reported by Science News.
"This is the first time that multicellular animals have been able to produce chlorophyll," Pierce told LiveScience.
The sea slugs live in salt marshes in New England and Canada. In addition to burglarizing the genes needed to make the green pigment chlorophyll, the slugs also steal tiny cell parts called chloroplasts, which they use to conduct photosynthesis. The chloroplasts use the chlorophyll to convert sunlight into energy, just as plants do, eliminating the need to eat food to gain energy.
"We collect them and we keep them in aquaria for months," Pierce said. "As long as we shine a light on them for 12 hours a day, they can survive [without food]."
The researchers used a radioactive tracer to be sure that the slugs are actually producing the chlorophyll themselves, as opposed to just stealing the ready-made pigment from algae. In fact, the slugs incorporate the genetic material so well, they pass it on to further generations of slugs.
The babies of thieving slugs retain the ability to produce their own chlorophyll, though they can't carry out photosynthesis until they've eaten enough algae to steal the necessary chloroplasts, which they can't yet produce on their own.
The slugs' accomplishment is quite a feat, and scientists aren't yet sure how the animals actually appropriate the genes they need.
"It certainly is possible that DNA from one species can get into another species, as these slugs have clearly shown," Pierce said. "But the mechanisms are still unknown."
- Introduction: What are stem cells, and why are they important?
- What are the unique properties of all stem cells?
- What are embryonic stem cells?
- What are adult stem cells?
- What are the similarities and differences between embryonic and adult stem cells?
- What are induced pluripotent stem cells?
- What are the potential uses of human stem cells and the obstacles that must be overcome before these potential uses will be realized?
- Where can I get more information?
V. What are the similarities and differences between embryonic and adult stem cells?
Human embryonic and adult stem cells each have advantages and disadvantages regarding potential use for cell-based regenerative therapies. One major difference between adult and embryonic stem cells is the number and type of differentiated cell types they can become. Embryonic stem cells can become all cell types of the body because they are pluripotent. Adult stem cells are thought to be limited to differentiating into the cell types of their tissue of origin.
Embryonic stem cells can be grown relatively easily in culture. Adult stem cells are rare in mature tissues, so isolating these cells from an adult tissue is challenging, and methods to expand their numbers in cell culture have not yet been worked out. This is an important distinction, as large numbers of cells are needed for stem cell replacement therapies.
Scientists believe that tissues derived from embryonic and adult stem cells may differ in the likelihood of being rejected after transplantation. We don't yet know for certain whether tissues derived from embryonic stem cells would cause transplant rejection, since relatively few clinical trials have tested the safety of transplanted cells derived from hESCs.
Adult stem cells, and tissues derived from them, are currently believed less likely to initiate rejection after transplantation. This is because a patient's own cells could be expanded in culture, coaxed into assuming a specific cell type (differentiation), and then reintroduced into the patient. The use of adult stem cells and tissues derived from the patient's own adult stem cells would mean that the cells are less likely to be rejected by the immune system. This represents a significant advantage, as immune rejection can be circumvented only by continuous administration of immunosuppressive drugs, and the drugs themselves may cause deleterious side effects.
Most people probably do not recognize a distinct difference between the terms “conflict” and “dispute.” However, many conflict scholars do draw a distinction between the two terms. As is unfortunately common in this field, different scholars define the terms in different ways, leading to confusion.
One way that is particularly useful, however, is the distinction made by John Burton, which distinguishes the two based on time and issues in contention. Disputes, Burton suggests, are short-term disagreements that are relatively easy to resolve. Long-term, deep-rooted problems that involve seemingly non-negotiable issues and are resistant to resolution are what Burton refers to as conflicts. Though both types of disagreement can occur independently of one another, they may also be connected. In fact, one way to think about the difference between them is that short-term disputes may exist within a larger, longer conflict. A similar concept would be the notion of battles, which occur within the broader context of a war.
Following Burton’s distinction, disputes involve interests that are negotiable. That means it is possible to find a solution that at least partially meets the interests and needs of both sides. For example, it generally is possible to find an agreeable price for a piece of merchandise. The seller may want more, the buyer may want to pay less, but eventually they can agree on a price that is acceptable to both. Likewise, co-workers may disagree about who is to do what task in an office. After negotiating, each may have to do something they did not want to do, but in exchange they will get enough of what they did want to settle the dispute (see compromise).
Long-term conflicts, on the other hand, usually involve non-negotiable issues. They may involve deep-rooted moral or value differences, high-stakes distributional questions, or conflicts about who dominates whom. Fundamental human psychological needs for identity, security, and recognition are often at issue as well. None of these issues are negotiable. People will not compromise fundamental values. They will not give up their chance for a better life by submitting to continued injustice or domination, nor will they change or give up their self-identity. Deep-rooted conflicts over these types of issues tend to be drawn out and highly resistant to resolution, often escalating or evolving into intractable conflicts.
A Clarifying Example — The Cold War
While many disputes stand alone and are settled permanently, others are part of a continuing long-term conflict. Looking back at events that represent concrete manifestations of the Cold War between the United States and U.S.S.R. provides a good example of this idea. For example, each round of Strategic Arms Limitation Talks, the Cuban Missile Crisis, the U.S.-Vietnam War, and the Soviet invasion of Afghanistan all constitute disputes within the broader conflict of the Cold War. The Vietnam War was extremely serious and relatively long, but nonetheless was a short-term conflict or “dispute” in the context of the Cold War, which played out over more than 40 years. However, as this example illustrates, even the most resolution-resistant conflicts can be transformed and resolved. While the U.S. and Russia are not “best friends” today, their relationship is certainly much more positive now than it was during the Cold War. Moreover, expectations for a U.S.-Russian war are now far more remote.
Other Distinctions between Conflict and Dispute
Costantino and Merchant1 define conflict as the fundamental disagreement between two parties, of which a dispute is one possible outcome. (Conciliation, conflict avoidance, or capitulation are other outcomes.) This is similar to Douglas Yarn’s observation that conflict is a state, rather than a process. People who have opposing interests, values, or needs are in a state of conflict, which may be latent (meaning not acted upon) or manifest, in which case it is brought forward in the form of a dispute or disputing process. In this sense, “a conflict can exist without a dispute, but a dispute cannot exist without a conflict.”2
Implications for Intractable Conflicts
Although all of these definitions have merit, most scholars agree that intractable conflicts are deep-rooted, protracted, and resistant to resolution. However, there are ups and downs in the life of such conflicts. Episodes occur in which the fighting (physical or psychological) is intense; at other times it subsides. The view that each intense period is a dispute which ends when the dispute (though not the conflict) is settled or resolved is a useful way to distinguish the normal ebb and flow of intractable conflicts.
See, for example, the figure below. This figure illustrates the relationship in an imaginary dispute between two ethnic groups in a post-colonial society named Dufountain.3 The two groups in this hypothetical country are the “Duists” and the “Fountists.” Time runs from left to right. Each of the sets of fat arrows represents one “dispute.” In this illustration, five disputes occur. The first one results in improved policies for the Duists (shown by the solid black arrows going up toward the top of the page). The next two benefit the Fountists. The fourth one benefits the Duists, while the final dispute on this diagram favors the Fountists again. None of the disputes resolves the long-term, underlying conflict (represented by the thick horizontal arrow at the bottom of the diagram); the dispute settlements only alter social policies for a time in a way that favors one group more than another. Whenever the losing group believes that it has gained enough power to prevail in a later dispute, it will most likely try again to engage the opponent and force an outcome that is more favorable to it than the earlier dispute outcome was. For this reason, dispute settlement is not the same thing as conflict resolution. One is a temporary settlement of an immediate problem; the other is a long-term settlement of an underlying long-running conflict.
The Canadian Museum for Human Rights is known for its excellence in working with children and young people.
The Be an Upstander school program is adapted to different age groups and school levels.
Program length: 90 minutes
Students explore the contributions of outstanding Canadian human rights defenders and participate in an interactive game to inspire them to stand up for their rights and the rights of others.
Students learn about the Universal Declaration of Human Rights with particular attention to the role of Canadian John Humphrey in Turning Points for Humanity on Level 4.
They experience exhibits that look at the personal life journeys of Canadian and international human rights defenders, and how they became upstanders for human rights in Rights Today on Level 5.
Students participate in a digital interactive game on how every action counts. Students are presented with scenarios and make choices on issues. In doing this, they explore how they can have a direct role in creating a society where everyone is treated with dignity and respect. This activity takes place in Actions Count on Level 4.
Students discuss how human rights affect us all and how each one of us can make a difference in Inspiring Change on Level 7.
For more information please visit: https://humanrights.ca/school-program/be-an-upstander
The museum developed the Be an Upstander resource which is a project-based learning unit designed to complement the Be an Upstander school program. This interactive resource is designed for students in middle school and encourages inquiry and action on human rights issues.
Language: English, French
Students write longer compositions while studying the different kinds of composition: descriptive, narrative, expository, compare and contrast, and persuasive.
Mechanics of Composition
Catholic Homeschool High School Course
This semester course of composition begins with a review of the basic characteristics of a good paragraph, including topic sentences, unity and coherence in a paragraph, and the use of relevant supporting details.
ENG122 | Credit: 1/2 | Prerequisites: None
The following map shows the current position of the Sun and the Moon. It shows which areas of the Earth are in daylight and which are in darkness.
The map shows the position of the moon on the selected date and time, but the moon phase corresponding to that date is not shown. If you want to know the moon phase you can use our lunar phase calendar.
The map shows the position of the sun and which parts of the Earth are in daylight and which are in darkness. If you want to know the exact time of dawn or dusk in a specific place, you can use our solar calendar.
Coordinated Universal Time (UTC) is the main time standard by which the world regulates clocks and time. It is one of several closely related successors to Greenwich Mean Time (GMT). For most common purposes, UTC is synonymous with GMT, but GMT is no longer the most precisely defined standard for the scientific community.
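To illustrate the relationship between UTC and a local clock that the page relies on, here is a minimal sketch (not taken from the site's own code) using Python's standard library; the UTC−05:00 offset is an arbitrary example, not an assumption about any particular place:

```python
from datetime import datetime, timedelta, timezone

# Current moment expressed in Coordinated Universal Time (UTC).
now_utc = datetime.now(timezone.utc)
print(now_utc.strftime("%Y-%m-%d %H:%M:%S UTC"))

# The same moment shown for a fixed local offset
# (hypothetical example: UTC-05:00).
local = now_utc.astimezone(timezone(timedelta(hours=-5)))
print(local.strftime("%Y-%m-%d %H:%M:%S UTC-05:00"))
```

Both lines describe the same instant; only the displayed offset differs, which is why a single UTC clock can drive day/night maps for every location at once.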
Twilight is the interval before sunrise or after sunset during which the sky is still somewhat illuminated.
Twilight occurs because sunlight illuminates the upper layers of the atmosphere. The light is scattered in all directions by the molecules of the air, reaches the observer, and still illuminates the surroundings.
Blood Test: Thyroglobulin Antibodies (TgAb)
What It Is
A thyroglobulin antibodies (TgAb) test is used to check blood levels of antibodies the body has made against the compound thyroglobulin. Thyroglobulin is a protein produced and used by the thyroid gland (the small, butterfly-shaped gland in the neck) to make the hormones triiodothyronine (T3) and thyroxine (T4), both of which help control metabolism and growth.
Ordinarily, a healthy immune system wouldn't make significant levels of antibodies against thyroglobulin, because it's not "foreign," but rather a necessary component of thyroid functioning.
Antibodies are proteins made by the immune system to fight bacteria, viruses, and toxins.
In autoimmune diseases, however, the immune system malfunctions, mistakenly attacking healthy organs and tissues as though they were foreign invaders. In people with certain thyroid-related autoimmune conditions, the blood level of thyroglobulin antibodies may rise.
Why It's Done
The thyroglobulin antibodies test is used primarily to help diagnose autoimmune conditions involving the thyroid gland. The test may be ordered when a child has symptoms of a thyroid disorder, including thyroiditis (inflammation of the thyroid) or goiter (an enlarged thyroid), or if tests to check blood levels of T3, T4, or thyroid stimulating hormone (TSH) showed abnormalities.
No special preparations are needed for this test. On the day of the test, having your child wear a T-shirt or short-sleeved shirt can make things easier for your child and the technician who will be drawing the blood.
A health professional will draw the blood from a vein, after cleaning the skin surface with antiseptic, and placing an elastic band (tourniquet) around the upper arm to apply pressure and cause the veins to swell with blood. A needle is inserted into a vein (usually in the arm inside of the elbow or on the back of the hand) and blood is withdrawn and collected in a vial or syringe.
After the procedure, the elastic band is removed. Once the blood has been collected, the needle is removed and the area is covered with cotton or a bandage to stop the bleeding. Collecting the blood for the test will only take a few minutes.
What to Expect
Collecting a sample of blood for this test is only temporarily uncomfortable and can feel like a quick pinprick. Afterward, there may be some mild bruising, which should go away in a few days.
Getting the Results
The blood sample will be processed by a machine, and results are commonly available after a few days.
The thyroglobulin antibodies test is considered a safe procedure. However, as with many medical tests, some problems can occur with having blood drawn, such as:
- fainting or feeling lightheaded
- hematoma (blood accumulating under the skin causing a lump or bruise)
- pain associated with multiple punctures to locate a vein
Helping Your Child
Having a blood test is relatively painless. Still, many children are afraid of needles. Explaining the test in terms your child can understand might help ease some of the fear.
Allow your child to ask the technician any questions he or she might have. Tell your child to try to relax and stay still during the procedure, as tensing muscles and moving can make it harder and more painful to draw blood. It also may help if your child looks away when the needle is being inserted into the skin.
If You Have Questions
If you have questions about the thyroglobulin antibodies test, speak with your doctor.
- Endocrine System
- Blood Test: T3 Total (Triiodothyronine)
- Blood Test: T4 (Thyroxine)
- Blood Test: Thyroid Peroxidase Antibodies
- Blood Test: Thyroid Stimulating Hormone (TSH)
- A to Z: Hashimoto's Thyroiditis
ListenAndReadAlong: US Citizenship Study Booklet - Q86 to Q100 - EZ Civics Lessons http://youtu.be/jzvMccEmeWk
This is a continuation of the US Citizenship Booklet 2014: Learn about the United States.
The Civics Test.
This movie concludes American History: Part B (1800s) and begins Part C (1900s), and includes:
An understanding of America’s geography, symbols, and holidays is important. They provide background and more meaning to historical events and other landmark moments in U.S. history. The following section offers short lessons on our country’s geography, national symbols, and national holidays. The geography of the United States is unusual because of the size of the country and the fact that it is bordered by two oceans that create natural boundaries to the east and west. Through visual symbols such as our flag and the Statue of Liberty, the values and history of the United States are often expressed. Finally, you will also learn about our national holidays and why we celebrate them. Most of our holidays honor people who have contributed to our history and to the development of our nation. By learning this information, you will develop a deeper understanding of the United States and its geographical boundaries, principles, and freedoms.
Parts A (Geography), B (Symbols), and C (Holidays)
This concludes the Civics Portion of the Booklet.
What Is the Flu?
Flu is the common name for influenza. It's caused by a virus that infects the nose, throat, and lungs.
Often, when you're sick with a virus, your body builds a defense system by making antibodies against it. That means you usually don't get that particular type of virus again. Unfortunately, flu viruses mutate (change) each year. So getting sick once doesn't protect you from the flu forever.
Some years, the change in the flu virus is slight. So if you do get the flu, it's mild. The antibodies from having the flu before give you some protection. But other years, the flu virus goes through a major change and many people get very sick.
When Is Flu Season?
Flu viruses usually cause the most illness during the colder months of the year. In the United States, flu season is from October to May. Kids get the flu most often. But people in every age group can catch it.
How Does the Flu Spread?
The flu virus spreads through the air when a person who has the virus sneezes, coughs, or speaks. The flu can sometimes spread through objects that someone with the virus touched, sneezed, or coughed on. When a healthy person touches these contaminated items and then touches their mouth or nose, the virus can enter their body.
People carrying the virus can be contagious from the day before their symptoms start until about a week later. So it's possible to spread the flu before you know you're sick.
Viruses like the flu virus can spread easily in schools. Then, students can bring the virus home to family members and people around them, spreading the illness in their communities.
What Are the Signs & Symptoms of the Flu?
Flu symptoms start about 2 days after a person was exposed to the virus. The main symptoms are:
- sore throat
- a high fever that comes on suddenly
- muscle aches
- stuffy nose
- dry cough
- feeling very tired or weak
- loss of appetite
The fever and aches usually stop in a few days. But the sore throat, cough, stuffy nose, and tiredness may go on for a week or more.
The flu also can cause vomiting, belly pain, and diarrhea. But if you have only vomiting and diarrhea without the other flu symptoms, you probably have gastroenteritis. Gastroenteritis, often called the "stomach flu," isn't the same as influenza. It's usually caused by common viruses that we come into contact with every day.
How Is the Flu Diagnosed?
Based on your symptoms and how you look, your doctor usually can tell if you have the flu. Most people who have it look ill and miserable.
Other infections can cause symptoms similar to the flu. So if a doctor needs to be sure that someone has the flu, they might do a test. They'll take a sample of mucus by wiping a long cotton swab inside the nose or throat. Results might be ready quickly, or can take longer if the test is sent to a lab.
You may feel miserable if you get the flu, but it's unlikely to be serious. It's rare that healthy teens get other problems from the flu. Older adults (over age 65), young kids (under age 5), and people with ongoing medical conditions are more likely to become seriously ill with the flu.
What Should I Do if I Have the Flu?
If you get the flu, the best way to take care of yourself is to rest in bed and drink lots of liquids like water and other non-caffeinated drinks. Stay home from school until you feel better and your temperature has returned to normal.
Most people get better on their own after the virus runs its course. But call your doctor if you have the flu and:
- You're getting worse instead of better.
- You have trouble breathing.
- You have a medical condition (such as diabetes, heart problems, asthma, or other lung problems).
Most teens can take acetaminophen or ibuprofen to help with fever and aches. Don't take aspirin or any products that contain aspirin, though. If kids and teens take aspirin while they have the flu, it puts them at risk for Reye syndrome, which is rare but can be serious.
Antibiotics don't work on viruses, so they won't help someone with the flu get better. Sometimes doctors can prescribe an antiviral medicine to cut down how long a person is ill from the flu. These medicines are effective only against some types of flu virus and work best when taken within 48 hours of when symptoms start. Doctors usually use this medicine for people who are very young, elderly, or at risk for serious problems, like people with asthma.
Can the Flu Be Prevented?
There's no guaranteed way to avoid the flu. But getting the flu vaccine can help. Everyone 6 months of age and older should get it every year.
Flu vaccines are available as a shot or as a nasal spray. Both work equally well. This flu season (2022–2023), get the vaccine your doctor recommends. People with weak immune systems or some health conditions (such as asthma) and pregnant women should not get the nasal spray vaccine.
What else can you do? Wash your hands well and often. Avoid sharing cups, utensils, or towels with others. If you do catch the flu, use tissues whenever you sneeze or cough to avoid spreading the virus.
If you do get the flu this season, take care of yourself and call your doctor with any questions or concerns. When you're feeling bad, remember that the flu usually lasts a week or less and you'll be back to normal before too long.
What is Hirschsprung’s disease?
Hirschsprung’s disease is a rare congenital condition (present at birth) that causes blockages of the bowel.
Usually, the bowel squeezes and relaxes to push poo along and out of the bottom. In Hirschsprung’s disease, the bowel is not able to relax which means poo cannot pass, causing a blockage.
The reason the bowel does not relax is because certain nerve cells – called ganglion cells – are missing from the lower part of the bowel.
Hirschsprung’s disease is treated with surgery.
About 1 in every 5,000 babies born in the UK is diagnosed with Hirschsprung’s disease.
Boys are 4 times more likely to have Hirschsprung’s disease than girls.
Signs of Hirschsprung’s disease
Signs of the condition include:
- not passing meconium in the first 48 hours – meconium is the dark tar-like poo usually passed by babies soon after birth
- vomiting green bile
- a swollen tummy
- in older children, severe constipation
Hirschsprung’s disease occurs when the nerve cells (ganglion cells) are missing. These cells cause the bowel to relax and contract to push poo along and out of the bottom. When they are not present, the bowel becomes blocked.
Hirschsprung’s disease usually affects only the rectum and sigmoid colon (large bowel).
Sometimes it can affect longer sections of colon. It is rare for it to affect the small bowel.
The cause of Hirschsprung’s disease is not known. The condition is not caused by anything the parents have done, or not done, during pregnancy.
It is associated with genetic mutations and it can sometimes run in families.
Most children with Hirschsprung’s disease do not have any other medical conditions and have no family history of the condition.
If your baby has Hirschsprung’s disease you are more likely to have another child with the condition. Our clinical genetics team can give more information about this.
The condition is sometimes associated with other genetic conditions such as Down’s syndrome.
Further information about Down’s syndrome
Down’s Syndrome Association: Homepage
Down’s Syndrome Association: For New Parents
Hirschsprung’s disease is diagnosed by looking at a tissue sample under a microscope to see if the nerve cells (ganglion cells) are present or missing.
In babies, a tiny piece of tissue is taken from the rectum using a suction device called a biopsy gun which is inserted into the bottom.
Performing a rectal biopsy with a biopsy gun in a newborn baby is a painless procedure and does not require an anaesthetic.
In older children, the biopsy is performed in the operating theatre under a general anaesthetic.
A contrast enema is a thick liquid that shows up on X-ray.
The liquid is inserted into your baby’s bottom and means the lower digestive system can be seen on X-ray.
All children with Hirschsprung’s disease will need surgery.
While they wait for surgery, they may need:
- rectal washouts: A thin tube is inserted into your baby’s bottom and warm salt water is used to wash out trapped poo. This is usually done 1–2 times a day. Parents can learn how to do this at home
- a temporary stoma: If the child is older or if rectal washouts do not remove enough poo, a temporary stoma can be formed. A stoma is where the intestine is brought to the surface of the tummy and poo collects in a pouch. The stoma is removed after surgery
Babies cared for at Manchester Centre for Neonatal Surgery (MCNS) are offered a ‘pull through’ operation within their first few months of life.
During the operation, the majority of the bowel affected by Hirschsprung’s disease is removed and healthy bowel is brought down to the anus where the bowel is then joined back together.
Children with Hirschsprung’s disease are at risk of enterocolitis.
It is a serious condition that can develop very quickly and can sometimes be life threatening.
The signs of enterocolitis are:
- a swollen tummy
- explosive and smelly poo (sometimes with blood)
- a high temperature
If your baby has these signs they should be taken to hospital urgently.
After a ‘pull through’ operation, children are at risk of:
- incontinence of poo
There are medical treatments available for these problems if they arise and occasionally further operations are needed to help with them.
All children will have regular follow-up appointments in our specialist clinics.
Point of View
Commonly Misused and Misspelled Words
Style chapter overview: Simplicity: Simplicity does not mean writing simple sentences. A series of short simple sentences can sound too simple and unsophisticated in academic writing. Simplicity in writing is trimming the fat: eliminating wordiness and saying what you want to say clearly and directly. A reader cannot be convinced of your point if they get lost in the sentences.
Point of View: Point of view refers to the position from which a writer “speaks” to their audience. Writers must be careful to maintain a consistent point of view. Academic writing should primarily rely on third person point of view to appear objective, with minimal instances of first person point of view.
Word Choice: You want to choose the best, most effective words to form clear and convincing sentences. So what makes the best word choices? When writing academic essays, you want to use concrete and specific words that directly engage the senses and give precise meaning. Concrete words refer to objects that we can hear, see, feel, touch, and/or smell.
Sentence Crafting: You want to consciously create clear and focused sentences by using energetic verbs (replace the bland verb “to be” when you can), preferring the active voice (rather than passive voice), and choosing clear noun references (don’t use vague pronouns that don’t have a clear referent).
Sentence Combining: Trying to achieve simplicity in your writing does not mean writing only in short sentences. If your essays are filled with short sentences, they will read as choppy and the relationships between the sentences will not be as clear. Combining or joining sentences can convey your ideas more fluidly and logically. However, you also want rhythm in your writing which can be created through varied sentence length and structure. Include short sentences for impact.
Parallelism: Parallelism is giving two or more parts of a sentence a similar form so as to give the passage a definite pattern, to give the ideas the same level of importance, and to create balance.
Commonly Misused and Misspelled Words: As English teachers who read a lot of essays, we see some words that are regularly used incorrectly, and we see some words that are commonly misspelled. Consult the lists provided to avoid common errors.
“It is not a daily increase, but a daily decrease. Hack away at the inessentials.” ― Bruce Lee
William Zinsser, an expert on writing and author of On Writing Well, said: “The sentence is too simple—there must be something wrong with it. But the secret of good writing is to strip every sentence to its cleanest components. Every word that serves no function, every long word that could be a short word, every adverb that carries the same meaning that’s already in the verb, every passive construction that leaves the reader unsure of who is doing what—these are a thousand and one adulterants that weaken the strength of a sentence.”
Simplicity does not mean writing simple sentences. A series of short simple sentences (He went to the store. The store was far. The day was hot. He was tired.) can sound too simple and unsophisticated in academic writing. You want complexity in your sentences, but that does not mean cramming in smart-sounding words and making long rambling sentences.
Simplicity in writing is trimming the fat: eliminating wordiness and saying what you want to say clearly and directly. A reader cannot be convinced of your point if they get lost in the sentences.
WHY IS IT IMPORTANT?
Simplicity in writing is beneficial because…

(1) when writers cram smart-sounding words into long sentences because they think that’s what teachers want, the sentences are not always easy to follow and can confuse your reader.

(2) sentences that are clear and easy to follow are easier for your reader to understand and eventually be convinced by the points that you are trying to make.

(3) the more that writers can strip down their sentences to the most important parts, the better they can control what they want to say and shape the meaning in the writing they are striving to create.
Let’s try that again. Simplicity in writing is beneficial because…
(1) direct sentences are clearer.
(2) direct sentences are more convincing.
(3) writers can better control and shape meaning.
HOW DO I DO IT?
Take notice of common expressions that are needlessly wordy and trim them:
A common violation of conciseness is the presentation of a single complex idea, step by step, in a series of sentences which might better be combined into one:
Macbeth was very ambitious. This led him to wish to become king of Scotland. The witches told him that this wish of his would come true. The king of Scotland at this time was Duncan. Encouraged by his wife, Macbeth murdered Duncan. He was thus enabled to succeed Duncan as king. (55 words)
Encouraged by his wife, Macbeth achieved his ambition and realized the prediction of the witches by murdering Duncan and becoming king of Scotland in his place. (26 words)
The active voice is more concise and vigorous than the passive.
The large chunks of debris covering the roof and clogging the drainpipes were removed by city workers.
City workers removed the large chunks of debris covering the roof and clogging the drainpipes.
The active voice can also strengthen bland expressions and wordy phrasing:
There were a great number of dead leaves lying on the ground.
Dead leaves covered the ground.
The reason that he left college was that his health became impaired.
Failing health compelled him to leave college.
It was not long before he was very sorry that he had said what he had.
He soon repented his words.
Revise the following passages, avoiding wordiness and undesirable repetition.
A large number of people enjoy reading murder mysteries regularly. As a rule, these people are not themselves murderers, nor would these people really ever enjoy seeing someone commit an actual murder, nor would most of them actually enjoy trying to solve an actual murder. They probably enjoy reading murder mysteries because of this reason: they have found a way to escape from the monotonous, boring routine of dull everyday existence.
To such people the murder mystery is realistic fantasy. It is realistic because the people in the murder mystery are as a general rule believable as people. They are not just made up pasteboard figures. It is also realistic because the character who is the hero, the character who solves the murder mystery, solves it not usually by trial and error and haphazard methods but by exercising a high degree of logic and reason. It is absolutely and totally essential that people who enjoy murder mysteries have an admiration for the human faculty of logic.
But murder mysteries are also fantasies. The people who read such books of fiction play a game. It is a game in which they suspend certain human emotions. One of these human emotions that they suspend is pity. If the reader stops to feel pity and sympathy for each and every victim that is killed or if the reader stops to feel terrible horror that such a thing could happen in our world of today, that person will never enjoy reading murder mysteries. The devoted reader of murder mysteries keeps uppermost in mind at all times the goal of arriving through logic and observation at the final solution to the mystery offered in the book. It is a game with life and death. Whodunits hopefully help the reader to hide from the hideous horrors of actual life and death in the real world.
WHAT IS POINT OF VIEW?
Point of view refers to the position from which writers “speak” to their audience. Writers have a point of view in all types of writing (and speaking), including emails, text messages, essays, articles, stories, etc.
Writers have three different options for point of view:
First person point of view makes direct references to the writer using the following pronouns: I, me, my, myself, mine, we, us, our, and ourselves.
Second person point of view makes direct references to the reader using the following pronouns: you, your, yourself, and yourselves.
Third person point of view directly states who or what the writing discusses without using first person pronouns; third person point of view uses the following pronouns: he, she, it, they, him, her, his, hers, its, itself, them, their, themselves, one, etc.
WHY IS IT IMPORTANT?
Although creative writing gives writers more flexibility with the point of view, academic essays typically use third person point of view (with minimal uses of first person point of view) because third person enhances credibility by appearing objective and also emphasizes the topic instead of the writer.
Here’s a guide for when you use which point of view and why:
First person point of view allows writers to write about themselves when including specific personal examples (“The author’s criticisms are accurate which I know from having also served in the army as a young woman”). In some projects, first person point of view can be used to show how a writer’s research or ideas build on or depart from the work of others.
Second person point of view allows the writer to speak directly to the reader, so it is helpful in “how to” instruction (like in this Rhetoric); however, it is not commonly used in academic writing because it can include your readers in beliefs they may not share (“When you listen to the president, you wonder how he got elected.”). Using “you” can also be imprecise (“You can drive around for hours looking for parking.” This is not true for all. This is more precise: “San Franciscans can drive around for hours looking for parking.”). Using “you” is also more informal and conversational. For these reasons second person is not commonly used in academic writing.
Third person point of view allows the writer to appear objective and should be the primary point of view for academic essays and other formal types of communication.
HOW DO I USE IT?
As you write your essays, you will need to carefully consider how you use point of view so that your writing has a consistent voice throughout the essay. Let’s look at some basics on using point of view.
1. Consistent Point of View—Writers must be careful and maintain a consistent point of view; as noted above, academic writing should primarily rely on third person point of view with minimal instances of first person point of view. When writers switch the point of view within a sentence, the sentences may be confusing.
ORIGINAL: Students should make sure they register early for the Rock the School Bells conference since he will not have a chance to get tickets the day of the conference.
REVISED: Students should make sure they register early for the Rock the School Bells conference since they will not have a chance to get tickets the day of the conference.
Another consideration for a consistent point of view relates to using plural nouns and pronouns instead of the singular forms; this approach helps writers be more concise and avoid the unnecessary use of “he/she” and “him/her.” While “he/she” and “him/her” may be grammatically correct, you can achieve a stronger voice and better style by minimizing the use of these phrases.
ORIGINAL: A student should make sure he/she signs up early for the workshops he/she wants to attend for his/her classes.
REVISED: Students should make sure they sign up early for the workshops they want to attend for their classes.
2. Personal examples—When you include personal examples or experiences to illustrate a point in an academic essay, you should not refer to yourself in the third person. Instead, use first person point of view to avoid accidental changes in point of view as well as awkward references to yourself.
ORIGINAL: Last year, Rachel Everett attended the Rock the School Bells conference, and I learned the history of hip hop. (NOTE: the writer, Rachel Everett, first refers to herself in the third person and switches to first person in the second half of the sentence)
REVISED: Last year, I attended the Rock the School Bells conference, and I learned the history of hip hop.
3. Unnecessary use of first person—When writing academic essays, you will often need to make an argument, which requires you to state your opinion on the topic and sources. You do not need to use phrases like “I think/feel/believe” or “in my opinion.” If you have written a grammatically correct sentence, you will be able to simply delete these phrases (and still state your opinion).
ORIGINAL: I think Dyson misses the point when he argues that older generations do not appreciate hip hop because to me many parents and grandparents do appreciate hip hop.
REVISED: Dyson misses the point when he argues that older generations do not appreciate hip hop because many parents and grandparents do appreciate hip hop.
Point of View
Revise the following sentences to make the point of view consistent.
1. A student should seek help from counselors to make sure they have student educational
2. Professor Garcia’s classes teach students critical thinking while it also helps them improve
3. A new student must work hard to learn about the college resources he or she may need as
they begin their college careers.
4. If you want more active participation in class, teachers will appeal to different learning
Revise the following sentences to remove the unnecessary use of first person.
5. Skyline College has great programs to help students get a good education, so I think local
high school students should seriously consider starting their education here.
6. In my opinion, California should provide more funding to community college students
because I believe education should be a top priority for the government. |
Some colours come from the properties of individual molecules, some colours come from the shape of things. This is a post about the colour from the shape of things – structural colour, like that found in the Morpho rhetenor butterfly pictured on the right.
To understand how this works, we first need to know that light is a special sort of wave known as electromagnetic radiation, and that these waves are scattered by small structures.
For the purposes of this post the most important property of a wave is its wavelength, its “size”. The wavelengths of visible light fall roughly in the range 1/2500 to 1/1400 of a millimetre, that is, about 400 to 700 nanometres (1/1000 of a millimetre is a micron). Blue light has a shorter wavelength than red light.
Things have colour either because they generate light or because of the way they interact with light that falls upon them. The light we see is made of many different wavelengths, the visible spectrum. Each wavelength has a colour, and the colour we perceive is a result of adding all of these colours together. Our eyes only have three different colour detectors, so in the eye a multiplicity of wavelengths is converted to just three signals which we interpret as colour. The three colour detectors are why we can get a full colour image from a TV with just three colours (red, green and blue) mixed together. Some other animals have more colour sensors, so they see things differently.
The problem with viewing the small structures that lead to the blue colour of the butterfly wings is that their interesting features are about the same size as the wavelength of light, which means you can’t really tell much by looking at them under a light microscope. They come out blurry because they’re at the resolution limit. So you resort to an electron microscope. Electrons act as a wave with a short wavelength, so you can use an electron microscope to look at small things in much the same way as you would use a light microscope, except that the wavelength of the electrons is much smaller than that of light, so you can look at much smaller things.
So how do we explain resolution (how small a thing you can see) in microscopy? I would like to introduce a fresh analogy in this area. Summon up in your mind a goat (tethered and compliant), a beachball (in your hands), and a ping-pong ball (perhaps in a pocket). Your task is to explore the shape of the goat, by touch, via the beachball, so proceed to press your beachball against the goat. The beachball is pretty big, so you’re going to get a pretty poor tactile picture of the goat. It’s probably going to have a head and a body, but the legs will be tricky. You might be able to tell the goat has legs, but you’re going to struggle to make out the two front legs and the two back legs separately. Now discard the beachball and repeat the process with the ping-pong ball. Your tactile picture of the goat should become much clearer. The beachball represents the longer wavelength of light, the ping-pong ball the shorter wavelength of electrons in an electron microscope.
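The goat analogy can be put into rough numbers. The sketch below is a back-of-envelope illustration (a simplified form of the Rayleigh resolution criterion, with illustrative order-of-magnitude values, not measurements from this post): a probe cannot resolve features much smaller than about half its wavelength.

```python
# Back-of-envelope version of the goat analogy: a probe (light or
# electrons) cannot resolve features much smaller than roughly half
# its wavelength. All numbers are illustrative orders of magnitude.

def resolvable(feature_nm: float, wavelength_nm: float) -> bool:
    """Crude resolution test: is the feature larger than half the probe wavelength?"""
    return feature_nm > wavelength_nm / 2

blue_light_nm = 450       # the "beachball"
tem_electron_nm = 0.005   # a typical electron-microscope wavelength, the "ping-pong ball"
wing_ridge_nm = 100       # rough scale of the butterfly wing structures

print(resolvable(wing_ridge_nm, blue_light_nm))    # False: blurry under a light microscope
print(resolvable(wing_ridge_nm, tem_electron_nm))  # True: sharp under an electron microscope
```

The hundred-nanometre wing structures fail the test for visible light but pass it easily for electrons, which is exactly why the images below had to come from an electron microscope.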
And now for scattering: retrieve your beachball and step back from the goat. You are now going to repeatedly throw the beachball and the ping-pong ball at the goat and examine where the balls end up after striking the goat. This is a scattering experiment. You can see that how a ball bounces off the goat will depend on the size of the ball and, obviously, on the shape of the goat. This isn’t a great analogy, but it gives you some idea of how the shape of an object can lead to different wavelengths being scattered in different ways.
So, returning to the butterfly at the top of the page: the iridescent blueness doesn’t come from special blue molecules but from subtle structures on the surface of the wings. These are pictured below; because these features are smaller than the wavelength of light, we need to take the image using an electron microscope (we are in ping-pong-ball mode). The structures on the surface of the butterfly’s wing look like tiny Christmas trees.
These structures reflect blue light really well, because of their shape, but not other colours – so the butterfly comes out blue.
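One way to see how shape alone can pick out a single colour is a constructive-interference sketch. Assuming, purely for illustration, that the Christmas-tree ridges behave like a simple stack of layers with spacing d and refractive index n, reflections from successive layers reinforce at wavelengths where 2nd equals a whole number of wavelengths. The spacing and refractive index below are assumed, chitin-like values, not data from the Vukusic papers:

```python
# Why a stack of thin layers reflects one colour: reflections from
# layers a distance d apart reinforce when 2 * n * d is a whole number
# of wavelengths (lambda = 2 * n * d / m). Illustrative values only.

def reflected_wavelength_nm(spacing_nm: float, n: float = 1.56, order: int = 1) -> float:
    """First-order constructive-interference wavelength for layer spacing d."""
    return 2 * n * spacing_nm / order

# A spacing of about 150 nm in a chitin-like material picks out blue light:
print(round(reflected_wavelength_nm(150)))  # 468 (nm), in the blue
```

Spacings of this scale reinforce blue wavelengths while other colours interfere away, which is the basic reason a colourless material can look vividly blue.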
Another example of special structures that interact with light is this *very* white beetle:
It turns out that the details of the distribution of the scale material (keratin) and air in the scale conspire to make the scale highly reflective. Making things white is something important to a number of industries, for example those that make paint or paper. If we can work out how the beetle does this trick then we can make cheaper, thinner, better white coatings.
Finally, this is something a little different. If you’ve got eyes, then you want to get as much light into them as possible. The problem is that some light gets reflected from the surface of an object, even if it is transparent – think of the reflection of light from the front surface of a clear glass window. These structures:
known as an “anti-reflective nipple array”, are found on the surface of butterfly eyes. The nipples stop the light being reflected from the surface of the eye, allowing it instead to enter the eye. Similar structures are found on the surface of transparent butterfly wings.
In these cases animals have evolved structures to achieve a colour effect, but more widely we see structural colours in other places like rainbows, opal, oil films and CDs. The sky is blue for a related reason…
The work on butterflies and beetles was done by a team led by Peter Vukusic at Exeter University:
- Vukusic P, Sambles JR, Photonic structures in biology, Nature 424, (2003), 852-855. Lots more examples in here, caption to figure 7: Anti-reflective nipple arrays.
- Hallam BT, Hiorns AG, Vukusic P, Developing optical efficiency through optimized coating structure: biomimetic inspiration from white beetles, Applied Optics 48, (2009), 3243. |
Literary Definition of Narrative Techniques
Narrative techniques are the methods and devices writers use to tell stories, whether in works of literature, film, theater or even oral stories. Many techniques rely upon specific uses of phrasing, punctuation or exaggerations of description, but nearly every storyteller, regardless of genre or style, employs a few foundational techniques.
The Writer's Lens
Perspective is the lens through which a story is told. A story told in the first-person point of view is one in which the narrator is the main character and uses the pronoun "I" when describing himself. For example, Holden Caulfield is the first-person narrator of "The Catcher in the Rye." This type of perspective is intimate, but it can limit the reader's perspective and relies upon a narrator who may not be reliable. By contrast, a story in which the narrator simply tells the tale of other characters, and never uses "I," is called third-person narration. This narrator can be omniscient, or limited to awareness of the thoughts of a single character.
Now, or In the Past?
The choice of tense also impacts the effect of a story. Traditionally, narratives are told in the past tense. The present tense is often employed by short story writers. The desired effect of present tense is that the reader experiences the events of the story as they are happening, and so the story conveys a sense of immediacy. Present tense, however, is often criticized for being a gimmick when used for the entire length of a narrative, as opposed to being used briefly for a moment of added expression. The tense employed by the writer also enables time shifts, or flashbacks, which is a narrative technique that allows the writer to tell the story and build tension through foreshadowing or memory. "The Time Traveler's Wife" by Audrey Niffenegger, for example, uses time shifting extensively to unfold the love story central to the narrative.
Similes and Metaphors
Similes and metaphors are foundational devices for storytelling because they allow writers to describe the essence of a character or action both economically and profoundly. Similes and metaphors, however, have a more subtle impact than simply enhancing the description in a piece of writing. They also reveal aspects of the character who makes the comparison. For example, if a first-person narrator regularly compares people to mechanical devices and machines, it might indicate the narrator is detached from emotion or unable to fully understand and empathize with the emotions the narrator observes.
Evoking the Imagination
The goal of telling a story -- in its simplest form -- is to transmit narrative events from your imagination into the imagination of the reader. Images are powerful tools for accomplishing this task, and so imagery is a fundamental tool that writers use. Vividly describing how characters and places look helps the reader to visualize your story, and have an experience that is more immersed in the scenes and world of the narrative.
Dialogue
While dialogue isn't necessary in a story, it rounds out the basic narrative techniques that all writers use, and it is the only one that allows the reader to experience multiple characters' words without the filter of the narrator. Dialogue is when the narrator directly provides a conversation between characters. These exchanges are usually contained within quotation marks to make them easy to identify, but some writers make a stylistic choice not to use them. Notwithstanding, dialogue is a useful tool for allowing characters to reveal qualities about themselves. Passages of dialogue also break up the monotony of prose narration and introduce the unique voices of the characters into the story.
- The Catcher in the Rye; J.D. Salinger
- University of Washington: Tenses in Writing
Christopher Cascio is a memoirist and holds a Master of Fine Arts in creative writing and literature from Southampton Arts at Stony Brook Southampton, and a Bachelor of Arts in English with an emphasis in the rhetoric of fiction from Pennsylvania State University. His literary work has appeared in "The Southampton Review," "Feathertale," "Kalliope" and "The Rose and Thorn Journal." |
Autism & Behavioral Services
Speech therapy is the assessment and treatment of communication problems and speech disorders. It is performed by speech-language pathologists, who are often referred to as speech therapists. Speech therapy techniques are used to improve communication; these include articulation therapy, language intervention activities, and others, depending on the type of speech or language disorder.
Writing and Thinking About Therapy
Speech therapy may be needed for speech disorders that develop in childhood or for speech impairments in adults caused by an injury or illness, such as stroke or brain injury. Articulation disorders are problems with making sounds in syllables, or saying words incorrectly to the point that listeners cannot understand what is being said.
Receptive disorders are problems with understanding or processing language. Expressive disorders are problems with putting words together, having a limited vocabulary, or being unable to use language.
Developing robust literacy skills empowers children to succeed in school and life. In Cobb County School District, we strive to ensure every student is reading on or above grade level by grade 3.
The Cobb County Teaching and Learning Standards in English Language Arts provide a rigorous set of proficiencies in reading, writing, listening, speaking, and language. As part of our district’s systematic and balanced approach to literacy, K-5 students are learning to become fluent and proficient readers and receive explicit instruction in phonics, spelling, and vocabulary.
In some schools, however, our data and test scores indicated that students were struggling. So, in 2016, we launched an Early Literacy Initiative to add another layer to our approach and help K-2 students develop a strong literacy foundation.
Addressing early literacy learning
The Early Literacy Initiative addresses early literacy learning through the application of consistent high-quality instruction, targeted interventions, timely data evaluation, and collaboration between school and district leaders. It is built upon a strategy of two-week, structured, direct instructional cycles followed by individual common formative assessment.
Impact on Children
How does family violence affect Children?
Children who witness abuse display the same emotional responses as children who have been physically and emotionally abused.
The family is the place where a child learns about the world. Living in a family where parents are physically or verbally abusive to each other, a child learns that:
- The world is an unstable and insecure place
- Parents are unpredictable in their roles as caregivers and partners
- Violence is the best way to solve problems
- I have to be in control to be OK
- It is my fault that my parents fight
- People sometimes deserve to be hit
- Love is painful
The violent family setting demonstrates the following characteristics:
- Modeling of aggression
- Disrespect towards women
- Expectation that boys must be powerful and controlling
- Limited modeling of positive coping strategies (family problem-solving, decision making)
- Disturbed family system, i.e. Blurred boundaries between children and parents
- Maladaptive alliances, i.e. Children and Father ganging up on Mom
- Negative communications, lack of effective communication skills
- Inconsistent and/or inappropriate discipline; mixed messages
In reaction to crisis in the violent family setting, children may have difficulty:
- Reacting to actual or perceived parental separation, exhibited by denial, anger, depression.
- Coping with stressors in the family, i.e. financial problems, unemployment, Mom’s emotional state, police involvement, daily routine, reactions from other family members.
- Expressing their feelings; due to the secrecy and closed family system children are not allowed to talk about the violence outside of the family.
Common problems that children growing up in violent families exhibit:
- Low self-esteem
- Negative attention-getting behaviors
- Aggressive, explosive, impulsive
- Passivity; withdrawn
- Controlling behaviors
- Limited pro-social skills
- “Little adult”- Has difficulty having fun; takes on adult responsibilities, inhibited.
- Regression to more immature behaviors
- “Discipline problems”
- School avoidance
- General anxiety, depression
- Premature involvement in sexual relations
- Afraid to make mistakes; perfectionistic or afraid to try
- Afraid to be children, i.e. make noise, have fun, be free
Children growing up in violent families have a heightened need for:
- Nurturance, support, and reassurance
- Predictability in relationships
- Consistency in parenting and daily routines
- Control in relationships
- Sensitivity to and constructive outlets for their feelings
Here are some interesting facts about children and domestic violence:
- Over half of female domestic violence victims live in households with children under the age of 12.
- Research indicates that up to 90 percent of children living in homes where there is domestic violence know what is going on.
- In a study of more than 6,000 families in the United States, it was reported that half of the men who physically abused their wives also abused their children. Also, older children are frequently assaulted when they interfere to defend or protect the victim.
- A child’s exposure to domestic violence is the strongest risk factor for transmitting violent behavior from one generation to the next.
- Childhood abuse and trauma has a high correlation to both emotional and physical problems in adulthood, including tobacco use, substance abuse, obesity, cancer, heart disease, depression and a higher risk for unintended pregnancy.
What can you do?
- If your child is witnessing abuse in your home, what you’re experiencing is likely made even worse by the worry and concern you feel for your child. It’s important to remember that both you and your children’s needs are important.
- Children may respond differently to witnessing abuse. They may withdraw, or they might act out. They might pretend it’s no big deal, or they may quickly show signs of trauma, such as anxiety, sleep disruption or problems in school. Children can experience a range of emotions when living in abusive households including fear, anger, isolation and guilt. They may even feel conflicted about loving their abusive parent. These are very normal feelings, and it’s important that they be validated.
- It’s normal for people who have been in a violent relationship to NOT want to talk to their kids about it. It might seem safer to pretend that the abuse didn’t happen, assume that the kids don’t know about it or hope they will just forget about it. But, denying or ignoring abuse can actually create more confusion and fear, so it’s important to talk to your children about what’s going on whenever possible.
- Have conversations. Let children know that it’s okay to talk about what has happened. Stress that abuse is wrong, but avoid criticizing the abuser if they are a parent or parent-figure to the child.
- Remind your kids that the abuse is never their fault. Make sure that they know that you care about them. Children are extremely resilient, and while the impact of abuse can be long lasting, knowing that they have someone to depend on that loves them will help them heal.
- Above all, proceed with caution and listen to your instincts. Tap into what you feel is best for both you and your child. There are often pros and cons of either staying with or leaving an abusive partner. It can be a dangerous situation either way. If you do decide to leave your relationship, consider when and how to best leave. Allow children to be open about their feelings in the process, and devise a safety plan (whether staying or leaving). |
In the periodic table, every element can be classed as either a metal or a non-metal. Test your skills in this MCQ quiz on metals and non-metals. Here, we have listed practice metals and non-metals multiple choice questions with objective answers in science for tests and various entrance exam preparations. Professionals, teachers, students and kids can take these trivia quizzes to test their knowledge of the subject.
1. Which of the following can be beaten into thin sheets?
2. Which of the following statements is correct?
3. Which of the following methods is suitable for preventing an iron frying pan from rusting?
4. An element reacts with oxygen to give a compound with a high melting point. This compound is also soluble in water. The element is likely to be
5. Food cans are coated with tin and not with zinc because?
6. Metals generally have _____ number of electrons in their valence shell.
7. Non-metals contain ______ number of electrons in their outermost shell.
8. Non-metals form
9. To become stable, metals
MCQ Multiple Choice Questions and Answers on Metals and Non-metals