The village stood on a plateau that stretched out in an east–west direction. A secondary road linked it to a highway that ran between Bayt Jibrin (an important village in the Hebron sub-district) and the Jerusalem–Jaffa highway. A number of paths also connected it with the area's other villages. Sufla contained archaeological evidence of a Crusader presence. In the late nineteenth century, Sufla was situated on a narrow ridge and had a spring to the southeast. The village was classified as a hamlet in the Mandate-era Palestine Index Gazetteer, and was rectangular in shape. Most of its houses were built of stone, and some of them had a cave-like design. When new houses were constructed, the village expanded along a road that led to the nearby village of Jarash. The people of Sufla were Muslims and maintained a shrine for a local shaykh (Shaykh Mu'annis) on the western side of the village. They obtained water for domestic use from two springs in the southeast and northeast. Their lands were watered by rainfall and were planted in grain, fruit trees, olive trees, and grapes; olive trees covered 24 dunums. In 1944/45 a total of 400 dunums was allocated to cereals. Parts of the village lands were used as grazing areas.
In the second half of October 1948, Israeli forces launched Operation Ha-Har to capture a string of villages on the southern front (see 'Allar, Jerusalem sub-district). Sufla was one of the villages captured at the beginning of the operation by the Sixth Battalion of the Har'el Brigade, which seized the village during either the night of 18–19 October or the following night. According to Israeli historian Benny Morris, 'Most of the population [of these villages] fled southwards, towards Bethlehem and the Hebron hills.' Morris also cites evidence of expulsions at some villages in the area.
There are no Israeli settlements on village lands.
Stone rubble from houses is scattered throughout the site, which has become an open grazing area. Cave-like structures, formerly used as dwellings, are also present, and cactuses grow among the ruins and rubble. The village cemetery lies to the east of the site, and almond and olive groves cover the areas to the west and north. |
In the natural world, small animals frequently latch onto larger beings and “hitch a ride” to conserve energy while traversing great distances.
A study recently published in the journal Current Biology reveals that minuscule Caenorhabditis elegans worms have the capacity to utilize electric fields to “leap” across Petri dishes or onto insects. This capability enables them to glide in the air and attach themselves, for example, onto naturally charged bumblebee chauffeurs.
“Pollinators, such as insects and hummingbirds, are known to be electrically charged, and it is believed that pollen is attracted by the electric field formed by the pollinator and the plant,” says Takuma Sugi, a biophysics professor at Hiroshima University and co-senior author on the study. “However, it was not completely clear whether electric fields are utilized for interactions between different terrestrial animals.”
A worm jumps onto a bumblebee along an electrical field. Credit: Current Biology/Chiba et al.
The researchers first began investigating this project when they noticed that the worms they cultivated often ended up on the lids of Petri dishes, opposite to the agar they were placed on. When the team attached a camera to observe this behavior, they found that it was not just because worms were climbing up the walls of the dish. Instead, they were leaping from the floor of the plate to the ceiling.
Suspecting travel by electric field, the researchers placed worms on a glass electrode and found that they only leaped to another electrode once a charge was applied. Worms jumped at an average speed of 0.86 meters per second (close to a human’s walking speed), which increased with electric field intensity.
Next, the researchers rubbed flower pollen on a bumblebee so that it could exhibit a natural electric charge. Once close to these bees, worms stood on their tails, then jumped aboard. Some worms even piled on top of each other and jumped in a single column, transferring 80 worms at once across the gap.
A cluster of worms leap together. Credit: Current Biology/Chiba et al.
“Worms stand on their tail to reduce the surface energy between their body and the substrate, thus making it easier for themselves to attach to other passing objects,” Sugi says. “In a column, one worm lifts multiple worms, and this worm takes off to transfer across the electric field while carrying all the column worms.”
C. elegans is known to attach to bugs and snails for a ride, but because these animals don’t carry electric fields well, they must make direct contact to do so. C. elegans is also known to jump on winged insects, but it was not clear how the worms were traversing such a significant distance for their microscopic size.
This research makes the connection that winged insects naturally accumulate charge as they fly, producing an electric field that C. elegans can travel along.
It’s unclear exactly how C. elegans performs this behavior. The worms’ genetics might play a role. Researchers observed jumping in other worm species closely related to C. elegans, and they noted that mutants who are unable to sense electric fields jump less than their normal counterparts.
However, more work is needed to determine exactly what genes are involved in making these jumps and whether other microorganisms can use electricity to jump as well.
Reference: “Caenorhabditis elegans transfers across a gap under an electric field as dispersal behavior” by Takuya Chiba, Etsuko Okumura, Yukinori Nishigami, Toshiyuki Nakagaki, Takuma Sugi and Katsuhiko Sato, 21 June 2023, Current Biology.
The study was funded by the Office for the Promotion of Nanotechnology Collaborative Research, the Japan Science Society, the Consortium Office for the Fostering of Researchers in Future Generations, Hokkaido University, the JSPS Core-to-Core Program, the Research Program of Five-star Alliance in NJRC Mater. & Dev, the Japan Society for the Promotion of Science, and the Japan Agency for Medical Research and Development. |
Quick summary: In this lesson, students will understand the steps needed to undertake a bush blitz in their schoolyard to find what animals live there. Students will work in small groups to observe, record and report back on what they find in the microhabitats in their schoolyard.
Although this lesson can be taught by itself, it also forms the fourth lesson in a unit of eight lessons that can be delivered in sequence to take your students through a complete backyard sustainability project.
- Students understand why a bush blitz is important
- Students understand the role of teamwork in collecting animals
- Students understand the role of entomologists.
21st century skills:
Australian Curriculum Mapping
Year 5 English
- Plan, rehearse and deliver presentations for defined audiences and purposes incorporating accurate and sequenced content and multimodal elements (ACELY1700)
- Clarify understanding of content as it unfolds in formal and informal situations, connecting ideas to students’ own experiences and present and justify a point of view (ACELY1699)
Year 5 Science
- Living things have structural features and adaptations that help them to survive in their environment (ACSSU043)
- Identify, plan and apply the elements of scientific investigations to answer questions and solve problems using equipment and materials safely and identifying potential risks (ACSIS086)
- Decide variables to be changed and measured in fair tests, and observe measure and record data with accuracy using digital technologies as appropriate (ACSIS087)
Year 6 English
- Participate in and contribute to discussions, clarifying and interrogating ideas, developing and supporting arguments, sharing and evaluating information, experiences and opinions (ACELY1709)
- Make connections between students’ own experiences and those of characters and events represented in texts drawn from different historical, social and cultural contexts (ACELT1613)
Year 6 Science
- The growth and survival of living things are affected by physical conditions of their environment (ACSSU094)
- Identify, plan and apply the elements of scientific investigations to answer questions and solve problems using equipment and materials safely and identifying potential risks (ACSIS103)
- Decide variables to be changed and measured in fair tests, and observe measure and record data with accuracy using digital technologies as appropriate (ACSIS104)
Syllabus outcomes: EN3-1A, EN3-8D, ST3-10LW, ST3-4WS, ST3-11LW.
General capabilities: Literacy, Critical and Creative Thinking, Personal and Social Capability.
Cross-curriculum priority: Sustainability OI.2.
Relevant parts of Year 5 English achievement standards: Students contribute actively to class and group discussions, taking into account other perspectives.
Relevant parts of Year 5 Science achievement standards: Students analyse how the form of living things enables them to function in their environments. Students predict the effect of changing variables when planning an investigation and communicate their ideas and findings using multimodal texts.
Relevant parts of Year 6 English achievement standards: Students contribute actively to class and group discussions, using a variety of strategies for effect.
Relevant parts of Year 6 Science achievement standards: Students describe and predict the effect of environmental changes on individual living things. Students design investigations into simple cause-and-effect relationships, and construct multimodal texts to communicate ideas, methods and findings.
Topic: Biodiversity, Sustainability.
This lesson is part of the wider unit of work Backyard Bush Blitz – Years 5 & 6.
Time required: Minimum 3 hours.
Level of teacher scaffolding: Low – minimal scaffolding if the class has completed the previous lessons. High if teachers have not completed the previous lessons.
- A2 bird's-eye maps of school grounds
- Beat sheets
- Berlese funnel traps
- Recorder Worksheet
- Reporter Worksheet
- Insect nets
- iPad for each group (optional)
- Magnifying box
- Magnifying glass
- Pencils (coloured and writing)
- Pitfall traps
- Ruler (for each group)
- String (for each group).
- Student Worksheet
Related professional development: Teach Science Inquiry in the Primary Classroom.
Keywords: Mapping, jobs, traps, insects, teamwork, schoolyard, habitats, spiders, observations, collections, ecology, leave no trace, identification, iNaturalist.
Bush Blitz is Australia’s largest nature discovery program, with the Bush Blitz TeachLive component delivered by Earthwatch Australia, who kindly provided the images in these lessons. Thank you to the Ian Potter Foundation, John T Reid Charitable Trusts and The Myer Foundation for generously supporting the development of these lessons.
Cool Australia’s curriculum team continually reviews and refines our resources to be in line with changes to the Australian Curriculum. |
Soil remediation is a critical environmental practice aimed at restoring or improving the quality of soil that has been contaminated or degraded by various pollutants, such as heavy metals, pesticides, petroleum products and industrial chemicals. The importance of soil remediation cannot be overstated due to its numerous ecological, agricultural, and human health benefits. Mycelium substrates, specifically mycoremediation, have emerged as a promising and sustainable approach to assist in soil remediation.
Here are some key points on the importance of soil remediation and how mycelium substrates can help:
- Environmental Protection: Contaminated soil can have severe adverse effects on the environment. It can lead to soil erosion, groundwater pollution, and harm to local ecosystems. Soil remediation helps mitigate these negative impacts, contributing to overall environmental protection and conservation efforts.
- Agricultural Productivity: Healthy soil is essential for agriculture, as it provides the necessary nutrients and support for plant growth. Soil contamination can lead to reduced crop yields and food safety concerns. Remediated soil can restore fertile ground for farming, ensuring food security and quality.
- Human Health: Contaminated soil can pose serious health risks to humans, especially if the contaminants leach into the water supply or are taken up by plants in the food chain. Soil remediation helps safeguard public health by reducing exposure to harmful substances.
- Biodiversity: Many soil-dwelling organisms, including microorganisms, insects, and plants, depend on a healthy soil environment. Soil remediation efforts aim to protect and restore these ecosystems, supporting biodiversity and ecological balance.
- Land Reclamation: Remediated soil can be repurposed for various land uses, including residential, commercial, and recreational purposes. This repurposing of land can revitalize urban areas and promote sustainable development.
Now, let’s explore how mycelium substrates play a role in soil remediation, which FarmBox Foods customer BLH Farm has been doing since acquiring a Gourmet Mushroom Farm:
Mycoremediation: Mycoremediation is a bioremediation technique that employs fungal mycelium, the thread-like vegetative part of fungi, to break down or absorb contaminants in the soil. Mycelium has several properties that make it effective in soil remediation:
- Biodegradation: Mycelium can secrete enzymes that break down complex organic molecules, making them more easily metabolized by other microorganisms and reducing the toxicity of contaminants.
- Metal Accumulation: Some species of fungi have the ability to accumulate heavy metals in their mycelium. This can help to immobilize or concentrate metals, preventing them from leaching into groundwater or affecting plant growth.
- Soil Structure Improvement: Mycelium can also improve soil structure by binding soil particles together, increasing soil porosity, and enhancing water retention.
- Carbon Sequestration: As fungi grow and decompose organic matter, they contribute to carbon sequestration, which can help mitigate climate change.
- Low Environmental Impact: Mycoremediation is often considered an environmentally friendly approach because it typically requires minimal external inputs and doesn’t produce harmful byproducts.
While mycelium substrates offer promising solutions for soil remediation, it’s essential to note that their effectiveness depends on various factors, including the type and extent of contamination, the specific fungi species used, and environmental conditions. That being said, mycoremediation is often used in combination with other remediation techniques to achieve optimal results. Additionally, research and development in this field continue to expand our understanding of how fungi can be harnessed for sustainable soil remediation practices. |
Human evolution has taken us through almost every part of the world. It is reasonable to think that as modern humans were evolving, the urge to explore and expand was also taking hold of us as a species. This is evidenced by the fact that these remain defining traits of our species today. Starting from the African plains, our ancestors had to cover large distances to settle down in the places that they did. This means that the signs of early humans can be found almost everywhere.
In a discovery in northern Saudi Arabia, researchers think that a band of Homo sapiens stopped to drink at a lake. This ancient lake must have been a favourite spot for the local animals, as the humans seem to have hunted some large mammals in the area. All this took place around 120,000 years ago. These details were reconstructed by the researchers who discovered ancient human and animal footprints. The area is now called the Nefud Desert, and it might provide us with a big clue as to what routes our ancestors took out of Africa.
The lifeless and unforgiving conditions of the Arabian Peninsula were not always like this. Researchers think that in a period known as the last interglacial, the area experienced much greener and more humid conditions. Looking at the Peninsula today, it is hard to imagine a time when it was bustling with life. The deserts that we see today were grasslands back then, and the Peninsula must have had freshwater lakes and rivers. The initial discovery of the footprints was made in 2017.
The footprints were dated using a technique known as optically stimulated luminescence. The method involves stimulating buried quartz grains with light and measuring the luminescence they emit, which reveals how long ago the grains were last exposed to sunlight. The researchers found hundreds of footprints in the area and identified seven of them as human. Four of the seven prints share an orientation that indicates a band of humans travelling together. The lack of stone tools in the area indicates that there was no permanent settlement there. The visits are thought to have been for the collection of food and water.
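For readers curious how that measurement becomes an age, the standard relationship used in optically stimulated luminescence dating is sketched below; this is the generic formula for the method, not a figure reported for this particular study.

$$\text{Age (ka)} \;=\; \frac{D_e \ \text{(Gy)}}{\dot{D} \ \text{(Gy/ka)}}$$

Here \(D_e\) is the equivalent dose, the total radiation dose the quartz grains have absorbed since they were last bleached by sunlight (inferred from the measured luminescence), and \(\dot{D}\) is the local environmental dose rate estimated from the sediment's radioactivity.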
The researchers also discovered some 233 fossils in the area. This indicates a large presence of animals, both herbivores and carnivores. The large animals that came to the area might have been a good source of food for the carnivores, which might have made this lake a very busy place. The discovery of these prints goes to show that human evolution was a continuous process and that we may keep finding such intermediate evidence as time goes on. |
After the Civil War, much of the South lay in ruins. How would these states be brought back into the Union? Would they be conquered territories or equal states? How would they rebuild their governments, economies, and social systems? What rights did freedom confer upon formerly enslaved people? The answers to many of Reconstruction’s questions hinged upon the concepts of citizenship and equality. The era witnessed perhaps the most open and widespread discussions of citizenship since the nation’s founding. It was a moment of revolutionary possibility and violent backlash. African Americans and Radical Republicans pushed the nation to finally realize the Declaration of Independence’s promises that “all men were created equal” and had “certain, unalienable rights.” Conservative white Democrats granted African Americans legal freedom but little more. When Black Americans and their radical allies succeeded in securing citizenship for freedpeople, a new fight commenced to determine the legal, political, and social implications of American citizenship. Resistance continued, and Reconstruction eventually collapsed. In the South, limits on human freedom endured and would stand for nearly a century more. These sources gesture toward both the successes and failures of Reconstruction.
Reconstruction began before the War ended. After completing his famous March to the Sea, General William T. Sherman and Secretary of War Edwin Stanton met in January 1865 with twenty of Savannah’s African American religious leaders to discuss the future of the freedmen of the state of Georgia. In the excerpt below, Garrison Frazier, the chosen spokesman for the group, explains the importance of land for freedom. The result of this meeting was Sherman’s famous Field Order 15, which set aside confiscated plantation lands along the coast from Charleston, S.C., to Jacksonville, Fla., for Black land ownership. The policy would later be overruled, and freedpeople would lose their right to the land.
Black Americans hoped that the end of the Civil War would create an entirely new world, while white southerners tried to restore the antebellum order as much as they could. Most former enslavers sought to maintain control over their laborers through sharecropping contracts. P.H. Anderson of Tennessee was one such former enslaver. After the war, he contacted his former enslaved laborer Jourdon Anderson, offering him a job opportunity. The following is Jourdon Anderson’s reply.
Charlotte Forten was born into a wealthy Black family in Philadelphia. After receiving an education in Salem, Massachusetts, Forten became the first Black American hired to teach white students. She lent her educational expertise to the war effort by relocating to South Carolina in 1862 with the goal of educating freed people. This excerpt from her diary explains her experiences during this time.
Many southern governments enacted legislation that reestablished antebellum power relationships. South Carolina and Mississippi passed laws known as Black Codes to regulate Black behavior and impose social and economic control. While they granted some rights to African Americans – like the right to own property, to marry or to make contracts – they also denied other fundamental rights. Mississippi’s vagrant law, excerpted here, required all freedmen to carry papers proving they had means of employment. If they had no proof, they could be arrested, fined, or even re-enslaved and leased out to their former enslaver.
Most histories of the Civil War claim that the war ended in the summer of 1865 when Confederate armies surrendered. However, violent resistance and terrorism continued in the South for over a decade. In this report, General J.J. Reynolds describes the lawlessness of Texas during Reconstruction.
These documents chronicle a case in the wider wave of violence that targeted people of color during Reconstruction. The first document includes Frances Thompson and Lucy Smith’s testimony about their assault, rape, and robbery in 1866. The second document demonstrates one way that white Southerners denied these claims. In 1876, Thompson was exposed for cross-dressing. For twenty years she had successfully passed as a woman. Southerners trumpeted this case as evidence that widely documented cases of violence, sexual and otherwise, were fabricated.
Americans came together after the Civil War largely by collectively forgetting what the war was about. Celebrations honored the bravery of both armies, and the meaning of the war faded. Frederick Douglass and other Black leaders engaged with Confederate sympathizers in a battle of historical memory. In this speech, Douglass calls on Americans to remember the war for what it was—a struggle between an army fighting to protect slavery and a nation reluctantly transformed into a force for liberation.
This print mocks Reconstruction by making several allusions to Shakespeare. The center illustration shows a Black soldier as Othello and President Andrew Johnson as Iago. Johnson’s slogans “Treason is a crime and must be made odious” and “I am your Moses” are on the wall. The top left shows a riot in Memphis and, at the top, a riot in New Orleans. At the bottom, Johnson is trying to charm a Confederate Copperhead. General Benjamin Butler is at the bottom left, accepting the Confederate surrender of New Orleans in 1862. This scene is contrasted with the bottom right, where General Philip Sheridan bows to Louisiana Attorney General Andrew Herron in 1866, implying a defeat for Reconstruction.
This 1870 print celebrated the passage of the Fifteenth Amendment. Here we see several of the themes most important to Black Americans during Reconstruction: The print celebrates the military achievements of Black veterans, the voting rights protected by the amendment, the right to marry and establish families, the creation and protection of Black churches, and the right to own and improve land. Unfortunately, many of these freedoms would be short-lived as the United States retreated from Reconstruction. |
What Is Iron-Deficiency Anemia?
If a patient has anemia, it means that their blood doesn’t have enough healthy red blood cells. Healthy red blood cells are important because they deliver oxygen to the body’s tissues. A patient with anemia may often feel fatigued and short of breath because their body is not getting enough oxygen. The most common type of anemia is iron-deficiency anemia: when the body does not have enough iron, it cannot make enough hemoglobin, and anemia results. Iron-deficiency anemia can usually be treated easily. In rare cases, if left untreated, it can lead to serious complications, such as heart problems in adults and growth problems in children.
What Are the Symptoms of Iron-Deficiency Anemia?
A person may not know they have iron-deficiency anemia at first, or the symptoms may be so mild that they are easy to ignore. If iron-deficiency anemia isn’t treated, symptoms typically worsen over time. Some of the more common symptoms of iron-deficiency anemia include:
- Shortness of breath
- Unexplained fatigue
- Poor appetite (particularly in infants and young children)
- Cold hands and feet
- Chest pain
- Accelerated heartbeat
- Brittle nails
- Inflammation of the mouth or tongue
- Dizziness and lightheadedness
- Unusual cravings, such as for ice or starch
Sometimes patients may suspect iron-deficiency anemia and begin taking iron supplements on their own. However, iron-deficiency anemia should be diagnosed and treated by a medical professional. This is also to rule out other serious gastrointestinal disorders.
Why Does Iron-Deficiency Anemia Occur?
Iron is a necessary component of hemoglobin, the protein that enables oxygen transport and gives red blood cells their color. A lack of iron means a lack of hemoglobin, which in turn means less oxygen reaching the organs and tissues of the body. Eventually, iron-deficiency anemia occurs. There are several potential causes of the condition; sometimes it is caused by a poor diet, but it can also be caused by trauma or other medical conditions.
Blood loss is a common cause of iron deficiency anemia. This can be any type of blood loss, including trauma, menstruation, colorectal cancer, and peptic ulcers, among others. Women who have heavy periods are at risk for iron-deficiency anemia because of the amount of blood loss.
Intestinal disorders can also cause iron-deficiency anemia. Many intestinal disorders involve the malabsorption of vitamins and minerals, including iron. Celiac disease, for example, interferes with your body’s ability to absorb iron.
A poor diet that lacks sufficient iron can also contribute to iron-deficiency anemia. Children especially should eat iron-rich foods. To add more iron to your diet, try eating eggs, meat, and leafy greens.
Pregnancy can lead to temporary iron-deficiency anemia. This is in part because the mother’s iron stores must also supply the hemoglobin needed by the growing fetus.
Am I At Risk for Iron-Deficiency Anemia?
Certain risk factors can increase your chances of iron-deficiency anemia, such as:
- Being vegetarian or vegan. Often, vegetarians and vegans don’t get enough iron in their diet, as one of the primary iron-rich foods is meat.
- Being female. Because of menstruation, pre-menopausal women are in a higher risk bracket.
- Being an infant or child. Infants and young children have a higher risk of iron deficiency anemia, particularly those who were born premature or had low birth weight at delivery.
- Frequently donating blood. If you are a frequent blood donor, blood loss can cause the condition.
How Is Iron-Deficiency Anemia Diagnosed?
If your physician suspects iron-deficiency anemia, they will first draw blood to be analyzed. When the lab returns the results, your doctor will check whether your hematocrit and hemoglobin levels are below normal. Lab tests will also check the level of ferritin (a protein that stores iron) and look at red blood cell size and color.
If your blood tests determine that iron-deficiency anemia is likely, your provider will want to treat the root cause. Sometimes this requires additional diagnostic tests. Your gastroenterologist may order additional tests, such as:
- Endoscopy. During this procedure, you are put under sedation, and a long, thin tube with a camera attached is inserted into the mouth and down the esophagus. Endoscopy can provide information about your throat, esophagus, stomach, and duodenum (the first part of the small intestine).
- Colonoscopy. This procedure is similar to endoscopy; however, your physician will examine the colon. After you prepare for your colonoscopy by emptying your colon (large intestine) the night before, a small, thin tube is inserted into the colon with a camera attached. This allows your provider to examine your colon, rectum, and anus.
- Ultrasound. Women at risk for iron-deficiency anemia may have a pelvic ultrasound if they have heavy periods with noticeable blood loss.
How Is Iron-Deficiency Anemia Treated?
In many cases, your physician will need to treat the underlying cause of iron-deficiency anemia, such as celiac disease, stomach ulcers, or heavy menstruation, in order to resolve it. In most cases, however, your doctor will also recommend iron supplementation to boost iron levels in your body. It’s best to take iron supplements on an empty stomach with a vitamin C supplement, as vitamin C assists in the absorption of iron. Do not take iron supplements and antacids at the same time, as antacids can interfere with iron absorption; wait at least two hours between taking one and the other.
Can I Prevent Iron-Deficiency Anemia?
Many of the underlying conditions that cause iron-deficiency anemia need treatment themselves; however, you can help prevent iron-deficiency anemia by eating iron-rich foods in conjunction with vitamin C-rich foods. Examples of iron-rich foods include leafy greens, meat, eggs, beans, pumpkin seeds, raisins, seafood, and iron-fortified cereals. Examples of vitamin C-rich foods include many fruits, such as kiwi and strawberries, as well as bell peppers, tomatoes, cauliflower, leafy greens, and Brussels sprouts. |
ILLOCUTIONARY SPEECH ACTS BETWEEN LECTURERS AND STUDENTS IN THE LEARNING PROCESS OF THE INDONESIAN LANGUAGE COURSE, CLASS BK 1A, ACADEMIC YEAR 2019/2020
Keywords: Speech Acts, Illocution, Indonesian Language Learning
This study examines the illocutionary speech acts of lecturers and students in the Indonesian language learning process of class BK 1A. It aims to (1) describe the illocutionary speech acts of lecturers and students in the Indonesian language learning process of class BK 1A, and (2) explain the purposes of those illocutionary speech acts. This research is descriptive and qualitative. The research technique used is an observational listening and note-taking technique, employed by the researchers to obtain comprehensive data on the types of illocutionary speech acts occurring during the learning process. The data were analysed with an interactive model consisting of four stages: data collection, data reduction, data presentation, and conclusion drawing/verification. The study found 78 instances of illocutionary speech acts in Austin's classification, consisting of 6 verdictive speech acts, 68 exercitive speech acts, and 4 commissive speech acts. The speech acts found have different purposes and motivating factors depending on the situation and context. Communication succeeds when the hearer understands the speaker's intent.
Copyright (c) 2023 An-Nas
This work is licensed under a Creative Commons Attribution 4.0 International License. |
The food children eat affects their long term oral health. Some foods have nutrients teeth need. Others are full of acids and sugars that are harmful to teeth. With so many unhealthy food choices being marketed to children every day, it is vital that you take a stand. Offer fun, healthy snacks and model the better food choices you want your kids to make.
Offer healthy snack choices. Kids should have a well-balanced and nutritional diet. This not only promotes overall health but also helps build a strong healthy smile. Nutrition is an important part of oral health. Teaching your kids about eating healthy and limiting sugary foods will help foster a balanced diet from an early age. This will form habits that will result in a lifetime of strong teeth and better overall health.
Have fun with snacks. Promote a nutritious diet by getting creative with snack choices. If you show your kids that healthy snacks are fun, they will be more likely to eat them. Apple slices with peanut butter, fruit smoothies, and yogurt with granola or fruit are great examples of fun, yet healthy combinations. Remember to avoid soda and sugary drinks. These can leave sugars on teeth and can increase the risk of plaque and tooth decay. Water is always the best solution! Eating a well-balanced lunch and dinner is important as well. Make sure to add a variety of fruits and vegetables to every meal so that your kids become accustomed to them.
Be a good role model. Children learn habits by following the example set by their parents. Send your kids the right message by eating plenty of fruits and vegetables yourself. Avoid sugary snacks that can cause cavities or gum disease. Be sure to practice good oral hygiene in front of your kids. If you brush and floss after meals and snacks, your kids will follow the example. Consider brushing together with your child to reinforce good brushing skills and habits. Make sure to brush at least twice a day, after breakfast and before bedtime. If it is possible, try to encourage your child to brush after lunch or after sweet snacks.
Follow up. Don’t forget it is also very important to have regular dental appointments for your child, and model healthy habits by seeing your own dentist regularly. If you have any further questions, feel free to contact us for more ideas on how to promote healthy snacking for great long term dental health! |
Some 4 billion years ago, the Earth was largely covered by a huge ocean. This ocean contained a large number of small organic molecules, called "prebiotic" because they were there before life appeared. What were they? Were they synthesized on site, or did they come from space? How did they bind to form long polymers, some of them carrying genetic information, others working to reproduce all the basic molecules and then aggregate them into polymers? Nothing was certain in advance! The polymers were not stable, and the bonds were difficult to create. Yet here we are. A chemistry of extreme subtlety must therefore have found the energy and the time necessary to establish itself. This article gives some hints to help understand what may have happened.
1. A long time ago, in a galaxy not so far away
About 4.6 billion years ago, in an off-centre location in the spiral galaxy we call the Milky Way, a vast disc of matter formed. Most of the gases, grains and blocks that made up this disc concentrated and fused to form a star, the Sun. The small amount of residual matter that remained formed planets and smaller objects, dwarf planets and asteroids. Our Earth, in fact a proto-Earth, was formed at that time. Fifty million years later (not much on the scale of astronomical time) this proto-Earth was struck by a very massive object, Theia, a planetoid (a small celestial body with some characteristics of a planet, a term that covers structures as varied as asteroids, dwarf planets and protoplanets) the size of Mars. From this gigantic shock came the Moon and our present Earth.
Before this shock, it is likely that the Earth’s atmosphere contained a lot of hydrogen, the major component of the proto-solar disk. But the shock was huge, and the light elements were expelled. A new atmosphere resulted, rich in carbon dioxide (CO2), nitrogen (N2) and water vapour (H2O). The Earth was still very hot. However, it cooled down quite quickly. The water vapour condensed and fell in an incessant torrential rain to form a first, unique and immense ocean.
Under this ocean, the upper part of the Earth’s mantle solidified, forming a first crust. The first tectonic plates gradually took shape. Perhaps, emerging from the ocean, there were already some proto-continents, scattered islands, and probably volcanoes much more active than our current ones. Our planet was still full of energy! This compensated for the weakness of the young Sun, which was smaller and less powerful than today. Without the energy released by the planet, and without the significant greenhouse effect resulting from the high proportion of CO2 in the atmosphere, it is quite possible that all the water would have turned to ice. What kind of life could have been born in that ice? No doubt, none…
Just as the proto-Earth had crossed Theia’s path, so too, after the birth of the Moon, the Earth was struck by many asteroids that probably brought it a good amount of extra water, and perhaps also organic molecules. After a final intense episode, these cataclysmic bombardments ended 3.8 billion years ago (well, almost ended: we are still not immune to a catastrophic impact. The dinosaurs would not say otherwise!).
It is not impossible that life appeared before the end of this "Late Heavy Bombardment" (a period in the history of the solar system extending from approximately 4.1 to 3.9 billion years ago, during which there was a significant increase in meteoric and cometary impacts on the telluric planets), but the evidence in this regard remains tenuous. And even if it had started, would it have survived these repeated disasters? (see The origin of life as seen by a geologist who loves astronomy).
2. So much water! So much water!
So let us place ourselves a little less than 4 billion years ago, at the end of the geological era called the Hadean. At that time, the Earth had a gigantic ocean, hyperactive volcanoes, and embryos of continents. The Moon was moving away from it, but it was far from reaching its current orbit: it was still three times closer. As a result, the force of the tides was gigantic, more than twenty times greater than it is today. The winds were impressive. Even though it was cooling down, the ocean was probably warmer than it is today.
It is difficult to know its pH (a measure of the activity of the hydrogen ion, or proton, in solution; a pH below 7 indicates acidity, above 7 alkalinity, and a solution of pH 7 is called neutral). The ocean is currently slightly basic (around 8.1) but is progressively acidifying because of human CO2 emissions: in water, CO2 forms an acid, carbonic acid. There was much more CO2 in the atmosphere when life was born than there is today, so the ocean may well have been rather acidic at the time, which had an impact on the chemistry that could occur there. Of course, it contained ions. As today, sodium (Na+) and chloride (Cl–) dominated. The ocean was already salty! There was also calcium, magnesium, bromide, and even much more iodide than today.
At first it was thought that the primitive atmosphere was strongly reducing, that it contained a lot of hydrogen, methane and ammonia. But as we have seen, if there had been hydrogen, the shock with Theia would have expelled this light gas into space. However, the atmosphere was not oxidizing either: it contained very little or no molecular oxygen (O2). We can date the appearance of a significant amount of O2 on Earth by determining the age of the oldest deposits containing ferric iron. Indeed, when iron is exposed to water containing oxygen, it rusts; that is to say, it is oxidized to ferric iron (Fe3+). Ferric iron is not soluble in water. Without oxygen, however, iron forms ferrous ions (Fe2+), which are soluble.
There is therefore a major difference between our present ocean and the primitive ocean: the latter contained dissolved ferrous iron, whereas ours does not.
3. A wealth of small molecules
This ocean also contained organic molecules. From CO2 or methane (CH4), molecules with two carbon atoms are easily formed. CO2 can be reduced to formaldehyde (H2CO, Figure 1), which, by a reaction called the "formose reaction" (a word formed by contracting formaldehyde and aldose; discovered by the Russian chemist Alexander Boutlerov in 1861, it polymerizes formaldehyde into sugars, including pentoses, and is important in the abiotic formation of the molecules of life), first gives hydroxyacetaldehyde (C2H4O2, the simplest molecule bearing both a hydroxyl group, OH, and an aldehyde group, CHO; a molecule with two carbons) and then longer molecules that are sugars. A cook would say that with these sugars, in addition to sodium chloride, the Hadean ocean was sweet and sour!
In addition to sugars, at least two types of molecules are needed to build a living cell: proteins and nucleic acids. These molecules all contain nitrogen. What could be the sources of this nitrogen in the prebiotic ocean? Probably ammonia (NH3) and hydrocyanic acid (HCN). When these two compounds react with formaldehyde, they give the simplest amino acid, glycine. This molecule is synthesized through an essential reaction called the Strecker synthesis, named after Adolph Strecker, the German chemist who discovered it in the mid-19th century. This synthesis, starting from other aldehydes, can give various amino acids, for example serine from hydroxyacetaldehyde (Figure 1).
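Written out as a generic scheme (a standard textbook formulation rather than a detail taken from Figure 1), the Strecker synthesis proceeds in two stages:

$$\text{R–CHO} + \text{NH}_3 + \text{HCN} \;\longrightarrow\; \text{R–CH(NH}_2)\text{–CN} + \text{H}_2\text{O}$$

$$\text{R–CH(NH}_2)\text{–CN} + 2\,\text{H}_2\text{O} \;\longrightarrow\; \text{R–CH(NH}_2)\text{–COOH} + \text{NH}_3$$

With R = H (formaldehyde) the product is glycine; with R = CH2OH (hydroxyacetaldehyde) it is serine, consistent with the examples above.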
These amino acids are the basic components of the polymers that are proteins. As we have just seen, they could be synthesized quite easily: they were therefore most likely present in the early ocean.
What about the precursors of the nucleic acids, DNA (deoxyribonucleic acid, the double-helical macromolecule whose nucleotide sequence carries the genetic information of all cells and many viruses) and RNA (ribonucleic acid, a macromolecule consisting of a sequence of ribonucleotides, adenine, cytosine, guanine and uracil, that performs many functions within the cell), which carry genetic information? Their synthesis is a little more complex, and it is less obvious that all of them were present. But it is possible to write prebiotic syntheses for each of them. Ribose (a five-carbon sugar with an aldehyde function, a component of RNA and of key metabolic molecules such as ATP) can thus be obtained by the formose reaction already mentioned, the nucleic bases from hydrogen cyanide, and direct routes to ribose–base complexes have recently been published by researchers.
The synthesis of DNA and RNA chains nevertheless raises the question of the source of phosphorus, which is abundantly present in these information-carrying polymers. In our present oxidizing world, this element is generally found in the form of phosphate, especially calcium phosphate, which is insoluble in water. Were there soluble phosphates in the primitive, non-oxidizing ocean? If not, what was the source of soluble phosphorus? This is an open question.
Another important element is sulphur, present today in two life-sustaining amino acids, methionine and cysteine. It is released in significant amounts from active volcanoes, fumaroles and many hydrothermal springs, often in the form of hydrogen sulphide (H2S). It is therefore reasonable to assume that the primitive ocean contained hydrogen sulphide, and consequently small sulphur molecules, such as the amino acid cysteine.
While we are certain (some would say: almost certain) that no little green man from intergalactic space has ever set foot on our planet, it is not impossible that some of the molecules mentioned here landed on Earth, brought by the millions of asteroids and comets that struck it, especially during the Late Heavy Bombardment. Thus, the magnificent visit of the Rosetta probe to the comet 67P/Chourioumov-Guérassimenko, known as "Chouri", showed that it contained water, ammonia, formaldehyde, hydrocyanic acid, hydrogen sulphide… but also more complex organic molecules, including glycine, the small amino acid for which a possible terrestrial synthesis was described above (Read How to study the organic molecules of comets?).
So was the first glycine "terrestrial" or "extraterrestrial"? What about the other amino acids? What about the bases of DNA? No one knows. But what is certain is that when the massive bombardments stopped, when this possible alien source dried up and all these alien molecules were used up, terrestrial syntheses had to take over. As we have seen, they are quite possible. The extraterrestrial hypothesis, while it cannot be refuted, is not essential to describe the birth of life on Earth.
Figure 2 summarizes all this chemistry.
There remains the problem of the concentration of these molecules. This is a very important question: the more diluted the starting compounds of a given reaction are, the slower the reaction is. Certainly, life had time ahead of it. However, many products resulting from the condensation of small molecules are not very stable in water. Enough of them had to form, fast enough, so that they could continue to grow into longer and longer, more and more complex molecules before breaking apart again into their small starting components. This raises two questions: how much water was there on Earth? And what mass of organic molecules did that water contain?
How much water? The most realistic hypothesis is to consider that there was, roughly speaking, neither more nor less than today: about 1.36 billion km³, or, rounding off, a thousand billion billion litres, which is not insignificant!
Knowing how many organic molecules there were on the way to life is much more complicated. The current terrestrial biosphere contains 2,000 gigatonnes (2,000 billion tonnes = 2 billion billion grams) of organic carbon. It is very unlikely that there were more, in the form of small molecules, at a time when precisely “organic” life had not yet appeared.
Let’s do the math: 2 billion billion grams divided by 1,000 billion billion litres is 2 milligrams of organic carbon per litre of water. It is a low concentration, but not a totally ridiculous one. Still, this probably gives an overestimate of the concentration of organic molecules in the Hadean ocean; the actual concentration was probably even lower. So? How can we envisage fairly rapid reactions in this ocean, continuously stirred by gigantic tides, and therefore roughly homogeneous?
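As a sanity check on this back-of-the-envelope estimate, here is the same division written out in a few lines of Python, using the article's own rounded figures (the 1e21-litre ocean volume is the text's rounding of 1.36 billion km³):

```python
# Back-of-the-envelope check of the article's dilution estimate.
organic_carbon_g = 2e18   # 2,000 gigatonnes of organic carbon = 2e12 t = 2e18 g
ocean_volume_l = 1e21     # ~1.36e9 km^3 of water, rounded (1 km^3 = 1e12 litres)

mg_per_litre = organic_carbon_g / ocean_volume_l * 1000  # grams -> milligrams
print(f"{mg_per_litre:.1f} mg of organic carbon per litre")  # prints 2.0
```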
Darwin himself had already expressed the problem when he wrote to his friend Joseph Hooker in 1871: "But if (and oh what a big if) we could conceive in some warm little pond with all sorts of ammonia and phosphoric salts, light, heat, electricity etcetera present, that a protein compound was chemically formed, ready to undergo still more complex changes [..]"
This is the origin of the famous warm little pond, which has fired the imagination of so many researchers in search of the origins of life. Darwin assumed that his small pond would be concentrated enough for the chemistry to progress to the synthesis of a long enough chain of amino acids, the "protein compound".
There may have been small bodies of water on the emerging continents, but were the organic molecules more concentrated there than in the global ocean? Maybe molecules concentrated on the first beaches, or in a few cracks? Were there more organic compounds around the volcanoes? Or at the bottom of the ocean, near hydrothermal vents from which hot gas escapes? Shouldn’t we instead imagine particularly effective reactions that make it possible to build polymers (proteins, for example) even under very diluted conditions?
4. The keys to success: energy and catalysis
For a chemical reaction to occur, it is necessary:
- that it be possible: a matter of thermodynamics;
- that it be fast enough: a matter of kinetics (in chemistry, kinetics describes the evolution of chemical systems over time, i.e. the passage from an initial state to a final state; the laws of chemical kinetics make it possible to determine the specific rate of a chemical reaction).
However, a priori, 4 billion years ago, nothing was as it should have been!
From a thermodynamic point of view, what matters is the relative stability of the starting molecules and the molecules formed. Building a polymer is a step-by-step process. First, two monomers (the basic constituents of complex molecules: amino acids form proteins, oses form complex sugars, nucleotides form nucleic acids) give a dimer, which will be elongated into a trimer and so on, up to very long chains. At the first step, therefore, two monomers form a dimer and a molecule of water is eliminated. In both peptides (Figure 3) and nucleic acids, dimers are much less stable than monomers. In other words, it is the dimer cleavage reaction (a hydrolysis) that is favoured; the equilibrium is therefore shifted towards the monomers. All the more so as this hydrolysis consumes a molecule of water, which in water is favourable, while condensation forms a molecule of water next to the dimer, which is unfavourable. It is this problem of the unfavourable formation of a water molecule that leads some authors to seek the least aqueous environments possible in which to place their origin-of-life scenario, the coasts of the first continents in particular, where relatively dry places could undoubtedly be found.
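To make that argument concrete, the condensation step can be written as a generic equilibrium (a textbook formulation, not taken from the article's figures):

$$\text{AA}_1 + \text{AA}_2 \;\rightleftharpoons\; \text{AA}_1\text{–}\text{AA}_2 + \text{H}_2\text{O}, \qquad K = \frac{[\text{AA}_1\text{–}\text{AA}_2]\,[\text{H}_2\text{O}]}{[\text{AA}_1]\,[\text{AA}_2]}$$

In bulk water the concentration of H2O is about 55 mol/L and essentially constant, so the ratio of dimer to free monomers is forced to remain very small; the reverse reaction, hydrolysis, consumes that abundant water and is therefore the favoured direction, exactly as described above.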
Things are no better on the kinetic side. For two molecules to react together, they must be activated, i.e. they must be supplied with a certain amount of energy. The higher the energy to be supplied, the greater the probability that two monomers will meet (in what are referred to as "shocks") without reacting; in other words, the slower the reaction. And the energy required to form an embryonic dimer of protein or nucleic acid is high.
A lost cause? No, since despite all this, it is quite certain that life did appear. For that to happen, at least the following were needed:
- an energy-rich molecule. By splitting into two fragments, this molecule will release a large part of the energy it contains. If this happens simultaneously with the formation of a dimer (e.g. a dipeptide), then the two energies will compensate each other and the overall process will be favoured by thermodynamics;
- a catalyst, i.e. a chemical species, a molecule, but also sometimes the surface of a solid, (see Origin of the first cells: the engineer’s point of view) capable of helping the formation of a dimer from two monomers. The number of effective shocks (those that really form the dimer) will then be much higher, and the reaction will reach a reasonable rate in the prebiotic ocean.
The energy-rich molecule used today in living organisms is ATP (Figure 4), a triphosphate. The breaking of a phosphate bond releases enough energy to counterbalance the instability of the dimers to be synthesized. In peptide synthesis, this even makes it possible to reach the dimer via intermediates that are even less stable than the dimer itself: first a mixed carboxylic–phosphoric anhydride is formed, then an ester and finally an amide (the peptide). In total, this involves three steps, each of which is inherently slow. Catalysts are therefore involved. Thus, it probably took time for the very first peptides to form in the primitive ocean.
Explaining the appearance of life therefore requires identifying at least one molecular source of energy (knowing that it is highly unlikely that ATP existed in the early ocean: it is too rich in energy), and on the other hand initial catalysts; it being understood that those used today in living cells, such as the aminoacyl-tRNA synthetases (the family of enzymes, one for each of the 20 amino acids found in proteins, that attach amino acids to the 3′ end of their transfer RNAs and so ensure the accuracy of the translation of the genetic code) and others, are much too complex to be prebiotic.
The problem being posed, everything else is only hypothesis. The most commonly accepted today is that of the "RNA world" (see Origin of the first cells: the engineer's point of view). It assumes that the first significant polymers were RNAs, acting both as repositories of the first genetic information and as catalysts (a catalyst is a species, organic or mineral, used in very small quantities, that accelerates a chemical reaction without appearing in its overall equation; an enzyme is a biological catalyst). In fact, some RNAs in current living cells do have catalytic properties (although the vast majority of biological catalysts are proteins). With regard to the energy source: it is impossible to form RNAs without using the energy contained in triphosphates, yet it is unlikely that triphosphates could have existed in the early ocean. This is one of the difficulties of the hypothesis. But it has the advantage of reconciling genetic information and catalysis.
On the other hand, proteins do not carry genetic information, but they are much better catalysts than RNA. Does this lack of genetic information preclude the idea that proteins could have been the first really important polymers in the history of life? Maybe not… Today, indeed, some peptides are manufactured directly on proteins, without the help of nucleic acids. These peptides are called "non-ribosomal" because they are not manufactured in ribosomes (the RNA–protein complexes, common to all cells, that translate the genetic message carried by messenger RNAs into proteins). However, they are not made "at random": authors have proposed the existence of a code different from the traditional genetic code (which translates DNA into proteins via RNA). This "non-ribosomal" code translates proteins into peptides (one could say peptides into peptides). It is a complex code, based on sets of ten amino acids (in the coding protein): it allows the amino acid to be introduced into the peptide being synthesized to be chosen precisely. There is therefore nothing to prevent us from imagining that "pre-genetic" information, even sketchy, could originally have been carried by amino acid chains.
Although catalytic proteins are complex molecules, their activity is generally based on fairly simple principles. This is the case for the catalytic triads found in hydrolases and transferases. Three amino acids are required: an alcohol or a thiol, a base and an acid. In Figure 5, it is a thiol, cysteine, that acts. Thanks to histidine (the base), located further along the protein chain, this cysteine loses its proton. It then carries a negative charge, which allows it to react with the positively polarized carbon of the C=O double bond. Aspartic acid is there to activate the histidine. The reaction product is, in this case, a thioester, another example, after the triphosphates, of a high-energy molecule. This thioester may then undergo other reactions, for example with water to give an acid (the protein that contains the triad is then a hydrolase), or with another organic molecule (the triad is then the active site of a transferase).
Of course, in our present-day proteins, these triads are positioned very precisely by the amino acid chains that carry them. Thanks to this, each triad-bearing protein is specialized and performs only one type of reaction, on molecules that are themselves well defined. But could such triads have existed 4 billion years ago, in the primitive ocean? They would undoubtedly have been much less specific, to a certain extent “good at everything”. Why not? Figure 6 shows an example of such a very simplified triad. The stereochemistry of the molecule, its “chirality”, is specified: this is an essential question, which a complete model of the origin of life must explain.
Finally, we cannot imagine life without partitioned structures, cells or something similar to them, and therefore without membranes or walls. The first membranes may have been formed from entangled, or more ordered, peptides stuck together. They may also have contained long hydrophobic organic chains, those of fatty acids (Figure 6). It is possible to synthesize such fatty acids today using proteins and thioester chemistry.
The world in which life appeared, as delineated here, would therefore be a world of peptides much more than a world of nucleic acids. Sulphur, through cysteine and thioesters, would have played a central role, linking this scenario to a possibly even more primitive, more “mineral” world: the iron-sulphur world. This leads us to reflect on the specific role that ferrous iron, which is soluble and therefore available, could have played in providing electrons, and therefore energy, in relation to all these peptides.
References and notes
Goldford J.E., Hartman H., Smith T.F. & Segré D. (2017) Remnants of an ancient metabolism without phosphates. Cell 168, 1-9. http://dx.doi.org/10.1016/j.cell.2017.02.001
To cite this article: VALLÉE Yannick (April 2, 2019), Once upon a time when life appeared: chemistry in the earth’s ocean 4 billion years ago, Encyclopedia of the Environment, Accessed September 30, 2023 [online ISSN 2555-0950] url : https://www.encyclopedie-environnement.org/en/life/once-upon-a-time-life-chemistry-in-earths-ocean-4-billion-years-ago/.
Math SEALs “SEMA 4”
About: The Math SEALs “SEMA 4” unit teaches students the order of operations in a fresh and interactive way (Patent Pending). In the process, students will also learn how to solve equations and expressions containing symbols of inclusion and exponents.
Engagement & Tech-flexible Integration: This unit incorporates crafts, games, word art, Canva, movement, and open badges.
Purchase SEMA 4 Order of Operations for your class, after-school program, or camp today!
MA.4.AF.3 Understand that multiplication and division are performed before addition and subtraction in expressions without parentheses.
MA.5.C.10 Use the order of operations to solve numerical equations and expressions
MA.6.NS.11 Understand and compute whole number power of whole numbers.
MA.6.AF.4 Interpret and evaluate mathematical expressions that use grouping symbols such as parentheses.
MA.6.AF.7 Apply the correct order of operations and the properties of real numbers (e.g., identity, inverse, commutative, associative, and distributive properties) to evaluate numerical expressions. Justify each step in the process.
MA.7.NS.4 Understand and apply the concept of square root of a whole number, a perfect square and an imperfect square.
Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols.
Explain patterns in the number of zeros of the product when multiplying a number by powers of 10, and explain patterns in the placement of the decimal point when a decimal is multiplied or divided by a power of 10. Use whole-number exponents to denote powers of 10.
Write and evaluate numerical expressions involving whole-number exponents.
Evaluate expressions at specific values of their variables. Include expressions that arise from formulas used in real-world problems. Perform arithmetic operations, including those involving whole-number exponents, in the conventional order when there are no parentheses to specify a particular order (Order of Operations). For example, use the formulas V = s³ and A = 6s² to find the volume and surface area of a cube with sides of length s = 1/2.
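As a quick illustration of how the conventional order of operations applies to the cube formulas in the example above, here is a short Python sketch; it is provided for illustration only and is not part of the SEMA 4 materials.

```python
# Illustrative sketch (not part of the SEMA 4 unit): evaluating the cube
# formulas V = s**3 and A = 6 * s**2 for s = 1/2 using the conventional
# order of operations (exponents before multiplication).

from fractions import Fraction

s = Fraction(1, 2)            # side length of the cube

# Exponents are evaluated first...
s_cubed = s ** 3              # 1/8
s_squared = s ** 2            # 1/4

# ...then multiplication.
volume = s_cubed              # V = s^3 = 1/8
surface_area = 6 * s_squared  # A = 6 * s^2 = 3/2

print(f"V = {volume}")        # V = 1/8
print(f"A = {surface_area}")  # A = 3/2
```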
SEMA4, step 2 (see unit guide)
SEMA4, step 3 (see unit guide)
SEMA4, step 4 (see unit guide) |
Development of new prescription drugs and antidotes to toxins currently relies extensively on animal testing in the early stages of development, which is not only expensive and time-consuming but can also give scientists inaccurate data about how humans will respond to such agents.
But what if researchers could predict the impacts of potentially harmful chemicals, viruses or drugs on human beings without resorting to animal or even human test subjects?
To help achieve that, a team of scientists and engineers at Lawrence Livermore National Laboratory is developing a "human-on-a-chip," a miniature external replication of the human body, integrating biology and engineering with a combination of microfluidics and multi-electrode arrays.
The project, known as iCHIP (in-vitro Chip-based Human Investigational Platform), reproduces four major biological systems vital to life: the central nervous system (brain), peripheral nervous system, the blood-brain barrier and the heart.
"It’s a testing platform for exposure to agents whose effects are unknown to humans," said LLNL engineer Dave Soscia, who co-leads development of the "brain-on-a-chip" device used to simulate the central nervous system. "If you have a system that is engineered to more closely replicate the human environment, you can skip over the really lengthy process of animal testing, which doesn’t necessarily give us information relevant to humans."
The iCHIP team is focusing its efforts on the brain, where they’re looking to understand how neurons interact with each other and react to chemical stimuli such as caffeine, atropine (a drug used to treat poisonings and cardiac arrest) and capsaicin, the compound that gives chili peppers their hotness, as well as real chemical agents in the Lab’s Forensic Science Center.
Unique to the iCHIP platform is combining multiple brain types on the same device without barriers between those regions. To study the brain, primary neurons are funneled or "seeded" onto a microelectrode array device, which can accommodate up to four brain regions (such as the hippocampus, thalamus, basal ganglia and cortices). After the cells grow, a chemical (atropine for example) is introduced and the electrical activity from the neurons is recorded.
"The idea is that we can look at network-wide effects across different brain regions," Soscia said. "It adds a level of complexity that has never been done before."
Preliminary results have shown that hippocampal and cortical cells can survive on the chip for several months while their responses are recorded and analyzed, Soscia said.
Filtering out chemicals and toxins before they reach the central nervous system in the body is accomplished by the blood-brain barrier, which is being reproduced by a team led by LLNL engineer Monica Moya. The device uses tubes and microfluidic chips to simulate blood flow through the brain. Moya and her team are testing the device with caffeine and other agents to ensure the system is performing and the cells are reacting as they would in a human body.
“The blood-brain barrier is the brain’s gatekeeper, allowing nutrients to enter the brain from the blood flow while keeping out potential toxins. It works so well that it unfortunately can also block potentially useful therapeutics to the central nervous system,” Moya said. “Having a realistic human lab model of the blood-brain barrier will help researchers study the barrier’s permeability and be incredibly useful as an in vitro model for drug development.”
The iCHIP research, Moya said, could have implications for creating new drugs to fight cancer, vaccines or evaluating the efficacy of countermeasures against biowarfare agents.
Lab scientist Heather Enright is leading research into the peripheral nervous system (PNS), which connects the brain to the limbs and organs. The PNS device has arrays of microelectrodes embedded on glass, where primary human dorsal root ganglion (DRG) neurons are seeded. Chemical stimuli such as capsaicin (to study pain response) then flow through a microfluidic cap to stimulate the cells on the platform.
The microelectrodes record electrical signals from the cells, allowing researchers to determine how the cells are responding to the stimuli non-invasively. Microscopic images can be acquired at the same time to monitor changes in intracellular ion concentrations, such as calcium. This platform is the first to demonstrate that long-term culture and chemical interrogation of primary human DRG neurons on microelectrode arrays is possible, presenting researchers with an advantage over current techniques.
"Traditionally, electrophysiology studies are done with patch clamping, where the cell is perforated and damaged," Enright said. "A multi-electrode array approach, like that used on iCHIP, really allows you to interrogate the cells over multiple trials so we can maximize the data we get from them. This is especially important when testing rare primary human cells. When you’re looking at exposure to an unknown chemical for instance, the cells’ response may be different weeks or months compared to hours after exposure. This is a non-invasive way of assessing changes in their health and function over time."
Additionally, early research is being done to replicate the heart on a chip. Cardiac cells have already been shown to successfully "beat" in response to electrical stimulation, with the intent to simultaneously measure the electrophysiology and movement of the cells.
The next step, according to iCHIP principal investigator Elizabeth Wheeler, is integrating all the systems together to create a complete testing platform outside the human body to study some fundamental scientific questions.
"The ultimate goal is to fully represent the human body," Wheeler said. "Not only can the iCHIP provide human-relevant data for vaccine and countermeasure development, it also can provide valuable information for understanding disease mechanisms. The knowledge gained from these human-on-a-chip systems will someday be used for personalized medicine."
The Laboratory Directed Research and Development (LDRD) program is funding the iCHIP project.
There are two main families which contain birds with ‘blackbird’ in their name; Icteridae and Turdidae. The name ‘blackbird’ is somewhat of a catch-all for many birds with nearly all-black plumage, but these two families are barely related at all! Nevertheless, blackbirds are a familiar sight throughout much of the world. Here, we’re going to answer the question, “how long do blackbirds live?”
Blackbirds are short-lived birds that typically only live for 2 to 4 years. In the UK, the Common blackbird has a life expectancy of around 3.4 years, and it's a similar story for the unrelated North American blackbirds, such as the Red-winged blackbird, which has a life expectancy of just 2.4 years. But, of course, they can live much longer - the oldest blackbirds recorded in the wild were between 15 and 20 years old.
The low life expectancy of blackbirds is mainly due to high nestling, fledgling and juvenile mortality rates. Both the Common blackbird and many North American blackbirds have a year-on-year survival rate of just 40% to 70%. In addition, young blackbirds have just a 37% chance of reaching adulthood.
Unfortunately, many young blackbirds will never leave the nest, and their adult life is fraught with perils. Despite high mortality rates, blackbirds are still a common and successful bird, and their populations are generally stable.
Read on to learn more about the lifespans of these well-known garden birds!
Red-winged Blackbird, found in the US
The typical lifespan of a Common or European blackbird is 3.4 years. While this seems low, it’s about average for songbirds which generally have short lives and fast lifecycles.
Many blackbirds fail to fledge from the nest, and even if they do, their year-on-year survival rate is only about 50% to 70%. Around 50% of all nests fail, which is partly why blackbirds have 2 to 3 broods a year to compensate. And while most blackbirds don't live for long, some individuals have been recorded living for more than 10 years, or even past 20.
Like with many birds, the lifespan of a blackbird is a game of chance. If they can avoid predators and find plenty of food in a relatively safe and stable environment, they have a good chance of surviving for longer than their allotted 3.4 years.
The lifespans of other blackbirds are very similar to the Common blackbird. For example, the North American Red-winged blackbird typically only lives for 2.4 years. The Indian blackbird, which is related to the Common blackbird, also only lives for around 2 to 4 years.
Blackbird singing in the trees
Blackbirds are often killed by predators like hawks, owls, cats and foxes. Their nests may also be predated by corvids, like the magpie, jay and crow, as well as cuckoos. Blackbirds are also known to abandon their nests at the first whiff of danger, which means any unfledged chicks will likely die of starvation.
Most blackbirds die in the nest, not long after hatching. Nestling survival rates are about 50%, so almost half of all nestlings fail to fledge. Studies show that the heavier the chick at hatching, the higher its survival rate generally is.
But their perils don't stop there; many fledglings will also die before they are one year old. After that, the year-on-year survival rate of adult blackbirds climbs a little, to around 70% in the UK. So, each year, only about 7 in 10 adult blackbirds survive.
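To see how an annual survival rate in this range translates into the short average lifespans quoted earlier, the rough sketch below treats each further year of life as a simple survival trial using the rates mentioned in this article; it is a simplification for illustration, not part of the underlying studies.

```python
# Rough sketch: if an adult bird survives each year with probability p,
# the number of further years lived approximately follows a geometric
# distribution, and the expected further lifespan is p / (1 - p) years.

def expected_further_years(annual_survival: float) -> float:
    """Mean number of additional whole years survived."""
    return annual_survival / (1.0 - annual_survival)

for p in (0.5, 0.6, 0.7):   # survival rates mentioned in the article
    print(f"annual survival {p:.0%} -> about {expected_further_years(p):.1f} more years")

# annual survival 50% -> about 1.0 more years
# annual survival 60% -> about 1.5 more years
# annual survival 70% -> about 2.3 more years
```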
While this seems like doom and gloom, the rapid lifecycle of the blackbird helps it thrive in the face of adversity. Birds can fledge in just nine days, which is phenomenally fast compared to many other birds. Like many other small songbirds, blackbirds live life in the fast lane!
A juvenile Eurasian Blackbird (Common Blackbird)
Blackbirds have super-fast lifecycles. From the moment an egg is laid, many blackbirds will hatch and leave the nest within the next month, long before the end of the breeding season. This enables blackbirds to raise as many as three broods per year.
The Common blackbird faces numerous threats on both land and in the air. Many nests fail through predation, and fledglings are especially vulnerable to land predators during their first month outside of the nest.
Blackbirds' main predators are hawks, owls, cats and foxes, along with nest predators such as magpies, jays, crows and cuckoos.
As well as predators, blackbirds are often infested with ticks. As many as 70% of blackbirds in rural France were found to be infested with at least one type of tick. While ticks are not predators of blackbirds, they're still a significant cause of ill health.
Blackbird eating berries
The oldest recorded Common blackbird was at least 20 years and 3 months old. This is far above the blackbird's typical 2 to 4-year lifespan and just goes to show that the biological life expectancy of many birds is much, much higher than their average lifespan.
The oldest Red-winged blackbird reached a similar 15 years and 9 months; it was banded in New Jersey in 1967.
Blackbirds will attempt to feed every day, and are usually successful in summer and even in winter. They can probably live for around 2 to 3 days without food, but will become slow and lethargic.
Most small birds are metabolically active for most of the day and need to feed relatively regularly.
A male blackbird with two recently fledged blackbird chicks
Blackbirds often roost in tree and wall cavities during winter. In addition, many blackbirds choose to roost communally in small flocks, huddling for warmth. Nesting boxes also provide warm shelter for winter birds.
Blackbirds further north than the UK, e.g. those in Scandinavia, will typically head south and west during the winter to find warmer wintering grounds.
Note: When cooking meat or eggs at home, there are three temperatures that are very crucial to remember: Eggs and all ground meats must be cooked to a temperature of 160 degrees Fahrenheit; poultry and birds must be cooked to 165 degrees Fahrenheit; and fresh meat steaks, chops, and roasts must be cooked to 145 degrees Fahrenheit.
What temperature is appropriate for serving food?
It is best practice to keep hot foods at an interior temperature of at least 140 degrees Fahrenheit.
What is the ideal internal temperature for reheating and cooking?
The most efficient methods for removing potential bacterial threats from food are cooking and reheating the dish. When food is cooked for long enough at a high enough temperature, the vast majority of the bacteria and viruses that cause foodborne illness may be eliminated. The internal temperature of the meal should be at least 75 degrees Celsius.
What degree of heat should the reheated core have?
After being taken from the refrigerator, the food needs to be promptly reheated and done so within two hours. When foods are reheated in a microwave oven, the internal temperature of the item must be brought up to at least 165 degrees Fahrenheit during the whole process.
How hot does bacteria die?
It is a fallacy to believe that germs cannot survive at temperatures lower than 40 degrees. In point of fact, the development of bacteria is delayed but not prevented entirely. Cooking food at temperatures of 165 degrees Fahrenheit or above is the only way to ensure that all microorganisms have been eliminated. Bacteria are killed off in situations that are extremely acidic, such as pickle juice.
What internal temperature is required to safely hold hot food while avoiding the spread of foodborne pathogens?
Hot hold units are designed to maintain an inside temperature of at least 60 degrees Celsius, or 140 degrees Fahrenheit. When the temperature is at or above this level, germs cannot flourish.
What Celsius temperature eradicates bacteria from food?
At temperatures below 8 degrees Celsius and above 63 degrees Celsius, bacterial growth is halted; these are the appropriate temperatures for storing food. At 100 degrees Celsius (boiling point) and above, bacteria are destroyed. Bacteria will not be able to grow at -18 degrees Celsius (a typical freezer temperature), although they may still be alive.
What temperature causes E. coli to die?
160°F/70°C — Temperature needed to kill E. coli and Salmonella.
When does salmonella die?
According to what she stated, eggs need to be cooked to a temperature of 160 degrees Fahrenheit in order to destroy salmonella. At that temperature, they no longer have a watery consistency.
Does cooking kill E. coli?
There is a normal occurrence of E. coli in the digestive tracts of both humans and animals. The germs are typically destroyed by the cooking process, but meat that has been ground or tenderized offers a larger threat since the pathogens are spread throughout the flesh.
What temperature falls within the human danger zone?
The term “danger zone” refers to a temperature range that is hazardous for foods to be stored at, as the name implies. This range extends from 5 degrees Celsius (41 degrees Fahrenheit) to 60 degrees Celsius (140 degrees Fahrenheit).
What is the Celsius value of the danger zone?
Temperature Danger Zone refers to the range of temperatures ranging from 5 degrees Celsius to 60 degrees Celsius. This is due to the fact that in this zone germs that cause food poisoning can proliferate to harmful levels, which can make you sick.
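Because the answers in this guide switch between Celsius and Fahrenheit, the small helper below (purely illustrative, not part of any food-safety standard) shows how the commonly quoted thresholds line up across the two scales.

```python
# Hypothetical helper: convert Celsius to Fahrenheit with F = C * 9/5 + 32,
# to line up the thresholds quoted in both unit systems.

def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

for label, c in [("danger zone low", 5), ("danger zone high", 60),
                 ("kill E. coli / Salmonella", 70), ("boiling point", 100)]:
    print(f"{label}: {c} C = {c_to_f(c):.0f} F")

# danger zone low: 5 C = 41 F
# danger zone high: 60 C = 140 F
# kill E. coli / Salmonella: 70 C = 158 F
# boiling point: 100 C = 212 F
```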
What is the Fahrenheit range at which the majority of bacteria stop growing?
The majority of bacteria can endure temperatures between 0 and 4 degrees Celsius (or 32 and 40 degrees Fahrenheit), although their rate of multiplication slows significantly. At 0 degrees Celsius (or 32 degrees Fahrenheit), water begins to freeze. The majority of bacteria can endure temperatures between 0 and -18 degrees Celsius (or 32 and 0 degrees Fahrenheit) without being able to thrive.
What temperature renders pathogens inert?
The majority of bacteria may be eliminated by heating them to temperatures of at least 140 degrees Fahrenheit. Because the majority of germs are most active between 40 and 140 degrees Fahrenheit, it is essential to either store food in the refrigerator or cook it thoroughly at high temperatures. Germs can survive temperatures as low as -40 degrees Fahrenheit by going dormant, remaining viable for an indefinite amount of time.
Why is it always cold in hospitals?
The use of extremely low temperatures in hospitals helps prevent the growth of microorganisms. Because bacteria and viruses flourish in warm temperatures, maintaining temperatures that are chilly helps to restrict the growth of bacterial and viral populations. The operating rooms at a hospital are often where the temperature is kept at its lowest to ensure the lowest possible infection rate.
At what temperature does chicken kill salmonella?
Cooking chicken to an internal temperature of 165 degrees Fahrenheit is the most effective technique to eliminate any bacteria, including salmonella, that may be present on raw chicken.
Which microorganisms can endure boiling water?
Some bacterial spores that are not typically associated with water-borne diseases, such as clostridium and bacillus spores, are capable of surviving boiling conditions. However, research has shown that water-borne pathogens are inactivated or killed at temperatures below boiling (212 degrees Fahrenheit or 100 degrees Celsius).
What bacteria are resistant to cooking?
If Staphylococcus aureus is given the opportunity to develop in foods, it has the potential to generate a toxin that makes people sick. Cooking kills the bacterium, but the toxin that it produces is heat resistant and may not be removed by the cooking process.
What temperature is safe for eating eggs?
The Food and Drug Administration Food Code recommends that eggs intended for immediate service be cooked to an internal temperature of 145 degrees Fahrenheit, and that temperature should be maintained for 15 seconds. This recommendation is included in the Food and Drug Administration’s set of recommendations for establishments that provide food service.
What temperature renders ground beef E. coli-free?
However, the CDC and the USDA recommend that people cook ground beef to a temperature of 160 degrees Fahrenheit. A single recommendation is given to consumers because it is easier to adhere to one standard (temperature alone) than to two (temperature and time). E. coli may be effectively eradicated from ground beef by heating it to 160 degrees Fahrenheit.
Can vinegar be used to wash lettuce?
It has been demonstrated that washing produce in a vinegar solution (at a ratio of half a cup of distilled white vinegar to one cup of water), followed by a rinse with clean water, can reduce bacterial contamination. However, this may change the way the produce feels and tastes. After washing, remove any extra moisture by blotting with paper towels or spinning the produce in a salad spinner.
Can bone broth be left out overnight?
There is no way to salvage soup that has been left out at room temperature for more than two hours, no matter how tempted you may be or how many times you’ve managed to avoid disaster in the past. Keep in mind that broth is inexpensive, whereas food poisoning is dangerous.
Can you get rid of E. coli by washing lettuce?
Rogers warns that washing lettuce in water (or water combined with baking soda) may help remove pesticide residue, surface dirt and debris from produce; however, washing has not been proven to be an effective way to remove E. coli and other related bacteria.
What does the 2/4 Rule mean?
The two-hour/four-hour guideline helps you judge whether potentially hazardous food that has been out of the refrigerator is still safe to eat. The rule is based on the speed at which bacteria multiply in food held at temperatures between 5 and 60 degrees Celsius, and it has been validated by scientific research.
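The guideline can be summarized as a simple decision rule; the function below is only an illustrative sketch of that rule and is not official food-safety guidance.

```python
# Illustrative sketch of the 2-hour / 4-hour rule for food held
# between 5 C and 60 C (the temperature danger zone).

def two_four_hour_rule(hours_in_danger_zone: float) -> str:
    if hours_in_danger_zone < 2:
        return "OK to use, or refrigerate again"
    elif hours_in_danger_zone < 4:
        return "Use immediately; do not put back in the fridge"
    else:
        return "Throw it away"

for hours in (1, 3, 5):
    print(f"{hours} h in the danger zone: {two_four_hour_rule(hours)}")
```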
Why should food be cooked until it reaches a minimum core temperature?
It is possible for raw foods, such as meat, fruit, and vegetables, to contain high quantities of bacteria either because they have been contaminated with dirt or because they have not been prepared properly. To ensure that bacteria are killed, food should be cooked until it reaches an internal temperature of at least 75 degrees Celsius and should be simmered for at least two minutes.
What is the refrigerator’s maximum safe temperature?
Maintain a temperature in the refrigerator that is at or below 40 degrees Fahrenheit (4 degrees Celsius). The temperature of the freezer need to be 0 degrees Fahrenheit (-18 degrees Celsius).
What temperature promotes pathogen growth?
Temperatures in the “temperature danger zone” (TDZ), which ranges from 41 to 135 degrees Fahrenheit (5 to 57 degrees Celsius), are optimal for the growth of foodborne pathogens. They do best at temperatures ranging from 21 to 40 degrees Celsius (70 to 104 degrees Fahrenheit). The vast majority of microorganisms that can cause food poisoning are aerobic, which means that their growth depends on oxygen.
How long can meat be left at 50°C?
The Food and Drug Administration (FDA) in the United States advises people to adhere to the “2-Hour Rule.” According to this rule, any perishable food that has been left out at room temperature for more than two hours must be thrown away.
Is salmonella resistant to cooking?
The longer answer is that, yes, Salmonella can be killed by cooking. The Centers for Disease Control and Prevention recommend heating food to a temperature that is between 145 degrees Fahrenheit and 165 degrees Fahrenheit to kill Salmonella. This temperature range is dependent on the kind of food.
How hot must it be to kill the flu and cold viruses?
A simple technique to purify water is to heat it to at least 145 degrees Fahrenheit, or better still to bring it all the way to its boiling point of 212 degrees Fahrenheit. According to the World Health Organization, bringing water to a boil will eliminate harmful contaminants such as viruses and bacteria that may be present in it.
Can meat be cooked to kill bacteria?
By cooking poultry and beef to an appropriate internal temperature, you may eliminate any germs that may be present. Checking the temperature requires the use of a cooking thermometer. The color of the meat or the consistency of its fluids are not reliable indicators of whether or not it has been cooked thoroughly. Within two hours of the meal being prepared, any leftovers should be placed in the refrigerator and chilled to at least 40 degrees Fahrenheit.
Why are hospitals so starkly white?
White, the color most commonly associated with cleanliness and purity, was used to paint nearly every surface in hospitals and clinics throughout that time period. When Dr. Sherman was in the middle of doing many operations at St. Luke’s San Francisco hospital, he noticed that the contrast of the blood against the white sheets, walls, and staff uniforms was too stark.
How do physicians avoid being ill all the time?
Many of the things that physicians recommend their patients do to maintain a healthy lifestyle are things that doctors also do themselves, which helps them avoid getting ill more often than is unavoidable. They maintain a healthy diet and get enough sleep each night. They also engage in physical activity and make an effort to keep their bodies in tip-top form. And they wash their hands frequently at work.
Why does hospital air always feel so dry?
Many people get the impression that the air in hospital wards, which undergo between two and four air changes every hour, is overly dry. A high indoor air temperature and/or a high concentration of particulate matter in the air are the two factors that most frequently contribute to the feeling that the air in hospital wards is dry.
How hot must it be for staph to die?
When maintained at a temperature of 140 degrees Fahrenheit for 78 to 83 minutes, the same level of destruction is reached in similarly infected foods. According to the methodologies used for computation, it is anticipated that an exposure of forty-five minutes at a temperature of one hundred forty degrees Fahrenheit would be required to reduce the number of organisms per gram to undetectable levels.
Do onions that have been cooked kill salmonella?
According to Dr. Stephen Amato, an expert on food safety and the Director of Global Regulatory Affairs and Quality Assurance Programs at Northwestern University, cooking onions to an internal temperature of 150 degrees Fahrenheit would eradicate any salmonella that may be present.
Are peanut butter and salmonella compatible?
According to Mr. Doyle, “What we’ve learned is that peanut butter needs heat of over 190 degrees Fahrenheit for over 40 minutes to kill Salmonella.” However, such prolonged heating times may affect product quality.
Can bleach be used to boil water?
Warning: water contaminated with fuel or other toxic chemicals cannot be made safe by boiling or disinfecting with bleach.
How to disinfect water with bleach.
|Available Chlorine|Drops per Quart/Gallon of Clear Water|Drops per Liter of Clear Water|
|7-10%|1 drop per Quart – 4 per Gallon|1 per Liter|
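As a rough illustration of how the dosage in the table scales with volume, the sketch below multiplies the 7-10% available-chlorine figures by the amount of clear water to be treated; it is hypothetical, and the label on the actual bleach product should always take precedence.

```python
# Hypothetical sketch: scale the table's dosage for bleach with 7-10%
# available chlorine (1 drop per liter, 4 drops per gallon of clear water).

DROPS_PER_LITER = 1
DROPS_PER_GALLON = 4

def drops_needed(liters: float = 0.0, gallons: float = 0.0) -> float:
    return liters * DROPS_PER_LITER + gallons * DROPS_PER_GALLON

print(drops_needed(liters=10))    # 10 drops for 10 L of clear water
print(drops_needed(gallons=2.5))  # 10 drops for 2.5 gallons
```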
Can you drink boiled river water?
Boil. If you don’t have access to clean water in a bottle, you should boil the water in your tap in order to make it drinkable. The most effective way to eliminate disease-causing microorganisms, such as viruses, bacteria, and parasites, is to boil the food or liquid in question.
How long does boiled water remain potable?
Water that has been boiled can be stored in the refrigerator for three days if the container is cleaned, sanitized, and securely sealed. However, water can only be stored at room temperature for twenty-four hours if it is kept out of direct sunlight.
Why shouldn’t hot food be stored in the refrigerator?
Temperatures between 41 and 135 degrees Fahrenheit are optimal for the development of harmful microorganisms. This range is referred to as the “danger zone” when discussing food. At these temperatures, potentially dangerous bacteria can multiply at their quickest rate. Storing significant quantities of hot items in the refrigerator can bring its temperature up into this potentially hazardous range.
The meat with the most bacteria is?
The severity index for ground beef was the highest out of all 12 categories of meat and poultry, because roughly half of those who become sick from it are expected to end up in the hospital. In addition, both Clostridium perfringens and Salmonella have been linked to infections associated with ground beef.
Can you eat food that has been outside all day?
The United States Department of Agriculture recommends discarding any food that has been out of the refrigerator for more than two hours. At room temperature, bacteria multiplies quite quickly and has the potential to make people sick. Something that has been left out at room temperature for more than two hours, even if it has been rewarmed, is likely to be contaminated with germs.
Can you eat hard-boiled eggs that are two weeks old?
Eggs that have been hard-boiled can be stored in the refrigerator for up to a week. You should not consume ruined eggs since doing so can make you sick. A damaged egg will have a distinct odor, and its consistency will be slimy or chalky.
Why eggs shouldn’t be kept in the refrigerator?
Eggs, according to the recommendations of experts, should be kept at room temperature. Eggs can become inedible if they are kept at a very cold temperature, such as in the refrigerator. Putting eggs in the refrigerator can cause the growth of bacteria on the shells, which in turn allows the bacteria to penetrate the insides of the eggs, rendering them inedible.
What degree of heat is ideal for meat?
Note: When cooking meat or eggs at home, there are three temperatures that are very crucial to remember: Eggs and all ground meats must be cooked to a temperature of 160 degrees Fahrenheit; poultry and birds must be cooked to 165 degrees Fahrenheit; and fresh meat steaks, chops, and roasts must be cooked to 145 degrees Fahrenheit. Check the temperature with the use of a thermometer.
In a freezer, can bacteria develop?
Maintaining a consistent temperature as well as a high level of cleanliness in your kitchen is essential for preventing the spread of listeria. In recent weeks, there has been a lot of coverage on the bacterium Listeria, which has been linked to ice cream, frozen veggies and fruit, and other frozen foods. In contrast to the majority of germs, Listeria is able to live and replicate in cold environments like your freezer and refrigerator.
What three meat grades are there?
Consumers have the greatest level of familiarity with the first three quality classes, which are known as Prime, Choice, and Select, and the USDA considers these labels to be of food-grade quality.
Can you eat hamburger that is raw?
Consuming raw or undercooked ground beef is risky due to the potential presence of pathogenic microorganisms in both preparation methods. It is strongly advised by the Department of Agriculture of the United States of America that raw or undercooked ground beef not be consumed or tasted. Cooking hamburgers, meatloaf, meatballs, and casseroles to a temperature of 160 degrees Fahrenheit will ensure that any bacteria present have been eliminated.
Do you soap-wash apples?
It is not suggested to use soap, detergent, or any kind of commercial produce wash for washing fruits and vegetables. On fruits and vegetables, you should never use bleach solutions or any other kind of disinfection chemical. Before you prepare or consume the food, cut away any sections that are damaged or bruised.
Should onions be washed?
If you’re going to be cooking the onions, there’s no need to wash them, but if you’re going to be eating them raw, it’s a good idea to give them a thorough scrub. Onions, despite the fact that they are less likely to be contaminated with pesticides or bacteria, may nevertheless contain bacteria, dirt, or unsavory compounds. One easy step you can take to maintain your ultra-healthy lifestyle is to give your onions a brief rinse before you cut them up.
How are soiled grapes cleaned?
Put your grapes in a bowl, and then sprinkle them with a teaspoon of baking soda and a teaspoon of salt. Mix well. First, shake the dish so that each grape is uniformly coated, and then thoroughly rinse the grapes in cold water. After drying them off with a clean towel, you may enjoy them as a snack or give one of our mouthwatering dishes that features grapes a try, such as this roasted grape galette.
My soup became sour; why?
A wide variety of bacterial species, in addition to some other kinds of microorganisms, are capable of producing waste products with a “sour” flavor. Because of the potential for microbial growth in soup and stock, the vast majority of organizations concerned with food safety advise against storing soup for longer than three to four days in the refrigerator.
How long should bones be boiled before making broth?
Bring to a boil, then lower the heat so that it is just barely simmering and cover. Cook for at least 10 to 12 hours, or until reduced by a third or a half, giving you 6 to 8 cups of bone broth at the end of the cooking process. When it is reduced further, the flavor becomes more concentrated, and a greater quantity of collagen may be removed. We think that twelve hours is the ideal amount of time to prepare something.
Why is the soup in the refrigerator bubbling?
The reaction between the starch and the water molecules takes place at extremely high temperatures and results in an increase in surface tension. This, in turn, leads to the formation of microscopic bubbles or pockets of air that are surrounded by the starch and finally results in foam. |
Part of Liquid State, an occasional series on our relationship with water.
Water was all around when John Moores was growing up in Newfoundland. It was part of the scenery and part of his identity. These days, water is still a big part of his life. The difference is that Dr. Moores now spends much of his time thinking about water that is millions of kilometres from home.
An assistant professor of space engineering at York University, Dr. Moores is a participating scientist with the Mars Science Laboratory, the $2.5-billion (U.S.) project that deposited NASA’s Curiosity rover into a Martian crater last summer and is yielding new insights into what once transpired there.
“Water is a big part of it,” Dr. Moores said in an interview with The Globe and Mail. “We really want to understand what the water story of Mars is: how it changed from being a warm and wet planet in the past to being the arid planet it is today, and whether or not it periodically comes back to life.”
The motivation is as big as it gets in science. Of all the questions humans have sought to answer about the cosmos, none are as potent as: “Are we alone?” After more than half a century of exploring the solar system, we still do not know if life, as a phenomenon, is unique to Earth, and what that implies about our chances of finding other civilizations some day among the stars.
For Curiosity – which landed a year ago on August 5 – the road to ET is a wet one.
In recent weeks, Curiosity’s explorations have literally shifted into high gear as it begins a series of long drives toward a mysterious mountain that promises the most complete record yet of the planet’s aqueous history.
Here on Earth, water is essential to biology. It is the universal solvent that, billions of years ago, allowed the chemical building blocks of life to interact and somehow assemble themselves into the first self-sustaining organisms. Ever since, evolution has driven life into a dizzying array of new and surprising forms – infinitely diverse, but inevitably dependent on water.
Scientists have known for years that there is water on Mars too, albeit mostly frozen solid at the poles or locked in subsurface permafrost. But there is also ample evidence that water flowed freely on Mars billions of years ago, raising the possibility that life may have once gained a foothold there.
Dr. Moores already has his own history with water on the red planet. While working on the Phoenix mission, which landed on the planet’s northern plain five years ago, he was among the first to detect fog on Mars – a fitting claim to fame for someone who hails from St. John’s.
These days, his work with Curiosity involves tracking the movement of water vapour through the thin Martian atmosphere in response to seasonal changes. With spring having just arrived at the rover’s location in Gale Crater on July 31, Dr. Moores expects to see more evidence of water overhead.
“It does start to pick up now that we’re moving into warmer temperatures in the Northern Hemisphere,” he said. For the rover, that still means daytime highs are below zero C, but as the Martian north pole is increasingly exposed to sunlight, the frozen water there evaporates directly into the atmosphere and forms clouds that migrate around the planet.
Getting a handle on this movement of water is important because water in the atmosphere interacts chemically with surface rocks and can skew measurements designed to reveal the much more ancient history of water – and potentially life – on Mars. Day by day, Dr. Moores and his colleagues watch how minute quantities of water are moving around in the Martian atmosphere, and in doing so help to decode a bigger story.
Ralf Gellert, a physicist at the University of Guelph, is working that story from the other side. He leads the rover’s alpha-particle-X-ray spectrometer (APXS), a device that measures the elemental composition of Martian rocks and can help identify minerals that formed in the presence of water billions of years ago.
As the creator of two similar instruments that landed with the Spirit and Opportunity rovers when they touched down in 2004, Dr. Gellert has spent nearly a decade absorbed in the daily business of exploring Mars in more than one location at a time.
What Curiosity has added to the picture has been “amazing,” Dr. Gellert said, with the new results clearly showing Mars was once far more habitable than it is today. While Opportunity found places where Mars was once likely covered with evaporating lakes of acidic water, Curiosity has tapped into an even older geologic era, perhaps 3.7 billion years ago, when at least some of the water on Mars was more like freshwater found on Earth today.
Diet: Omnivore. Legumes, acorns, seeds, buds, tubers, radishes, wild persimmons, larvae and adult insects.
Habitat: Evergreen, deciduous or mixed coniferous forests, areas of long grass and bushes.
Incubation: 24-25 days / 6-9 eggs
Social structure: Groups of 6-21 birds.
Weight: male 1530g, female 950g
Dimensions: male 210cm, female 150cm
Lifespan: 9.2 years (maximum recorded)
Estimated population in the wild: 3,500-15,000
Threats: Continuing deforestation which is reducing and fragmenting their habitat, hunting for food, collection of their eggs.
IUCN Status: Vulnerable.
Did you know that:
- They have the longest tail feathers of all birds, reaching 160cm, or exceptionally 200cm in the oldest males.
- In the past they were hunted for their long tail feathers, which were used as a decoration in the Peking opera costumes, but plastic feathers are increasingly being used for this purpose. |
Chapter 4, Lesson 1 Activity Sheet Answers: 1. Electron; Nucleus. 2. Proton: positive charge; Electron: negative charge; Neutron: no charge. 3. Two protons repel; two electrons repel; a proton and an electron attract. 4. What happened when you brought the following materials near each other?
Chapter 4 Activity Sheet Flashcards | Quizlet
Chapter Four Student Activity Sheet: The Debt Snowball
Download Chapter 4 Making The Minimum Student Activity ...
Chapter 4 Student Activity Sheet
Chapter 4, Lesson 4 Activity Sheet Answers
The Pearl Listening Task, Chapter 4 (Audiobook with timed Questions, Student Task Sheet): an audiobook reading of The Pearl by John Steinbeck, with the associated questions visible to students.
Chapter 4, Lesson 6 Activity Sheet Answers
Chapter 4: Expectations | Task 1: Clarify CHAMPS Expectations for Instructional Activities. Reproducible 4.2, CHAMPS Classroom Activity Worksheet (Sample A): students will complete as much of the assignment as possible during the time given; if finished before time is up, read.
Student Activity Workbook
The activity sheets are formative assessments of student progress and understanding. About this lesson: be sure that the 20 atom name cards are posted around the room; you will need the five cards on the right-hand side of each sheet. This lesson is intended as a follow-up to Chapter 4, Lesson 2.
Dave Ramsey Chapter 4 Student Activity Sheet Answers
On a sheet of paper, make a list of the physical activities in which you ... activity among U.S. high school students. More than one in three teens (35 percent) do not participate regularly in vigorous physical activity (that is, for at least 20 ...). Chapter 4, Physical Activity for Life: the number of obese adult Americans doubled ...
Energy Levels, Electrons, and Ionic Bonding | Chapter 4 ...
Charlotte's Web - Super Teacher Worksheets
CHAPTER 4 STUDENT ACTIVITY SHEET, True Cost of Ownership: How much value do you lose on the purchase of a new automobile? Is it the same for each one? Find out how much you will lose in a five-year cycle. Choose a new car to research.
Chapter Activities - foundationsu.com
Learn worksheet 2 chapter 4 with free interactive flashcards. Choose from 500 different sets of worksheet 2 chapter 4 flashcards on Quizlet.
Personal Finance: Assignments Chapters 1, 2, 3, and 4
Chapter 4, Lesson 4 Activity Sheet Answers: 1. The electron from each hydrogen atom feels an attraction from the proton in the other atom. The attractions bring the two hydrogen atoms together, and the electrons are shared by both atoms, making a covalent bond. 2. There has to be a strong enough attraction by the protons in each atom for the electrons in the other atom.
Student Workbook Answer Key
Learn dave ramsey chapter 4 with free interactive flashcards. Choose from 500 different sets of dave ramsey chapter 4 flashcards on Quizlet.
Chapter Four Student Activity Sheet: The Debt Snowball
Procedure: this activity should be completed after watching Chapter 4 Video 2.2, which includes the "Drive Free, Retire Rich" video. Hand out the student activity sheet. Instruct students to shop around at local car lots and/or online ads to find a great car to buy.
Chapter 4 Global Analysis Flashcards | Quizlet
... through exercises and activities related to The Pearl by John Steinbeck. It includes eighteen lessons, supported by extra resource materials. The introductory lesson introduces students to one main theme of the novel through a bulletin board activity. Following the introductory activity, students are given a transition to explain how the ...
Bunnicula: Novel Literacy Unit
Dictionary/vocabulary activity for use with Chapter 4 of The Great Gatsby. Although students see this as a fun activity, it is an active learning task that helps students with the difficult vocabulary, in terms of both spelling and meaning. There are 29 words/clues in the puzzle.
Chapter Four Student Activity Sheet: The Debt Snowball
Lord of the Flies Activities. Lord of the Flies Prereading Group Activity: students get into small groups and pretend that they are trapped on an island without adults. They answer a series of questions and find either unity or dissension amongst their tribe. Students should complete this activity before reading Lord of the Flies.
Chapter 4 The Law of Torts Flashcards | Quizlet
Chapter 5 Answer Key, Worksheets: Face Sheet, Patient Assessment & Reassessment, History, Physical Examination, Admission/Discharge Record. 1. The "Face Sheet" is also known as: Clinical, Demographic, and Financial. 2. The face sheet contains three types of information; name them.
Parts of a Computer Worksheets
Chapter-by-Chapter Answer Key, Chapter 1, answers for the multiple-choice questions: 1. b, the sociological perspective is an approach to understanding human behavior by placing it within its broader social context. 2. d, sociologists consider occupation, income, education, gender, age, and race as dimensions of social location.
THE OUTSIDERS - Winston-Salem/Forsyth County Schools
Foundations in Personal Finance Chapter 4 Student Activity Sheet Answers
Student Book Answer Key - AzarGrammar.com
The Debt Snowball: chapter 4 student activity sheet answer key (PDF)
Fifth Grade Lessons - American Chemical Society
Find biology chapter 4 lesson plans and teaching resources: a six-page activity that queries learners thoroughly about gland structure and hormone function, and a population worksheet in which students compare two population growth graphs and complete four short-answer questions.
Of Mice and Men Activities - Ms. Strazzulla's English Classes
Economics Today and Tomorrow, Student Workbook. The Reading Essentials and Study Guide is designed to help you use recognized reading strategies to improve your reading-for-information skills. Chapter 4: Going Into Debt.
Phantom Tollbooth - Super Teacher Worksheets
This page has worksheets and activities to use with Patrick Skene Catling's novel, The Chocolate Touch, including reading comprehension questions, vocabulary worksheets, puzzles, and vocabulary cards. After reading Chapters 3 & 4, students respond to comprehension questions about John's new power.
The Pearl Listening Task - Chapter 4 (Audiobook with timed Questions - Student Task Sheet)
Foundations in Personal Finance: High School Edition for Homeschool is designed as a complete curriculum, saving you time and equipping you with everything you need for a dynamic learning experience. The curriculum includes a student text, teacher resources, and lessons delivered via video by the Foundations team.
The_Debt_Snowball.pdf (NAME / DATE): The Debt Snowball ...
The supply of safe medicines is essential to public health, and governments have to take measures to prevent falsified medicines. The consequences of falsified and substandard medicines are serious, as they lead to adverse effects. Substandard medicines are authorized products that fail to meet quality standards, while falsified medicines deliberately misrepresent their identity, composition, or source. All countries, and especially low- and middle-income countries, are affected by falsified medicines.
A strong inverse relationship exists between drug failure and health: falsified medicines cause illness, poisoning, untreatable disease, treatment failure, and early death.
Various types of harmful ingredients are found in substandard and falsified medicines and can cause poisoning. Common examples are mercury, lead, cadmium, arsenic, chromium, uranium, strontium, selenium, aluminum, and boric acid. Their presence in medicines is a threat to health because of the risk of poisoning.
According to a report from the World Health Organization (WHO), falsified medicines increase antimicrobial resistance; people can pass on resistant infections when travelling abroad, and such resistant viruses and bacteria may eventually become impossible to treat. Falsified medicines also cause untreated disease, early death, and treatment failure.
Moreover, the International Council of Nurses has to address the problem of falsified products for the sake of public health. Falsified products are not only a threat to health but also undermine public confidence in health systems and healthcare professionals.
Nurses working in healthcare departments have the experience to identify falsified medicines and to report them, which supports the improvement of national healthcare systems. On the basis of such reports, falsified and substandard medicines can be removed from healthcare institutions.
The use of falsified medicines should be prevented through public awareness and education; effective campaigns can raise awareness among the general public.
Falsified medicines can contain dangerous ingredients in toxic forms, so strong national regulatory frameworks are necessary to guarantee safe medicines and to fight falsified products. According to current research, only 20% of WHO member states have a well-developed and effective drug regulatory system; a proper system ensures safe medicines and prevents falsified ones.
The production of substandard and falsified medicines is a complex global issue with a great impact on the health of adults and children in developed as well as developing countries. Prevention is necessary, and precautionary measures should be taken at national and international levels to curb this practice. The drug supply chain should be managed carefully, and the sale of falsified medicines through e-commerce should be controlled and regulated for public health and safety.
There is plenty of research on this topic. The Use of Music for Learning Languages: A Review (Stansell) asserts that (emphasis mine):
The researchers in this literature review show conclusively that music and language should be studied together. Music’s success is due, in part, to primal human abilities. Music codes words with heavy emotional and contextual flags, evoking a realistic, meaningful, and cogent environment, and enabling students to have positive attitudes, self-perceptions, and cultural appreciation so they can actively process new stimuli and infer the rules of language. The universal element of music can make the artificial classroom environment into a “real” experience and make new information meaningful, bringing interest and order to a classroom.

One area to focus upon would be the use of music for instruction in grammar. Whereas it takes little preparation to utilize songs for active class involvement, phrase and vocabulary acquisition, cultural appreciation, and pronunciation, grammar is seldom considered an issue that music can benefit.

This author has developed a new curriculum for teaching the Czech language, which has students learning simple sentences with books of family pictures, singing five-part canons with grammar concepts embedded in them, chanting the pronoun endings of prepositional phrases, rhythmically moving, listening to different instruments, listening and reading, and having dialogue with native speakers. This system, the Phrase-Exemplar-based Multisensory Method (PEBMSM), has been used by language trainers, but is primarily intended to be a demonstration of the possible uses of music in a language learning context.
Why Use Music in English Language Learning? A Survey of the Literature (Engh) provides that (emphasis mine):
The use of rhythm and rhyme to assist auditory recall has also been studied, and the multimodal combination of
rhythm, melody and rhyme along with linguistic prosody appears to lead to greater retention (Graham, 1992;
Palmer & Kelly, 1992).
Murphey (1989) provides potential evidence regarding why music effectively assists in lexical and phrasal recall
in noting the resemblance of songs to conversational discourse and suggests they are linguistically processed in a
Music in the language classroom may also be utilized with
an explicit vocabulary and grammar focus (Richards, 1969; Saricoban & Metin, 2000) and used to reinforce
either grammar or pronunciation points (Allen & Vallette, 1977). Pronunciation and phonology are a natural use
of songs in the aid of second language acquisition (Schön et al, 2008), and Leith (1979) states:
…there is probably not a better nor quicker way to teach phonetics than with songs. Phonetics instruction is one
good use to which songs can be put even in beginning classes (540).
The repetitive nature of songs makes them effective use for pronunciation drills (Bartle, 1962; Techmeier, 1969;
Shaw, 1970) and lastly, it is argued that songs contextually introduce supra-segmental features (Lems, 2001;
Wong and Perrachione, 2006), which aids in the learning of patterns for word identification.
Overall, the results are clear in suggesting use of music and song in the language-learning classroom is both
supported theoretically by practicing teachers and grounded in the empirical literature as a benefit to increase
linguistic, sociocultural and communicative competencies. From an educational standpoint, music and language
not only can, but should be studied together. |
Heartworm in Dogs & Cats
Heartworm is a parasite that most dog owners and many cat owners have to be concerned about.
Heartworm disease is a serious disease that results in severe lung disease, heart failure, other organ damage, and death in pets, mainly dogs, cats, and ferrets. It is caused by a parasitic worm called Dirofilaria immitis. The worms are spread through the bite of a mosquito. The dog is the definitive host, meaning that the worms mature into adults, mate, and produce offspring while living inside a dog. The mosquito is the intermediate host, meaning that the worms live inside a mosquito for a short transition period in order to become infective (able to cause heartworm disease). The worms are called “heartworms” because the adults live in the heart, lungs, and associated blood vessels of an infected animal.
In the United States, heartworm disease is most common along the Atlantic and Gulf coasts from the Gulf of Mexico to New Jersey and along the Mississippi River and its major tributaries, but it has been reported in dogs in all 50 states.
In an infected dog, adult female heartworms release their offspring, called microfilariae, into the dog’s bloodstream. When a mosquito bites the infected dog, the mosquito becomes infected with the microfilariae. Over the next 10 to 14 days and under the right environmental conditions, the microfilariae become infective larvae while living inside the mosquito. Microfilariae cannot become infective larvae without first passing through a mosquito. When the infected mosquito bites another dog, the mosquito spreads the infective larvae to the dog through the bite wound. In the newly infected dog, it takes between six and seven months for the infective larvae to mature into adult heartworms. The adult heartworms mate and the females release their offspring into the dog’s bloodstream, completing the lifecycle.
Introduction to electron probability distribution:
The probability distribution is a major part of probability theory and statistics. A probability distribution describes how likely each possible outcome of an event is. The most commonly used probability distributions are the binomial, geometric, normal and gamma distributions. These distributions all fall into one of the two major types: discrete probability distributions and continuous probability distributions. This article presents a study of the electron probability distribution.
Types of Electron Probability Distribution:
The major types of the probability distribution are
Discrete probability distribution
Continuous probability distribution
Discrete probability distribution:
A discrete probability distribution assigns probabilities to a countable number of possible outcomes of an event.
Continuous probability distribution:
A continuous probability distribution assigns probabilities over a continuous range of values.
Examples for Electron Probability Distribution:
Example 1 to electron probability distribution:
A manufacturer of cotton pins knows that 7% of his product is defective. He sells pins in boxes of 100 and guarantees that not more than 4 pins per box will be defective. Determine the probability that a box will fail to meet the guaranteed quality.
The value of p is p = 7/100, and n = 100.
The mean value is λ = np = (7/100)(100) = 7.
By the Poisson distribution,
P[X = x] = (e^(−λ) λ^x) / x!
Probability that a box will fail to meet the guaranteed quality = P[X > 4]
P[X > 4] = 1 − P[X ≤ 4]
P[X > 4] = 1 − (P(0) + P(1) + P(2) + P(3) + P(4))
P[X > 4] = 1 − e^(−7) (1 + 7 + 49/2 + 343/6 + 2401/24)
P[X > 4] = 1 − e^(−7) (1 + 7 + 24.5 + 57.17 + 100.04)
P[X > 4] = 1 − e^(−7) (189.71)
P[X > 4] = 1 − 0.00091 × 189.71
P[X > 4] = 1 − 0.1726
P[X > 4] = 0.8274
The probability that the box will fail to meet the guaranteed quality is 0.8274.
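As a quick sanity check on the arithmetic above, here is a small C sketch (not part of the original exercise) that sums the first five Poisson terms for λ = 7 and prints the tail probability; the program structure and names are illustrative.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative check of Example 1: P[X > 4] with lambda = 7. */
int main(void)
{
    double lambda = 7.0;
    double term = exp(-lambda);   /* x = 0 term: e^-7 */
    double cdf = term;
    for (int x = 1; x <= 4; x++) {
        term *= lambda / x;       /* Poisson recurrence: next term = previous * lambda / x */
        cdf += term;
    }
    printf("P[X > 4] = %.4f\n", 1.0 - cdf);   /* prints about 0.8270 */
    return 0;
}
```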
Example 2 to electron probability distribution:
The probability of destroying a target in a single attempt is 0.40. Compute the probability that the target is destroyed on the third attempt.
The probability of destroying the target in one trial is p = 0.40
The value of the q is calculated by q = 1-p
q = 1- 0.40
q = 0.60
By the geometric distribution, the probability that the first success occurs on attempt x is calculated using the formula
P(X = x) = q^(x−1) p, where the value of x is 1, 2, 3, . . .
The target is destroyed on the third attempt, so x = 3.
P(X = 3) = (0.60)^2 (0.40)
P(X = 3) = (0.36)(0.40)
P(X = 3) = 0.144
The probability of destroying the target on the third attempt is 0.144.
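The same result can be checked numerically. The short C sketch below (not from the original text) multiplies in the two failed attempts that precede the success on the third attempt; the variable names are illustrative.

```c
#include <stdio.h>

/* Illustrative check of Example 2: geometric distribution,
   first success on attempt x, P(X = x) = q^(x-1) * p. */
int main(void)
{
    double p = 0.40;             /* probability of destroying the target on one attempt */
    double q = 1.0 - p;          /* probability of missing on one attempt               */
    int x = 3;                   /* destroyed on the third attempt                      */

    double prob = p;
    for (int i = 1; i < x; i++)  /* multiply in the (x - 1) failed attempts */
        prob *= q;

    printf("P(X = %d) = %.3f\n", x, prob);   /* prints 0.144 */
    return 0;
}
```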
Property rights are established to grant the legal right to use, lease, sell or transfer property. The right to property is seen as a fundamental tenet of the market economy. Property rights have always been controversial. Though property rights are rarely abolished in any society, for much of human history and even today, people have not had an absolute right to property. But there is growing agreement among legal philosophers and economists that property rights are human rights. Organizations like the World Bank and the Heritage Foundation rank countries according to the extent to which they protect private property and enforce such laws. Countries with the strongest property rights are almost always the most prosperous ones. Under the original constitution of India, the right to property was a fundamental right. But many provisions were added over the decades, and today the right to property is a legal rather than a fundamental right: people cannot be deprived of their property except by the authority of law. In the recent past, there were unsuccessful attempts to restore property as a fundamental right. For instance, in 2010, the Supreme Court rejected a PIL to re-institute the right to property as a fundamental right.
Post-concussion syndrome is a complex disorder in which various symptoms — such as headaches and dizziness — last for weeks and sometimes months after the injury that caused the concussion.
Concussion is a mild traumatic brain injury that usually happens after a blow to the head. It can also occur with violent shaking and movement of the head or body. You don't have to lose consciousness to get a concussion or post-concussion syndrome. In fact, the risk of post-concussion syndrome doesn't appear to be associated with the severity of the initial injury.
In most people, symptoms occur within the first seven to 10 days and go away within three months. Sometimes, they can persist for a year or more.
The goal of treatment after concussion is to effectively manage your symptoms.
Post-concussion symptoms include:
- Headaches
- Dizziness
- Loss of concentration and memory
- Ringing in the ears
- Blurry vision
- Noise and light sensitivity
- Rarely, decreases in taste and smell
Post-concussion headaches can vary and may feel like tension-type headaches or migraines. Most often, they are tension-type headaches. These may be associated with a neck injury that happened at the same time as the head injury.
When to see a doctor
See a doctor if you experience a head injury severe enough to cause confusion or amnesia — even if you never lost consciousness.
If a concussion occurs while you're playing a sport, don't go back in the game. Seek medical attention so that you don't risk worsening your injury.
Some experts believe post-concussion symptoms are caused by structural damage to the brain or disruption of the messaging system within the nerves, caused by the impact that caused the concussion.
Others believe post-concussion symptoms are related to psychological factors, especially since the most common symptoms — headache, dizziness and sleep problems — are similar to those often experienced by people diagnosed with depression, anxiety or post-traumatic stress disorder.
In many cases, both physiological effects of brain trauma and emotional reactions to these effects play a role in the development of symptoms.
Researchers haven't determined why some people who've had concussions develop persistent post-concussion symptoms while others do not. There's no proven connection between the severity of the injury and the likelihood of developing persistent post-concussion symptoms.
However, some research shows that certain factors are more common in people who develop post-concussion syndrome compared with those who don't develop the syndrome. These factors include a history of depression, anxiety, post-traumatic stress disorder, significant life stressors, a poor social support system and lack of coping skills.
More research is still needed to better understand how and why post-concussion syndrome happens after some injuries and not others.
Risk factors for developing post-concussion syndrome include:
- Age. Studies have found increasing age to be a risk factor for post-concussion syndrome.
- Sex. Women are more likely to be diagnosed with post-concussion syndrome, but this may be because women are generally more likely to seek medical care.
The only known way to prevent post-concussion syndrome is to avoid the head injury in the first place.
Avoiding head injuries
Although you can't prepare for every potential situation, here are some tips for avoiding common causes of head injuries:
- Fasten your seat belt whenever you're traveling in a car, and be sure children are in age-appropriate safety seats. Children under 13 are safest riding in the back seat, especially if your car has air bags.
- Use helmets whenever you or your children are bicycling, roller-skating, in-line skating, ice-skating, skiing, snowboarding, playing football, batting or running the bases in softball or baseball, skateboarding, or horseback riding. Wear a helmet when riding a motorcycle.
- Take action at home to prevent falls, such as removing small area rugs, improving lighting and installing handrails. |
Findings and Solutions in the Living Planet Report 2012
The WWF's Living Planet Report (LPR) is the world's leading science-based analysis of the health of the Earth and the impact of human activity. The ninth biennial edition, released in May 2012, reviews the cumulative pressures humans are putting on the planet and the consequent decline in the health of forests, rivers and oceans. Its key finding is that humanity's demands are exceeding the planet's capacity to sustain us.
The report concludes that biodiversity has declined globally by 28 percent between 1970 and 2008. In the tropics, the situation is more than twice as bad, with a decline of biodiversity of around 60 percent. The loss of biodiversity has many critical impacts including reduced carbon storage capacity, less freshwater and diminished fisheries.
Since 1996, the demand on natural resources has doubled. We currently use the equivalent of 1.5 planets to support human activities and in a business as usual scenario, by 2030, it is estimated that we will need two Earths to support human activity.
Wealthy countries are largely to blame for the state of the planet as they have a footprint which is five times greater than low income countries. Even though richer regions have a much larger environmental impact than low income areas, the poor suffer disproportionately from declining biodiversity.
Ecological footprint and biocapacity
The biocapacity of the Earth and the ecological footprint can be expressed in a common unit called a global hectare (gha). A global hectare represents the average productivity of all biologically productive areas (measured in hectares) on Earth in a given year. The biocapacity available per person, measured in gha, has been steadily declining.
Both population and the average per capita footprint have increased since 1961. The available biocapacity per person has nearly halved in the same time. Since the 1970s, humanity’s annual demand on the natural world has exceeded what the Earth can renew each year.
The Ecological Footprint tracks humanity’s demands on the biosphere. Over the last several decades, there is clear evidence of a consistent trend of over-consumption. At current levels, it takes 1.5 years for the Earth to fully regenerate the renewable resources that people are using in a single year. If everyone lived like an average resident of the U.S., a total of four Earths would be required to regenerate humanity’s annual demand on nature.
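The "number of planets" figures above come from a simple ratio of per-capita footprint to per-capita biocapacity. The C sketch below illustrates that arithmetic; the per-capita values used are rough illustrative assumptions, not figures taken from the report.

```c
#include <stdio.h>

/* Rough illustration of the "planets needed" ratio.
   The per-capita values below are illustrative assumptions only. */
int main(void)
{
    double world_footprint = 2.7;  /* assumed world average footprint, gha per person */
    double us_footprint    = 7.2;  /* assumed U.S. footprint, gha per person          */
    double biocapacity     = 1.8;  /* assumed available biocapacity, gha per person   */

    printf("Planets needed at the world average: %.1f\n", world_footprint / biocapacity);
    printf("Planets needed at the U.S. level:    %.1f\n", us_footprint / biocapacity);
    return 0;
}
```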
The per capita Ecological Footprint of high-income nations dwarfs that of low- and middle-income countries. The Living Planet Index for high-income countries shows an increase of 7 per cent between 1970 and 2008, while the index for low-income countries has declined by 60 per cent during the same time frame.
The growing levels of greenhouse gases will only worsen as there are less forests and other vegetation to absorb the rising levels of greenhouse gases. This will lead to faster rates of climate change and ocean acidification. These impacts will further erode biodiversity and undermine the resources on which people depend for their survival.
The amount of forest land needed to sequester carbon emissions is the largest component of the Ecological Footprint (55 percent). The carbon storage service provided by the world’s forests is vital for climate stabilization. The rapid degeneration of tropical forests is particularly destructive given that tropical forests store the most carbon. Almost half of this above-ground carbon is in the forests of Latin America, with 26 per cent in Asia, and 25 per cent in Africa.
Forests are being cleared and degraded through human activities; this not only deprives us of a carbon sink, it actually releases greenhouse gases, especially CO2, into the atmosphere. Globally, around 13 million ha of forest were lost each year between 2000 and 2010. Deforestation and forest degradation currently account for up to 20 per cent of global anthropogenic CO2 emissions – the third-largest source after coal and oil. This makes forest conservation a vital strategy in global efforts to drastically cut greenhouse gas emissions.
Freshwater ecosystems occupy approximately 1 percent of the Earth’s surface yet are home to around 10 per cent of all known animal species. Rivers provide services that are vital to the health and stability of human communities, including pollution control, fisheries, water, navigation, trade, detoxification and hydrological flow. But numerous pressures, including land use change, water use, infrastructure development, pollution and global climate change combine to erode the health of rivers and lakes around the world.
The rapid development of water management infrastructure – such as dams, dykes, levees and diversion channels – has left very few large rivers entirely free-flowing. Of the approximately 177 rivers greater than 1,000 km in length, only around a third remain free-flowing. Free-flowing rivers sustain a wealth of natural processes such as sediment transport, nutrient delivery, migratory connectivity and flood storage.
Oceans and fisheries
The world’s oceans supply fish and other seafood that form a major source of protein for billions of people. Oceans also provide seaweed and marine plants used for the manufacture of food, chemicals, energy and construction materials. Marine habitats such as mangroves, coastal marshes and reefs form critical buffers against storms and tsunamis and store significant quantities of carbon. Some of these habitats, especially coral reefs, support important tourism industries. Further, ocean waves, winds and currents offer considerable potential for creating renewable energy supplies. Oceans offer a wide range of benefits from food production to property protection; however they are being threatened by overexploitation, greenhouse gas emissions and pollution.
Over the past 100 years, the use of our oceans and the services they provide has intensified. We are currently engaged in unsustainable fishing and aquaculture and we are exploiting offshore oil and gas as well as engaging in seabed mining.
Perhaps the most dramatic visible impacts have been on the world's fisheries. A nearly five-fold increase in global catch, from 19 million tonnes in 1950 to 87 million tonnes in 2005, has left many fisheries on the verge of collapse. Overfishing has radically reduced the numbers of large predatory fish like marlin, tuna and billfish. What we have done to the top of the marine food chain has had repercussions all the way down to the bottom. The near extirpation of top predators has significantly increased the numbers of smaller marine animals, which has in turn led to a reduction of algae and undermined coral health.
One planet perspective solutions
The report indicates that there is still time to reverse current troubling trends. It starts with making better choices that place the natural world at the center of our economies. This translates to eco-conscious business models and environmentally sensitive lifestyles. As stated in the LPR, "nature is the basis of our well-being and our prosperity.” Therefore, to preserve the foundation of all human economies, ecosystem services must be preserved and where necessary, restored.
In order to reverse the declining Living Planet Index, the collective ecological footprint must be brought within the Earth’s carrying capacity. The only way we can do this is if we produce more with less and consume better and wiser.
The report concludes that we can create a prosperous future that provides food, water and energy for the 10 billion people that are expected by 2050. But to avoid dangerous climate change and achieve sustainable development we need to embed a fundamental reality into the way we think about our economies, business models and lifestyles: All our endeavors must pay heed to the fact that the Earth’s natural capital is finite.
We must reduce greenhouse gases and decouple human development from unsustainable consumption (moving away from material and energy-intensive commodities).
High income countries must conserve healthy rivers, lakes and wetlands to ensure that poorer countries have access to water. We must also use water more efficiently including smarter irrigation techniques and better resource planning.
We must strive to meet all of our energy needs from clean and abundant sources like the wind and the sun. But the first imperative is to cut our energy usage in half through greater efficiency in our buildings, cars and factories.
Other solutions proposed by the LPR include better waste reduction, better seeds and better cultivation techniques. The report also advocates restoring degraded lands and changing diets (lowering meat consumption).
- Preserve the Earth’s biodiversity and restore key ecological processes necessary for food, water and energy security, as well as climate change resilience and adaptation.
- Make production systems that lower humanity’s ecological footprint to within the limits of the Earth’s carrying capacity. This entails significant reductions in the use of land, water, energy and other natural resources.
- Make global consumption patterns conform to the Earth’s biocapacity.
- Stop the short term profit focused thinking that leads to over-exploitation of resources and to the destruction of ecosystems. See the very significant long-term benefits of protecting natural capital.
- Establish equitable resource governance to shrink and share our resource use so that it conforms to the Earth’s regenerative capacity. In addition, sustainable efforts should be made to promote health and education, as well as improve access to food, water and energy. Finally, we need a new definition of well-being and success that includes personal, societal and environmental health.
Richard Matthews is a consultant, eco-entrepreneur, green investor and author of numerous articles on sustainable positioning, eco-economics and enviro-politics. He is the owner of THE GREEN MARKET, a leading sustainable business blog and one of the Web’s most comprehensive resources on the business of the environment. Find The Green Market on Facebook and follow The Green Market’s twitter feed. |
Raz-Plus resources organized into weekly content-based units and differentiated instruction options.
Realistic (fiction), 220 words, Level H (Grade 1), Lexile 430L
Nami has a problem: She must make the perfect gift for each person in her family. It is her family tradition. She cannot think of the perfect gift for Aunt Hoshi. What will she make for her aunt? Nami's Gifts provides the opportunity to introduce the story elements of problem and solution to emergent readers. Pictures support the text.
Guided Reading Lesson
Use of vocabulary lessons requires a subscription to VocabularyA-Z.com.
Use the reading strategy of retelling to understand and remember story events
Problem and Solution : Identify problem and solution
Final Blends : Discriminate final consonant blend /st/
Consonant Blends : Identify final consonant blend st
Grammar and Mechanics
Past-Tense Verbs : Identify and use past-tense verbs
High-Frequency Words : Recognize and write the high-frequency word them
Think, Collaborate, Discuss
Promote higher-order thinking for small groups or whole class
Argentina is a nation in South America. In 1816, the United Provinces of the Río de la Plata declared their independence from Spain. After Bolivia, Paraguay, and Uruguay went their separate ways, the area that remained became Argentina. The country's population and culture were heavily shaped by immigrants from throughout Europe, most particularly Italy and Spain, which provided the largest percentage of newcomers from 1860 to 1930. Up until about the mid-20th century, much of Argentina's history was dominated by periods of internal political conflict between Federalists and Unitarians and between civilian and military factions. After World War II, an era of Peronist populism and direct and indirect military interference in subsequent governments was followed by a military junta that took power in 1976. Democracy returned in 1983 after a failed bid to seize the Falkland (Malvinas) Islands by force, and has persisted despite numerous challenges, the most formidable of which was a severe economic crisis in 2001-02 that led to violent public protests and the successive resignations of several presidents.
- The CIA World Factbook
Ancient Greece was where democracy was born, theater flourished and some of the greatest philosophical minds ever pondered the meaning of existence. But it was also a period of savage warfare and fierce brutality. Greek infantrymen were called hoplites. They marched in organized rows called phalanxes with their large shields held in front, forming an impenetrable wall as they advanced. The five most common arms Greeks carried were devastatingly effective.
Long Spears for Thrusting
Doru were spears between 7 and 9 feet long that were carried vertically in the right hand and used for thrusting. The long wooden pole was topped by an iron leaf-shaped point called an aichme. On the opposite end was the sauroter, a bronze spike. The sauroter was a clever addition to the doru as it could be used several ways – as an additional weapon, as a counterweight to make the spear easier to manage and as a tip on which to plant the weapon into the ground so it remained upright if the soldier tired of carrying it.
Xiphos were straight double-edged swords. They were secondary weapons that were short in length and made of iron. Xiphos were useful when hoplites lost or broke their doru in the thick of battle or when they fought in close combat.
Kopis, also called Machaira, were an alternative to xiphos. They were comparable to xiphos in size and length -- a little over 2 feet -- but wider and with a gently curved cutting edge, similar to a large butcher knife. They were also made of iron and quite handy, as soldiers could also use them for cutting meat.
Javelins for Throwing
Akontia, now known as javelins, are one of the few Greek weapons that have stayed with us in the modern age. The soldiers who threw akontia were known as akontistai. The akontistai made up the largest segment of the peltasts, specialized mobile troops who carried missile weapons. The lightly armored peltasts roved the battlefield, often protecting the vulnerable left and right flanks of the hoplite forces during forward surges.
The sling was another missile weapon carried by peltast troops. Slings were made from leather and strung with animal sinew. The projectiles could be rocks, balls of clay, or lead bullets. The bullets themselves were sometimes inscribed with the names of Greek cities or playful messages for the enemy like ‘Ouch!’ or ‘Pay attention’.
You have selected a free Microsoft Corporation tutorial for the Microsoft Office Specialist (MOS) certification:
77-418: Word 2013 Core Topics : Apply references :
Create endnotes, footnotes, and citations • Inserting endnotes, managing footnote locations, configuring endnote formats, modifying footnote numbering, inserting citation placeholders, inserting citations, inserting bibliography, changing citation styles
When creating formal research papers, students are expected to create documents that meet formal standards. There are many documentation standards in today’s business and academic world. Some of the standards are universal; some are specific to a particular organization. Footnotes and endnotes typically convey additional information about statements in the body of a document, or they describe the sources of quoted material. Each footnote or endnote is automatically numbered. The number, called a reference mark, appears as a superscript in the body of the document; the same number appears at the beginning of the footnote or endnote text as an identifier.
The reference mark is a hyperlink to the location of the corresponding note, and the number in the footnote or endnote text is a hyperlink to the corresponding reference mark. You typically see footnotes and endnotes used in scholarly papers, which rely on the following Microsoft Word reference tools to do the job:
- Footnotes and endnotes You can provide supporting information without interrupting the flow of the primary content by inserting the information in footnotes at the bottom of the relevant pages or endnotes at the end of the document.
- Table of contents You can provide an overview of the information contained in a document and help readers locate topics by compiling a table of contents that includes page numbers or hyperlinks to each heading.
- Index You can help readers locate specific information by inserting index entry fields within a document and compiling an index of keywords and concepts that directs the reader to the corresponding page numbers.
- Information sources and a bibliography You can appropriately attribute information to its source by inserting citations into a document. Word will then compile a professional bibliography from the citations.
When you want to make a comment about a statement in a document, you can enter the comment as a footnote or an endnote. Doing so inserts a number or symbol called a reference mark. By default, footnote reference marks use the 1, 2, 3 number format, and endnote reference marks use the i, ii, iii number format.
For many people, greenhouses seem like small miracles, enabling dramatic and continuous plant growth in a controlled indoor environment even when outdoor temperatures fall far below what plants need. The science behind how these structures trap heat and promote plant growth is not a mystery, and greenhouses showcase some basic processes that explain how solar heating and plant development work.
The passage of the sun's warmth through plastic or glass panels is vital to greenhouse operation. Most wavelengths of solar radiation, except long thermal infrared waves, pass through the transparent panes and are converted into heat inside. Everything in a greenhouse, including soil and plants, absorbs radiation, contributing to heat production. Growing plants and other objects inside the greenhouse re-emit solar radiation at longer wavelengths that cannot escape through the transparent panes, thereby heating the greenhouse even further. Since hot air is trapped, temperatures keep rising throughout the day, also causing water to evaporate and creating high humidity that aids plant growth.
Place the greenhouse on the south or southwest side of a structure so that it receives the greatest amount of sunlight during the day. Many hours of sunlight are essential for maximizing the conversion of solar energy to thermal energy. Do not put a greenhouse in a shady spot or adjacent to structures that will block sunlight for the majority of the day.
Heat and Humidity
When air begins to cool at night, water vapor condenses on the inside of the greenhouse roof and walls. Well-constructed greenhouses have vents or fans to remove excess moisture and heat from the interior. Fans also keep the temperature even by circulating air, moving hot air upward toward roof vents and mixing it with cooler air that remains near the ground. Daytime heating might not be enough, however, during winter months when outside temperatures are low. At these times, greenhouses need heaters at night to maintain sufficient warmth.
All greenhouse components, including wood, water, soil, bricks and flooring, absorb and release heat to a different extent. Metals like iron and aluminum heat up and lose heat quickly, while wood, water and soil absorb and release heat slowly. Because of these different rates, greenhouse design is important in achieving optimal plant growth. Efficient design is particularly important at night, when the thermal mass, or stored heat, dissipates. Components that absorb and release heat slowly are vital for maintaining a steady temperature at night.
Although greenhouse plants receive some moisture from water vapor, the water present in the ambient air is insufficient to maintain a proper greenhouse environment. Larger greenhouses have automatic watering systems to maintain soil moisture. Homeowners can either install an automatic system or water manually.
What Is Inflation?
Inflation refers to how the price for goods and services increases over time. For example, a loaf of bread that would have cost $2 10 years ago might cost $4 today. Even though you can't nail down exactly what inflation rates will be in the future, understanding how inflation works allows you to plan for your future financial goals.
Inflation measures the change in cost over a broad range of items, not just one or two, because it measures how the cost of living changes for an average person or family. For example, inflation measures the cost of goods like food, clothing, energy and electronics, and services like haircuts, insurance and housing. During any given period, some of these items will go up in cost more than others; inflation refers to the average increase in price of such goods and services.
Because different types of goods see prices inflate at varying rates, inflation doesn't have the same impact on everyone. Instead, it affects people differently depending on what they purchase. For example, if food prices are going up rapidly, a family of four is likely to feel a bigger hit than someone living alone because it has more mouths to feed. Alternatively, if a single person has a long commute and gas prices rise drastically, that will have a bigger impact on his budget than someone who has a minimal commute or works from home.
Planning for Inflation
Inflation affects how you plan your future savings goals, such as buying a home or saving for retirement. For example, say you want to buy, in five years, a home that costs $200,000 today. If after five years you have $200,000 in your savings account, you're likely to be short of funds because the price of the home may have increased. If inflation amounts to 2 percent per year, you're going to need more than $220,000 in five years.
Relative Purchasing Power
The fact that goods and services are likely to cost more in the future than they do today means that your dollar likely won't buy as much in the future as it can right now. As a result, it's important to measure your net worth relative to what it can buy, rather than just as a raw number. For example, say you have $20,000 in your retirement account today. If you simply let it sit in an account paying 1.5 percent interest for 20 years, you'll have almost $27,000. However, if inflation is 2 percent, you won't be able to buy as much with that $27,000 in 20 years as you can with the $20,000 today.
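The two examples above are just compound-growth calculations. The short C sketch below (not from the original article) reproduces them and also converts the future savings balance back into today's dollars; the printed figures are approximate.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative compound-growth arithmetic for the examples above. */
int main(void)
{
    double home    = 200000.0 * pow(1.02, 5);    /* home price after 5 years of 2% inflation  */
    double savings = 20000.0 * pow(1.015, 20);   /* savings after 20 years at 1.5% interest   */
    double real    = savings / pow(1.02, 20);    /* that balance expressed in today's dollars */

    printf("Home price in 5 years:    $%.0f\n", home);     /* about $220,816 */
    printf("Savings in 20 years:      $%.0f\n", savings);  /* about $26,937  */
    printf("Value in today's dollars: $%.0f\n", real);     /* about $18,128  */
    return 0;
}
```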
Grand Expectations and Individual Influence
1. Grand Expectations
The years from 1945 to 1974 were an era of Grand Expectations. At the end of World War II, spontaneous celebrations filled downtown streets across the nation. The war was over. Sons and daughters were coming home. Democracy triumphed over fascism. The Great Depression was a memory. Prosperity was ahead. It was time to purchase a home, start a family, and return to work. However, despite the optimism, new concerns were on the horizon. Select what you consider to be the most significant challenge of this era and discuss its historical significance. Make sure to incorporate what you learned from the Multi-Media podcast you selected this week. Include the name of the historian, his or her central argument, the title of the podcast, and how it relates to the topic of the week.
2. Individual Influence
Every era in American history has had notable men and women who shaped their times, for better or worse. Identify one person from this week's reading who you believe had a significant impact on the years from 1945 to 1974. This could include a president who introduced important policies that changed the country, a social activist who inspired reform, or someone else who had an important positive or negative impact on this era. Briefly explain who this person was, their specific "individual influence" on the times, and then discuss why you made your selection.
A linear congruential generator is a simple method often used to generate pseudorandom number sequences. The sequence starts with an arbitrary number (called a "seed"). Then to generate the next number in the sequence, multiply by a constant ("multiplier"), then add another constant ("increment").
In this problem, we will use a seed of 0, a multiplier of 134775813, and an increment of 1. Thus, the first few numbers in the sequence are:
- 0th: 0 (This is the seed. We'll call it the 0th number)
- 1st: 1 (0 × 134775813 + 1)
- 2nd: 134775814 (1 × 134775813 + 1)
When the number overflows, keep the lowest 32 bits (i.e., reduce mod 2^32). With a 32-bit unsigned type, this happens automatically without extra code.
Write a function that returns the nth number in the above sequence.
unsigned random (unsigned n);
Expected solution length: Around 10 lines.
Write your solution here |
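Here is one possible solution sketch in C, assuming the parameters above (seed 0, multiplier 134775813, increment 1) and a 32-bit unsigned type; the function is renamed random_lcg to avoid clashing with the random() function available on many systems.

```c
#include <stdio.h>

/* Returns the nth number of the LCG described above.
   Assumes unsigned is 32 bits, so overflow wraps mod 2^32 automatically. */
unsigned random_lcg(unsigned n)
{
    unsigned x = 0u;                      /* 0th number: the seed */
    for (unsigned i = 0; i < n; i++)
        x = x * 134775813u + 1u;          /* multiply, then add the increment */
    return x;
}

int main(void)
{
    /* Quick check against the sequence listed above: 0, 1, 134775814, ... */
    printf("%u %u %u\n", random_lcg(0), random_lcg(1), random_lcg(2));
    return 0;
}
```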
Learning theory was put forward by a group of behaviourists. It states that we are blank slates and that we come into the world not knowing anything. It also says that we learn all types of behaviour, including how to form attachments. Behaviour is learned through either classical or operant conditioning, and we learn to form attachments through food. Classical conditioning is learning through association between something in the environment (a stimulus) and a physical reaction (a response). Classical conditioning holds that we learn passively and that the response is normally a reflex because it is automatic. Ivan Pavlov was the first person to describe this type of learning, using his observations of salivating dogs; however, we can apply it to human attachment. Before learning, or conditioning, occurs, the unconditioned stimulus (UCS), food, produces an innate reflex reaction known as the unconditioned response (UCR), in this case pleasure. The food and the pleasure are both described as unconditioned because no learning has occurred at this stage of the process.
When the infant's mother, initially a neutral stimulus (NS), is present, food (UCS) follows, which once again leads to the innate reflex of pleasure (UCR). After conditioning, the mother is no longer a neutral stimulus; she has become the conditioned stimulus (CS) who triggers the behavioural response of pleasure (CR). The mother becomes a way of gaining pleasure, and therefore an attachment has been formed. Operant conditioning, on the other hand, occurs when we play an active part in our learning from the environment. We learn as a consequence of something we have already done, with rewards and reinforcement strengthening a behaviour and increasing the chance of that behaviour being repeated. An example of operant conditioning is when a child who has eaten all its food is rewarded by its mother with hugs, smiles and kisses. This will encourage the child to do it again in the future.
The infant is therefore learning from its past experiences. The other type of reinforcement is negative reinforcement. This is when something pleasant occurs as a result of escaping something unpleasant. Dollard and Miller (1950) argued that when infants are hungry, they want food to get rid of this discomfort, so they cry and their mother comes and feeds them. This removes the discomfort. The infant is now comfortable as a consequence of escaping from an unpleasant state (hunger). This work supports the learning theory's core assumptions, as it explains the attachment between a mother (a secondary reinforcer) and the infant through the use of food. Harlow's (1959) research, however, shows that food is not everything and that comfort is more important than food.
He tested this by carrying out an experiment using two surrogate mothers and a baby rhesus monkey. The monkey was placed in a cage with the two surrogate mothers: one had a bottle of milk (food), while the second had no milk but was covered in terrycloth. When the monkey was scared, it moved to the mother with which it felt safest, and this was the terrycloth mother. We can conclude that monkeys have an unlearned need for comfort which is as basic as the need for food. This shows that food is not everything and that learning theory places too much emphasis on it. Another piece of evidence comes from Schaffer and Emerson (1964).
Their study was conducted using 60 Scottish infants, with follow-ups at four-weekly intervals throughout the first year. The mothers reported their infants' behaviour in seven everyday situations, e.g. separations such as being left alone in a room or with a babysitter, and how the infants responded when the mother came back. They found that the infants were often clearly attached to people who did not carry out any caretaking activities such as feeding, for example the father.
Therefore we can conclude that an infant's attachment figure was determined by how that person responded to the infant's behaviour and by the total amount of stimulation they provided (e.g. talking and touching), not by food. Since food may not be the main reinforcer, the learning theory of attachment is reductionist, as it focuses largely on the nurture side of the nature versus nurture debate. For theories to be considered more holistic, they need to integrate nature and nurture together to provide a more detailed and precise explanation of how human attachments form.
MADISON, WI, APRIL 23, 2007 -- North Carolina State researchers recently discovered a test that quickly predicts nitrogen levels in the humid soil conditions of the southeastern United States. These scientists report that the Illinois Soil Nitrogen Test (ISNT) can assess the nitrogen levels in soil with more accuracy than current soil-based tests. This test will allow growers to cut back on the amount of nitrogen-based fertilizer added to soil, leading to economic and environmental benefits.
The proper management of nitrogen is critical to the success of many crop systems. Based on an assessment of the natural amount of nitrogen in soil, growers calculate their optimum nitrogen rates, the concentration of nitrogen that must be present in fertilizer in order to achieve expected crop yields. Under- and over-applying nitrogen fertilizer to corn crops often leads to adverse economic consequences for corn producers. Excess levels of nitrogen in nature also pose serious threats to the environment. Agricultural application of nitrogen has been linked to rising nitrate levels and the subsequent death of fish in the Gulf of Mexico and North Carolina’s Neuse River.
"Although offsite nitrogen contamination of ground and surface waters could be reduced if nitrogen rates were adjusted based on actual field conditions, there is currently no effective soil nitrogen test for the humid southeastern U.S.," said Jared Williams, lead author of the North Carolina State study that was published in the March-April 2007 issue of the Soil Science Society of America Journal. This research was supported in part by a USDA Initiative for Future Agricultural and Food Systems (IFAFS) grant.
From 2001 to 2004, scientists collected and tested the soil from 35 different sites in North Carolina. According to the North Carolina scientists, the collected soil samples were representative of millions of hectares in agricultural production in the southeastern USA. Corn was planted at each site with a range of nitrogen fertilizer rates, and the optimum nitrogen rates and the soil assay results were compared among the sites.
From the collected samples, researchers discovered that the Illinois Soil Nitrogen Test (ISNT) could be used to accurately measure the economic optimum nitrogen rates (EONR) of southeastern soils, despite moderate weather variation over the collection period. While the test can be used to predict the optimum nitrogen rates, the relationship between ISNT and EONR varied by soil drainage class. Researchers believe that these differences represent differences in organic matter that lead to less mineralization and/or more denitrification on poorly drained soils. The results indicate that the Illinois Soil Nitrogen Test can serve as a model for predicting economic optimum nitrogen rates on well- and poorly drained soils and show promise as a tool for nitrogen management.
"Additional research is needed to calibrate and validate the EONR versus ISNT relationships under a wider variety of conditions," says Williams. "Because the Illinois Soil Nitrogen Test predicted EONR robustly to different cost/price ratios, ISNT has the potential to modify or replace current nitrogen recommendation methods for corn."
American Society of Agronomy. April 2007.
Struggle for Fair Housing
Despite Supreme Court decisions, including Shelley v. Kraemer (1948) and Jones v. Mayer Co. (decided in June 1968), barring the exclusion of African Americans or other minorities from certain sections of cities, race-based housing patterns were still in force by the late 1960s, and those who challenged them often met with resistance, hostility and even violence. Meanwhile, as a growing number of African-American and Hispanic members of the armed forces fought and died in Vietnam, on the home front their families had trouble renting or purchasing homes in certain residential areas because of their race or national origin. In this climate, organizations such as the National Association for the Advancement of Colored People (NAACP), the G.I. Forum and the National Committee Against Discrimination in Housing lobbied for new fair housing legislation to be passed.
The proposed civil rights legislation of 1968 expanded on and was intended as a follow-up to the historic Civil Rights Act of 1964. The bill’s original goal was to extend federal protection to civil rights workers, but it was eventually expanded to address racial discrimination in housing. Title VIII of the proposed Civil Rights Act was known as the Fair Housing Act, later used as a shorthand description for the entire bill. It prohibited discrimination concerning the sale, rental and financing of housing based on race, religion, national origin and sex.
Passage by the Senate and House
In the Senate debate over the proposed legislation, Senator Edward Brooke of Massachusetts–the first African-American ever to be elected to the Senate by popular vote–spoke personally of his return from World War II and his inability to provide a home of his choice for his new family because of his race. In early April 1968, the bill passed the Senate, albeit by an exceedingly slim margin, thanks to the support of the Senate Republican leader, Everett Dirksen, which defeated a southern filibuster. It then went to the House of Representatives, from which it was expected to emerge significantly weakened; the House had grown increasingly conservative as a result of urban unrest and the increasing strength and militancy of the Black Power movement.
On April 4–the day of the Senate vote–the civil rights leader Martin Luther King Jr. was assassinated in Memphis, Tennessee, where he had gone to aid striking sanitation workers. Amid a wave of emotion–including riots, burning and looting in more than 100 cities around the country–President Lyndon B. Johnson increased pressure on Congress to pass the new civil rights legislation. Since the summer of 1966, when King had participated in marches in Chicago calling for open housing in that city, he had been associated with the fight for fair housing. Johnson argued that the bill would be a fitting testament to the man and his legacy, and he wanted it passed prior to King’s funeral in Atlanta. After a strictly limited debate, the House passed the Fair Housing Act on April 10, and President Johnson signed it into law the following day.
Impact of the Fair Housing Act
Despite the historic nature of the Fair Housing Act, and its stature as the last major act of legislation of the civil rights movement, in practice housing remained segregated in many areas of the United States in the years that followed. From 1950 to 1980, the total black population in America’s urban centers increased from 6.1 million to 15.3 million. During this same time period, white Americans steadily moved out of the cities into the suburbs, taking many of the employment opportunities blacks needed into communities where they were not welcome to live. This trend led to the growth in urban America of ghettoes, or inner city communities with high minority populations that were plagued by high unemployment, crime and other social ills.
In 1988, Congress passed the Fair Housing Amendments Act, which expanded the law to prohibit discrimination in housing based on disability or on family status (pregnant women or the presence of children under 18). These amendments brought the enforcement of the Fair Housing Act even more squarely under the control of the U.S. Department of Housing and Urban Development (HUD), which sends complaints regarding housing discrimination to be investigated by its Office of Fair Housing and Equal Opportunity (FHEO). |
To provide a comprehensive analysis of the status and trends of the world's agricultural biodiversity and of their underlying causes (including a focus on the goods and services agricultural biodiversity provides), as well as of local knowledge of its management.
Processes for country-driven assessments are in place, or under development, for the crop and farm-animal genetic resources components. The assessments draw upon, and contribute to, comprehensive data and information systems. There is also much information about resources that provide the basis for agriculture (soil, water), and about land cover and use, climatic and agro-ecological zones. However, further assessments may be needed, for example, for microbial genetic resources, for the ecosystem services provided by agricultural biodiversity such as nutrient cycling, pest and disease regulation and pollination, and for social and economic aspects related to agricultural biodiversity. Assessments may also be needed for the interactions between agricultural practices, sustainable agriculture and the conservation and sustainable use of the components of biodiversity referred to in Annex I to the Convention. Understanding of the underlying causes of the loss of agricultural biodiversity is limited, as is understanding of the consequences of such loss for the functioning of agricultural ecosystems. Moreover, the assessments of the various components are conducted separately; there is no integrated assessment of agricultural biodiversity as a whole. There is also lack of widely accepted indicators of agricultural biodiversity. The further development and application of such indicators, as well as assessment methodologies, are necessary to allow an analysis of the status and trends of agricultural biodiversity and its various components and to facilitate the identification of biodiversity-friendly agricultural practices (see programme element 2).
1.1. Support the ongoing or planned assessments of different components of agricultural biodiversity, for example, the reports on the state of the world's plant genetic resources for food and agriculture, and the state of the world's animal genetic resources for food and agriculture, as well as other relevant reports and assessments by FAO and other organizations, elaborated in a country-driven manner through consultative processes.
1.2. Promote and develop specific assessments of additional components of agricultural biodiversity that provide ecological services, drawing upon the outputs of programme element 2. This might include targeted assessments on priority areas (for example, loss of pollinators, pest management and nutrient cycling).
1.3. Carry out an assessment of the knowledge, innovations and practices of farmers and indigenous and local communities in sustaining agricultural biodiversity and agro-ecosystem services for and in support of food production and food security.
1.4. Promote and develop assessments of the interactions between agricultural practices and the conservation and sustainable use of the components of biodiversity referred to in Annex I to the Convention.
1.5. Develop methods and techniques for assessing and monitoring the status and trends of agricultural biodiversity and other components of biodiversity in agricultural ecosystems, including:
- Criteria and guidelines for developing indicators to facilitate monitoring and assessment of the status and trends of biodiversity in different production systems and environments, and the impacts of various practices, building wherever possible on existing work, in accordance with decision V/7, on the development of indicators on biological diversity, in accordance to the particular characteristics and needs of Parties;
- An agreed terminology and classification for agro-ecosystems and production systems to facilitate the comparison and synthesis of various assessments and monitoring of different components of biodiversity in agricultural ecosystems, at all levels and scales, between countries, and regional and international partner organizations;
- Data and information exchange on agricultural biodiversity (including available information on ex situ collections) in particular through the clearing-house mechanism under the Convention on Biological Diversity, building on existing networks, databases, and information systems;
- Methodology for analysis of the trends of agricultural biodiversity and its underlying causes, including socio-economic causes.
Ways and means
Exchange and use of experiences, information and findings from the assessments shall be facilitated by Parties, Governments and networks with consultation between countries and institutions, including use of existing networks.
Country-driven assessments of genetic resources of importance for food and agriculture (activity 1.1) shall be implemented, including through programmes of FAO and in close collaboration with other organizations, such as CGIAR. Resources may need to be identified to support additional assessments (activity 1.2), which would draw upon elements of existing programmes of international organizations, and the outputs of programme element 2.
This programme element, particularly activity 1.5, will be supported through catalytic activities, building upon and bringing together existing programmes, in order to assist Parties to develop agricultural biodiversity indicators, agreed terminology, etc., through, inter alia, technical workshops, meetings and consultations, e-mail conferences, preparation of discussion papers, and travel. Funding of these catalytic activities would be through the Secretariat, with in-kind contributions from participating organizations.
Timing of expected outputs
A key set of standard questions and a menu of potential indicators of agricultural biodiversity that may be used by Parties at their national level, and agreed terminology of production environments by 2002.
Reports on the state of the world's genetic resources, as programmed, leading progressively towards a comprehensive assessment and understanding of agricultural biodiversity, with a focus on the goods and services it provides, by 2010. |
In this installment on the history of atom theory, physics professor (and my dad) Dean Zollman discusses how two separate teams an ocean apart proved Louis de Broglie right about matter sometimes behaving as waves rather than particles.—Kim
By Dean Zollman
In the last post, I discussed Louis de Broglie’s radical hypothesis that matter sometimes behaved as waves. De Broglie did not have any direct evidence for his theory. However, he was able to show how his ideas could be connected to Niels Bohr’s model of the atom. Once de Broglie introduced these ideas, both theoretical and experimental progress occurred rather quickly. In this post, I will discuss the experiments that were able to conclude that de Broglie was right. Next time, we will look at the major theoretical advance in the 1920s. (This approach is not quite chronological, but many things happened and influenced each other over a short time.)
As I have discussed previously, the fundamental property that distinguishes waves from particles is interference. When two particles meet, they bounce off each other. Two waves can reinforce each other (constructive interference), cancel each other (destructive interference), or have some result between these two extremes.
To observe interference experimentally, we need to arrange for waves to move along two different paths and then meet. The distance that each wave travels determines the type of interference. In one method, the wave goes through two openings simultaneously and then combines. In another, the waves reflect off two surfaces that are very close to each other.
This second method is more common in everyday life. When you look at a soap bubble or at a puddle with a thin layer of oil on it, you frequently can see colors, with each color at a different location. The colors are created by some light reflecting from the top of the oil and interfering with light that travels through the oil and reflects from the water. The colors appear because each color has a different wavelength and thus constructive interference occurs at different places. Light passing through openings is less common in everyday life, although you may have seen an interference effect by looking at light through a fine lace curtain.
To show that particles have wave properties, physicists needed to observe some type of interference with particles. Then, they would need to compare the results of the interference with de Broglie’s predictions. The most logical particle to start with was the smallest one known at that time—the electron.
The biggest difference when we consider electrons instead of light is the size of the objects and wavelengths. To cause interference, the sizes need to be approximately the same as the wavelength of the object doing the interfering. The wavelength of light is quite small but within the general range of the thickness of a soap bubble or an oil slick. One can also make openings for light to pass through by using a razor blade carefully. But using de Broglie’s result, we find that the wavelength of an electron is much smaller than that of visible light. It is similar to the wavelength of X-rays.
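To get a feel for the numbers, here is a minimal sketch in Python that estimates the de Broglie wavelength of an electron accelerated through a given voltage. The voltages are illustrative choices, not values taken from the experiments described below.

```python
import math

# Physical constants (SI units)
h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C

def de_broglie_wavelength(voltage):
    """Wavelength (m) of an electron accelerated from rest through `voltage` volts,
    using the non-relativistic relation e*V = p**2 / (2*m)."""
    momentum = math.sqrt(2 * m_e * e * voltage)
    return h / momentum

for V in (50, 100, 1000):                     # illustrative accelerating voltages
    lam = de_broglie_wavelength(V)
    print(f"{V:5d} V -> wavelength = {lam * 1e9:.3f} nm")

# Visible light spans roughly 400-700 nm; the wavelengths above come out near
# 0.04-0.2 nm, i.e. in the X-ray range, which is why crystals are needed.
```

Running it shows why slits that work for visible light are hopeless here: the spacings needed are atomic in scale.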
The solution to this problem is to use crystals. In a crystal, such as table salt, the atoms line up in nice, neat patterns. The layers of atoms create surfaces from which waves can reflect, while the spaces between atoms can serve as openings through which waves can pass. The distances between atoms in the crystal are the sizes we need to observe interference.
Two research efforts—one in the U.S., the other in the U.K.—independently worked on this experiment. In his Nobel Prize lecture one of the experimenters, Clinton Davisson, described the two locations as “… in a large industrial laboratory in the midst of a great city, and in a small university laboratory overlooking a cold and desolate sea.” The experiment in the large industrial laboratory used the reflection method while the one near the cold and desolate sea used transmission through small openings. Both involved crystals.
The effort in the United States took place at Bell Labs. This laboratory was established in 1925 by the old version of AT&T. Its general mission was to conduct research to improve communications. However, a large amount of fundamental research also took place. Over many years, many important scientific discoveries were made at Bell Labs, and some significant devices which are now part of our everyday life (and not just telephones) were invented there. The Wikipedia article on Bell Labs lists these achievements and the Nobel Laureates who worked there.
Clinton Davisson (1881–1958) began working for Western Electric, a division of AT&T, during World War I. After the war, he remained with the company and joined Bell Labs when it was founded. Lester Germer (1896–1971) worked for Western Electric for just a couple of months before entering military service in World War I. During the war, he was one of the first airplane pilots to see action on the front. After the war and a short rest, he was rehired by Western Electric and assigned to work with Davisson. They were assigned projects to look at some aspects of metals that were used in parts of the telephone company’s amplifiers. One aspect of this work involved shooting electrons at the metals. They were looking at how the electrons bounced off metals to better understand the metals’ properties. (Physicists use the phrase elastic scattering of electrons rather than bouncing off.) Wave behavior of electrons was not part of the research agenda. And for a while, they did not see any such behavior.
Davisson described how they became interested in looking at the details of the electron scattering. “Out of this grew an investigation of the distribution-in-angle of these elastically scattered electrons. And then chance again intervened; it was discovered, purely by accident, that the intensity of elastic scattering varies with the [angle].” This change in intensity with angle was much like the different colors of light appearing at different angles when it reflects from the surfaces on a bubble. The phenomenon implies an interference pattern.
As Davisson and Germer investigated the scattering of electrons from crystals, they completed 21 different experiments. For each experiment, they could accelerate the electrons to a speed that they knew. So they also knew the electron’s momentum. In turn, they could calculate the wavelength of the electron and determine at which angle the constructive interference should occur. They compared the angle at which the maximum number of electrons appeared with the angle predicted using de Broglie’s wavelength. For 19 of the experiments, their results matched extremely well with the wavelength predicted by de Broglie.
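That chain of reasoning (accelerating voltage to momentum, momentum to wavelength, wavelength to interference angle) can be sketched in a few lines of Python. The crystal-plane spacing and the voltages below are illustrative placeholders, not the values from their nickel experiments, and the textbook Bragg condition n*lambda = 2*d*sin(theta) stands in for the detailed geometry of their apparatus.

```python
import math

h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C

def electron_wavelength(voltage):
    """de Broglie wavelength (m) of an electron accelerated through `voltage` volts."""
    return h / math.sqrt(2 * m_e * e * voltage)

def bragg_angle_deg(wavelength, d, n=1):
    """Angle of the n-th constructive-interference maximum from n*lambda = 2*d*sin(theta).
    Returns None if the condition cannot be satisfied."""
    s = n * wavelength / (2 * d)
    return math.degrees(math.asin(s)) if s <= 1 else None

d = 2.0e-10                          # hypothetical spacing between atomic planes (0.2 nm)
for V in (40, 60, 100):              # illustrative accelerating voltages
    lam = electron_wavelength(V)
    theta = bragg_angle_deg(lam, d)
    print(f"{V:3d} V: wavelength = {lam * 1e10:.2f} Angstrom, first maximum near {theta:.1f} degrees")
```

Comparing an angle computed this way with the angle actually measured is, in essence, the test Davisson and Germer ran 21 times.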
At about the same time, George P. Thomson (1892–1975), son of J.J. Thomson, who discovered the electron, was conducting a similar investigation in Aberdeen, Scotland. His experiment involved transmitting electrons through a crystal. In his Nobel lecture, Thomson said, “A narrow beam of [electrons] was transmitted through a thin film of matter. In the earliest experiment of the late Mr. Reid, this film was of celluloid, in my own experiment of metal. In both, the thickness was of the order of 0.000001 cm. The scattered beam was received on a photographic plate … and when developed showed a pattern of rings … An interference phenomenon is at once suggested.”
The picture below shows the result of a modern version of Thomson’s experiment, one that is done by many college physics majors. The light and dark circles are locations where constructive and destructive interference of electrons (respectively) occur.
The results indicated that electrons had wave-like behaviors. As with the Davisson-Germer experiment, Thomson accelerated the electrons to a speed that he knew very well. So he could use de Broglie’s hypothesis to determine the wavelength. Using that wavelength, he predicted where the bright rings should occur in the photographs. The agreement between predictions and measurements was “within 1 percent.” G.P. Thomson had shown that indeed electrons behaved as waves.
These two experiments definitively established the wave behavior of matter. This behavior is at the foundation of most of contemporary physics and is critical to the development of many devices that today we take for granted. However, measuring the wave behavior was just one important step. Next time, we will look at the major theoretical advance that was underway at about the same time as these experiments.
Dean Zollman is university distinguished professor of physics at Kansas State University where he has been a faculty member for more than 40 years. During his career he has received four major awards — the American Association of Physics Teachers’ Oersted Medal (2014), the National Science Foundation Director’s Award for Distinguished Teacher Scholars (2004), the Carnegie Foundation for the Advancement of Teaching Doctoral University Professor of the Year (1996), and AAPT’s Robert A. Millikan Medal (1995). His present research concentrates on the teaching and learning of physics and on science teacher preparation. |
Big and Small Numbers in the Living World
Biology makes good use of numbers both small and large. Try these questions involving big and small numbers. You might need to use standard biological data not given in the question. Of course, as these questions involve estimation there are no definitive 'correct' answers. Just try to make your answers to each part as accurate as seems appropriate in the context of the question.
- Estimate how many breaths you will take in a lifetime.
- Estimate the number of people who live within 1 km of your school.
- I look through a microscope and see a cell with a roughly circular cross section of 7 microns. Estimate the volume of the cell.
- The arctic tern migrates from the antarctic to the arctic. Estimate how far an arctic tern flies in its lifetime.
- Estimate the weight of dog food eaten in England each year.
- An amoeba doubles in size every 24 minutes. How long will a sample of size about 1mm by 1mm take to cover a petri dish? Do we need to worry too much about the initial size of the sample in this calculation? (A rough sketch of one approach appears after this list.)
- Estimate how many cells there are in your little finger.
- Estimate the total weight of the pets of everyone in your class.
- How much do you think your brain weighs? How much do you think this weight varies over the course of your life?
- If current trends continue what will the population of the UK be in ten years' time? Do you think it is reasonable to assume that they will continue? Why?
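As an illustration of the kind of reasoning these questions invite, here is a minimal Python sketch for the amoeba question above. The doubling time and starting size come from the question itself; the 9 cm dish diameter is an assumed value, and different (reasonable) assumptions will give somewhat different answers.

```python
import math

doubling_time_min = 24          # from the question
start_area_mm2 = 1.0            # 1 mm x 1 mm sample
dish_diameter_mm = 90           # assumption: a 9 cm Petri dish

dish_area_mm2 = math.pi * (dish_diameter_mm / 2) ** 2
doublings = math.log2(dish_area_mm2 / start_area_mm2)
hours = doublings * doubling_time_min / 60

print(f"Dish area is roughly {dish_area_mm2:.0f} mm^2")
print(f"About {doublings:.1f} doublings are needed, i.e. roughly {hours:.1f} hours")

# Starting from 0.1 mm^2 instead of 1 mm^2 adds only log2(10) (about 3.3) doublings,
# a bit over an hour, which is why the initial size matters so little here.
```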
Extension: Try to improve your estimates by giving a range of numbers between which you know the true answer lies. |
Global mean temperatures have been flat for 15 years despite an increase in heat-trapping greenhouse gases, and new research by scientists at Scripps Institution of Oceanography found that cooling in the eastern Pacific Ocean is behind the recent hiatus in global warming.
Partially funded by NOAA’s Climate Program Office, Scripps climate scientists Yu Kosaka and Shang-Ping Xie used innovative computer models to simulate regional patterns of climate anomalies, enabling them to see global warming in greater spatial detail. The models revealed where warming was most intense and where on the planet there had been either less warming or even cooling trends.
“Climate models consider anthropogenic forcings like greenhouse gases and tiny atmospheric particles known as aerosols, but they cannot study a specific climate event like the current hiatus,” Kosaka said in prepared remarks. “We devised a new method for climate models to take equatorial Pacific Ocean temperatures as an additional input. Then amazingly our model can simulate the hiatus well.”
Simulated temperature trend patterns (right) created by climate modelers at Scripps Institution of Oceanography, UC San Diego, showed strong agreement with observed boreal summer temperatures (left) for 2002-2012. JJA stands for June-July-August. Image courtesy of Nature
“Specifically the model reproduced the seasonal variation of the hiatus, including a slight cooling trend in global temperature during northern winter season,” said Xie, also in a statement. “In summer, the equatorial Pacific’s grip on the Northern Hemisphere loosens, and the increased greenhouse gases continue to warm temperatures, causing record heat waves and unprecedented Arctic sea ice retreat.”
According to the scientists, when the natural climate cycle that governs ocean cooling reverses and the Pacific waters begin warming again, the increase in global temperatures will resume “with vigor.” The study authors said it is not known if the current cooling phase will last as long as the last one, which lasted from roughly 1940 to the early 1970s, and added that predicting equatorial Pacific conditions more than a year in advance is “beyond the reach of current science.”
“That speaks to the challenge in predicting climate for the next few years,” said Xie. “We don’t know precisely when we’re going to come out of [the hiatus] but we know that over the timescale of several decades, climate will continue to warm as we pump more greenhouse gases into the atmosphere.”
“These compelling new results provide a powerful illustration of how the remote eastern tropical Pacific guides the behavior of the global ocean-atmosphere system, in this case exhibiting a discernible influence on the recent hiatus in global warming,” Dan Barrie, program manager at CPO, said in a statement.
The study, “Recent global-warming hiatus tied to equatorial Pacific surface cooling,” was published online in the journal Nature on Aug. 28. Along with CPO, the National Science Foundation and the National Basic Research Program of China supported the research.
To learn more, visit: http://scripps.ucsd.edu/news/13251 |
History of Banking in Iran
From: "Tejarat" (Trade), The Internal Publication of Bank of Tejarat
Before a bank in its present form was established in Iran, banking operations had been carried out in traditional form, or in other words in the form of money changing. Simultaneous with promotion of trade and business in the country, more people chose money changing as their occupation. Exchanges of coins and hard currencies were also common in Iran.
Before the advent of the Achaemenid Dynasty, banking operations had been carried out by temples and princes and seldom had ordinary people been engaged in this occupation.
During the Achaemenid era, trade boomed and subsequently banking operation expanded to an extent that Iranians managed to learn the banking method from the people of Babylon.
Following a boost in trade and use of bank notes and coins in trade during the Parthian and Sassanian eras, exchange of coins and hard currencies began in the country.
Some people also managed to specialize in determining the purity of coins. Bank notes and gold coins were first used in the country following the conquest of Lydia by the Achaemenid king Darius the Great in 516 B.C. At that time, a gold coin called the Daric was minted as the Iranian currency.
During the Parthian and Sassanid eras, both Iranian and foreign coins were used in trade in the country. However, with the advent of Islam in Iran, money changing and the use of bank notes and coins in trade stagnated because the new religion forbade the charging of interest in dealings.
In the course of Mongol rule over Iran, a bank note which was an imitation of Chinese bank notes was put into circulation. These bank notes, called Chav, bore the picture and name of Keikhatu. On one side of the bank notes there was the following sentence: "Anybody who does not accept this bank note will be punished along with his wife and children." The face value of the bank notes ranged from half a dirham to 10 dirhams. Besides Chav, other bank notes were used for a certain period of time in other Iranian cities and then went out of circulation. These bank notes were called `Shahr-Rava', which meant something that was in use in cities.
Before the printing of the first bank notes by Bank Shahanshahi (Imperial Bank), a kind of credit note called Bijak had been issued by money dealers. It was in fact a receipt for a sum of money taken by a money dealer from the owner of the Bijak. The credibility of a Bijak depended on the creditworthiness of the money dealer who had issued it.
As mentioned before, money changing fell out of favor with the advent of Islam, under which usury is strictly forbidden. At that time, only a few persons with weak religious faith continued their occupation as money dealers. It was these same persons who promoted usury even during the post-Islamic era, offering various excuses to justify their unlawful act.
With a boost in trade during the rule of the Safavid Dynasty, particularly during the reign of Shah Abbas the Great, money changing picked up again, and wealthy money dealers started their international activities by opening accounts in foreign banks. Major centers for money changing at that time were Tabriz, Mashhad, Isfahan, Shiraz and Boushehr.
Money changing continued until the establishment of the New East Bank in 1850. With the establishment of the bank, money changing effectively came to a standstill. The New East Bank was in fact the first banking institute in its present form established in Iran. It laid the foundation of banking operations in the country. It was a British bank whose headquarters was in London. The bank was established by the British without receiving any concession from the Iranian government.
The bank opened its branches in the cities of Tabriz, Rasht, Mashhad, Esfahan, Shiraz and Boushehr. Of course, at that time, foreigners were free to engage in economic and trade activities in the country without any limitations. For the first time, the New East Bank allowed individuals to open accounts, deposit their money with the bank and draw checks. It was at this time that people began to draw checks in their dealings.
In order to compete with money dealers, the bank paid interest on the fixed deposits and current accounts of its clients. The head office of the bank in Tehran issued five-`qeran' bank notes in the form of drafts. These drafts were used by people in their everyday dealings and could be exchanged for silver coins when presented at the bank. According to a concession granted by the Iranian government to Baron Julius De Reuter in 1885, Bank Shahanshahi (Imperial Bank) was established. This bank purchased the properties and assets of the New East Bank, thus putting an end to the banking operations of the former.
The activities of Bank Shahanshahi ranged from trade transactions and printing bank notes to serving as the treasurer of the Iranian government at home and abroad, in return for a fee.
In return for receiving this concession, Reuter was obliged to pay the Iranian government six percent of the annual net income of the bank, provided that the sum was not less than 4,000 pounds, plus 16 percent of the income from other concessions.
The legal center of the bank was in London and it was subject to the British laws but its activities were centered in Tehran.
In 1930, the right of printing bank notes was purchased from Bank Shahanshahi for a sum of 200,000 pounds and ceded to Bank Melli of Iran.
Bank Shahanshahi continued its activities until 1948, when its name was changed to the British Bank of Iran and the Middle East. The activities of the bank continued under that name until 1952.
In 1856, a Russian national by the name of Jacquet Polyakov received a concession from the then government of Iran for the establishment of Bank Esteqrazi for 75 years. Besides banking and mortgage operations, the bank had the exclusive right of public auction. In 1898 the Tsarist government of Russia bought all shares of the bank for its political ends. Under a contract signed with Iran, the bank was transferred to the Iranian government in 1920. The bank continued its activities under the name of Bank Iran until 1933, when it was incorporated into Bank Keshavarzi (the Agriculture Bank).
Bank Sepah was the first bank to be established with Iranian capital, in 1925, under the name of Bank Pahlavi Qoshun, in order to handle the financial affairs of military personnel and set up their retirement fund. The capital of the bank was 388,395 tomans (3.88 million rials).
With Bank Sepah opening branches in major Iranian cities, the bank began carrying out financial operations such as opening current accounts and transferring money across the country. The Iran-Russia Bank was formed by the government of the former Soviet Union in 1926 with the aim of facilitating trade exchanges between the two countries.
The headquarters of the bank was in Tehran, with some branches inaugurated in the northern parts of the country. The bank dealt with the financial affairs of institutes affiliated to the government of the former Soviet Union and with trade exchanges between the two countries. The activities of this bank, which were subject to Iranian banking regulations, continued until 1979. In that year, this bank, along with 27 other state-owned or private banks, was nationalized under a decision approved by the Revolutionary Council of the Islamic Republic of Iran.
The proposal to establish a national Iranian bank was first offered by a big money dealer to Qajar king Naser-o-Din Shah before the Constitutional Revolution. But the Qajar king did not pay much attention to the proposal. However, with the establishment of constitutional rule in the country, the idea of setting up a national Iranian bank in order to reduce political and economic influence of foreigners gained strength and at last in December 1906 the establishment of the bank was announced and its articles of association compiled.
In April 1927, the Iranian Parliament gave final approval to the law allowing the establishment of Bank Melli of Iran. But, due to problems arising from preparing a 150 million rial capital needed by the bank, the Cabinet ministers and the parliament's financial commission approved the articles of association of the bank in the spring of 1928. The bank was established with a primary capital of 20 million rials, 40 percent of which was provided by the government. The bank was formally inaugurated in September 1928.
In addition to its trade activities, Bank Melli also carried out central banking operations (acting as the treasurer of the government, printing bank notes, enforcing monetary and financial policies and so on) until the Central Bank of Iran was established in 1960. The duties of the CBI included making transactions on behalf of the government, controlling trade banks, determining the supply of money, foreign exchange protective measures (determining the value of hard currencies against the rial) and so on.
In June 1979, Iranian banks were nationalized, and banking regulations changed with the approval of the Islamic (interest-free) banking law; the role of banks in facilitating trade deals, rendering services to clients, collecting deposits, and offering credit to applicants on the basis of the CBI's policies was strengthened. |
A BRIEF HISTORY OF TURKIC LANGUAGES
The Turkic languages are spoken over a large geographical area in Europe and Asia, in the Azeri, Türkmen, Tartar, Uzbek, Baskurti, Nogay, Kyrgyz, Kazakh, Yakuti, Cuvas and other dialects. Turkish belongs to the Altaic branch of the Ural-Altaic family of languages, and thus is closely related to Mongolian, Manchu-Tungus, Korean, and perhaps Japanese. Some scholars have maintained that these resemblances are not fundamental but rather the result of borrowings; however, comparative Altaistic studies in recent years demonstrate that the languages listed here all go back to a common Ur-Altaic.
Turkish is a very ancient language going back 5500 to 8500 years. It has a phonetic, morphological and syntactic structure, and at the same time it possesses a rich vocabulary. The fundamental features, which distinguish the Ural-Altaic languages from the Indo-European, are as follows:
1. Vowel harmony, a feature of all Ural-Altaic tongues.
2. The absence of gender.
4. Adjectives precede nouns.
5. Verbs come at the end of the sentence.
The oldest written records are found upon stone monuments in Central Asia, in the Orhon, Yenisey and Talas regions within the boundaries of present-day Mongolia. These were erected to Bilge Kaghan (735), Kültigin (732), and the vizier Tonyukuk (724-726). These monuments document the social and political life of the Gokturk Dynasty.
After the waning of the Gokturk state, the Uighurs produced many written texts that are among the most important source works for the Turkish language. The Uighurs abandoned shamanism (the original Turkish religion) in favor of Buddhism, Manichaeanism and Brahmanism, and translated pious and philosophical works into Turkish. Examples are Altun Yaruk, Mautrisimit, Sekiz Yükmek and Huastunift; these are collected in Turkische Turfan-Texte. The Gokturk inscriptions, together with the Uighur writings, are in a language scholars call Old Turkish. This term refers to the Turkish spoken, prior to the conversion to Islam, on the steppes of Mongolia and in the Tarim basin.
A sample of Gokturk Inscriptions, commissioned by Gokturk Khans. One of several in Mongolia, near river Orkhun, dated 732-735. Example statement (from Bilge Khan): "He (Sky God or "Gok Tanri") is the one who sat me on the throne so that the name of the Turkish Nation would live forever."
The Turkish that developed in Anatolia and the Balkans in the times of the Seljuks and Ottomans is documented in several literary works prior to the 13th century. The men of letters of the time were, notably, Sultan Veled, the son of Mevlana Celaleddin-i Rumi; Ahmed Fakih; Seyyad Hamza; Yunus Emre, a prominent thinker of the time; and the famed poet Gulsehri. This Turkish falls into the southwestern dialect group of the Western Turkish language family and into the Oguz-Türkmen group of dialects. When the Turkish spoken in Turkey is considered in a historical context, it can be classified according to three distinct periods:
1. Old Anatolian Turkish (old Ottoman - between the 13th and the 15th centuries)
2. Ottoman Turkish (from the 16th to the 19th century)
3. 20th century Turkish
The Turkish Language up to the 16th Century
With the spread of Islam among the Turks from the 10th century onward, the Turkish language came under heavy influence of Arabic and Persian cultures. The "Divanü-Lügati't-Türk" (1072), the dictionary edited by Kasgarli Mahmut to assist Arabs to learn Turkish, was written in Arabic. In the following century, Edip Ahmet Mahmut Yükneri wrote his book "Atabetü'l-Hakayik", in Eastern Turkish, but the title was in Arabic. All these are indications of the strong influence of the new religion and culture on the Turks and the Turkish language. In spite of the heavy influence of Islam, in texts written in Anatolian Turkish the number of words of foreign origin is minimal. The most important reason for this is that during the period mentioned, effective measures were taken to minimize the influence of other cultures. For example, during the Karahanlilar period there was significant resistance of Turkish against the Arabic and Persian languages. The first masterpiece of the Muslim Turks, "Kutadgu Bilig" by Yusuf Has Hacib, was written in Turkish in 1069. Ali Nevai of the Çagatay Turks defended the superiority of Turkish from various points of view vis-à-vis Persian in his book "Muhakemetül-Lugatein", written in 1498.
During the time of the Anatolian Seljuk’s and Karamanogullari, efforts were made resulting in the acceptance of Turkish as the official language and in the publication of a Turkish dictionary, "Divini Turki", by Sultan Veled (1277). Ahmet Fakih, Seyyat Hamza and Yunus Emre adopted the same attitude in their use of ancient Anatolian Turkish, which was in use till 1299. Moreover, after the emergence of the Ottoman Empire, Sultan Orhan promulgated the first official document of the State, the "Mülkname", in Turkish. In the 14th century, Ahmedi and Kaygusuz Abdal, in the 15th century Süleyman Çelebi and Haci Bayram and in the 16th century Sultan Abdal and Köroglu were the leading poets of their time, pioneering the literary use of Turkish. In 1530, Kadri Efendi of Bergama published the first study of Turkish grammar, "Müyessiretül-Ulum".
The outstanding characteristic in the evolution of the written language during these periods was that terminology of foreign origin was used alongside the indigenous. Furthermore, during the 14th and 15th centuries translations were made particularly in the fields of medicine, botany, astronomy, mathematics and Islamic studies, which promoted the introduction of a great number of scientific terms of foreign origin into written Turkish, either in their authentic form or with Turkish transcriptions. Scientific treatises made use of both written and vernacular Turkish, but the scientific terms were generally of foreign origin, particularly Arabic.
The Evolution of Turkish since the 16th Century
The mixing of Turkish with foreign words in poetry and science did not last forever. Particularly after the 16th century foreign terms dominated written texts, in fact, some Turkish words disappeared altogether from the written language. In the field of literature, a great passion for creating art work of high quality persuaded the ruling elite to attribute higher value to literary works containing a high proportion of Arabic and Persian vocabulary, which resulted in the domination of foreign elements over Turkish. This development was at its extreme in the literary works originating in the Ottoman court. This trend of royal literature eventually had its impact on folk literature, and folk poets also used numerous foreign words and phrases. The extensive use of Arabic and Persian in science and literature not only influenced the spoken language in the palace and its surroundings, but as time went by, it also persuaded the Ottoman intelligentsia to adopt and utilize a form of palace language heavily reliant on foreign elements. As a result, there came into being two different types of language. One in which foreign elements dominated, and the second was the spoken Turkish used by the public.
From the 16th to the middle of the 19th century, the Turkish used in science and literature was supplemented and enriched by the inclusion of foreign items under the influence of foreign cultures. However, since there was no systematic effort to limit the inclusion of foreign words in the language, too many began to appear. In the mid-19th century, Ottoman Reformation (Tanzimat) enabled a new understanding and approach to linguistic issues to emerge, as in many other matters of social nature. The Turkish community, which had been under the influence of Eastern culture, was exposed to the cultural environment of the West. As a result, ideological developments such as the outcome of reformation and nationalism in the West, began to influence the Turkish community, and thus important changes came into being in the cultural and ideological life of the country.
The most significant characteristic with respect to the Turkish language was the tendency to eliminate foreign vocabulary from Turkish. In the years of the reformation, the number of newspapers, magazines and periodicals increased and accordingly the need to purify the language became apparent. The writings of Namik Kemal, Ali Suavi, Ziya Pasa, Ahmet Mithat Efendi and Semsettin Sami, which appeared in various newspapers, tackled the problem of simplification. Efforts aimed at "Turkification" of the language by scholars like Ziya Gökalp became even more intensive at the beginning of the 20th century. Furthermore, during the reform period of 1839, emphasis was on theoretical linguistics, whereas during the second constitutional period it was on the implementation and use of the new trend. Consequently new linguists published successful examples of the purified language in the periodical "Genç Kalemler" (Young Writers).
The Republican Era and Language Reform
With the proclamation of the Republic in 1923 and after the process of national integration in the 1923-1928 period, the subject of adopting a new alphabet became an issue of utmost importance. Mustafa Kemal Atatürk had the Latin alphabet adapted to the Turkish vowel system, believing that to reach the level of contemporary civilization, it was essential to benefit from western culture. The creation of the Turkish Language Society in 1932 was another milestone in the effort to reform the language. The studies of the society, later renamed the Turkish Linguistic Association, concentrated on making use again of authentic Turkish words discovered in linguistic surveys and research and bore fruitful results.
At present, in conformity with the relevant provision of the 1982 Constitution, the Turkish Language Association continues to function within the organizational framework of the Atatürk High Institution of Culture, Language and History. The essential outcome of the developments of the last 50-60 years is that whereas before 1932 the use of authentic Turkish words in written texts was 35-40 percent, this figure has risen to 75-80 percent in recent years. This is concrete proof that Atatürk's language revolution gained the full support of the public.
Reference: Ministry of Foreign Affairs/The Republic of Turkey |
Although some women were now working outside the home, spinning was still a common pastime and the spinning wheel was a common tool in most homes; the yarn, however, was now used for knitting or other sewing projects rather than for weaving.
The spinning wheel was a natural progression from the hand spindle. It was realized that the hand spindle could be held horizontally in a frame and turned, not by twisting it with the fingers, but by a wheel-driven belt. This spinning wheel, or walking wheel, consists of a low table above which a drive wheel is mounted at one end and a spinning mechanism at the other end. The wheel was turned by hand and drove the spindle mechanism by means of a drive belt. The spindle was mounted horizontally so that it could be rotated by the drive belt. The distaff, carrying the mass of fiber, was held in the left hand of the spinner, and the wheel was slowly turned with the right hand. Holding the fiber at an angle to the spindle produced the necessary twist. |
The Giant Tortoise practices internal fertilization. Between the months of January and August, the male begins sniffing the air for a female's scent. After he has found a female, he chases her down and begins courtship with intimidation. He rams her with the front of his shell and nips at her exposed legs until she draws them in, immobilizing her. He then mates with her. Nesting occurs at different times, but usually between June and December. The female travels to dry, sunny lowlands where the eggs receive adequate warmth for incubation. She lays an average of 10 eggs in a nest, which she buries under the surface with her strong back legs. Incubation time for different clutches ranges from three to eight months, with the longer periods most likely related to cooler weather. When the eggs hatch, the baby tortoises are forced to fend for themselves. Most die in the first ten years of life. |
How Conventional Energy Harms the Planet
The problem is how that electricity is generated.
Most utility companies in the United States use coal or natural gas to generate electricity at their power plants – both are dirty, non-renewable fossil fuels. Coal is perhaps the dirtiest form of fossil fuel energy because it emits tons of pollution each year, including particulate matter and mercury, both of which contribute to a wide array of human health problems.
An even greater concern is the emission of greenhouse gases – the primary cause of climate change.
To measure the impact of these greenhouse gas emissions, we calculate a carbon footprint, which is the overall greenhouse gases emitted by a particular building or process, expressed in carbon dioxide (CO2) equivalents. The following chart illustrates the carbon footprint of utilities in each region of America.
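Since the chart itself is not reproduced here, the arithmetic behind a footprint figure is worth sketching. The following Python fragment is an illustration only, not Greenzu's or any utility's actual methodology; the usage numbers and emission factors are made-up placeholders, whereas real calculations use published factors for the specific regional grid and fuel.

```python
# Minimal carbon-footprint sketch: annual CO2-equivalent emissions for one building.
# All numbers below are illustrative placeholders, not real regional data.

annual_usage = {
    "grid_electricity_kwh": 250_000,   # hypothetical electricity use
    "natural_gas_therms": 4_000,       # hypothetical on-site gas use
}

emission_factors_kg_per_unit = {       # kg CO2e per unit of each activity
    "grid_electricity_kwh": 0.5,
    "natural_gas_therms": 5.3,
}

footprint_kg = sum(
    amount * emission_factors_kg_per_unit[activity]
    for activity, amount in annual_usage.items()
)
print(f"Estimated footprint: {footprint_kg / 1000:.1f} metric tons CO2e per year")
```

The footprint is simply each activity's annual quantity multiplied by an emission factor and summed, which is why the regional fuel mix shown in such charts matters so much.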
Solar Is THE Green Alternative
Unlike traditional power generation sources, solar technologies produce electricity using a renewable source – the sun – so there’s no limit to how much energy can be gained by tapping into the sun.
Solar systems also generate electricity without creating noise or emitting pollutants such as greenhouse gases, smog, acid rain, or water pollution. Even when the emissions related to solar cell manufacturing are counted, solar panels produce less than 15% of the CO2 emitted from a conventional coal-fired power plant.
“Go Green” At No Cost
A Greenzu Power Purchase Agreement (PPA) allows clients to make an immediate environmental impact, at no cost. We make helping the planet “pencil out”. Typical clients keep 746 tons of greenhouse gases out of the atmosphere, which is the equivalent of preserving 147 acres of forest and taking 232 tons of waste out of our landfills.
Discover how a commercial solar system through a Greenzu PPA could be your ticket to no-hassle renewable energy today! |
It is generally believed that the first Christmas tree was of German origin, dating from the time of St. Boniface, English missionary to Germany in the 8th century. He replaced the sacrifices to the Norse god Odin’s sacred oak — some say it was Thor’s Thunder Oak — with a fir tree adorned in tribute to the Christ child. The legend is told that Boniface found a group of “pagans” preparing to sacrifice a boy near an oak tree near Lower Hesse, Germany. He cut down the oak tree with a single stroke of his ax and stopped the sacrifice. A small fir tree sprang up in place of the oak. He told the pagans that this was the “tree of life” and stood for Christ.
A legend began to circulate in the early Middle Ages that when Jesus was born in the dead of winter, all the trees throughout the world shook off their ice and snow to produce new shoots of green. The medieval Church would decorate outdoor fir trees — known as “paradise trees” — with apples on Christmas Eve, which they called “Adam and Eve Day” and celebrated with a play.
There are records from Renaissance times of trees being used as Christian symbols, first in the Latvian capital of Riga in 1510. The story goes that the ceremony was attended by men wearing black hats in front of the House of Blackheads in the Town Hall Square, who, following the ceremony, burnt the tree. But whether it was for Christmas or Ash Wednesday is still debated. I’ve stood in that very square myself in the winter, surrounded by snow.
Accounts persist that Martin Luther introduced the tree lighted with candles in the mid 16th century in Wittenberg, Germany. He often wrote of Advent and Christmas. One of his students wrote of Luther saying:
For this is indeed the greatest gift, which far exceeds all else that God has created. Yet we believe so sluggishly, even though the angels proclaim and preach and sing, and their lovely song sums up the whole Christian faith, for “Glory to God in the highest” is the very heart of worship.
Returning to his home after a walk one winter night, the story goes; Luther tried unsuccessfully to describe to his family the beauty of the starry night glittering through the trees. Instead, he went out and cut down a small fir tree and put lighted candles upon it.
In a manuscript dated 1605 a merchant in Strasbourg, Germany (at that time) wrote that at Christmas they set up fir trees in the parlors and “hang thereon roses cut out of paper of many colors, apples, wafers, spangle-gold and sugar…” Though the selling of Christmas trees is mentioned back to the mid-1500s in Strasbourg, the custom of decorating the trees may have developed from the medieval Paradise Play. This play was a favorite during the Advent season because it ended with the promise of a Savior. The action in the play centered around a fir tree hung with apples.
The earliest date in England for a Christmas tree is 1800, when Queen Charlotte, the German-born wife of George III, had one set up at Queen’s Lodge, Windsor, for a party she held on Christmas Day for the children of the leading families in Windsor. Her biographer Dr. John Watkins describes the scene:
In the middle of the room stood an immense tub with a yew tree placed in it, from the branches of which hung bunches of sweetmeats, almonds, and raisins in papers, fruits and toys, most tastefully arranged, and the whole illuminated by small wax candles. After the company had walked around and admired the tree, each child obtained a portion of the sweets which it bore together with a toy and then all returned home, quite delighted.
The Christmas Tree was most popularized in England, however, by the German Prince Albert soon after his marriage to Queen Victoria. In 1841, he began the custom of decorating a large tree in Windsor Castle. In 1848, a print showing the Royal couple with their children was published in the “Illustrated London News.” Albert gave trees to Army barracks and imitation followed. From this time onwards, the popularity of decorated fir trees spread beyond Royal circles and throughout society. Even Charles Dickens referred to the Christmas tree as that “new German toy.” German immigrants brought the custom to the United States, and tree decorating is recorded back to 1747 in Bethlehem, Pennsylvania.
Many individuals and communities vie for the honor of having decorated the first Christmas tree in America. One interesting story tells of Hessian (German) soldiers who fought for George III in the Revolutionary War. As they were keeping Christmas in Trenton, New Jersey around a decorated tree, they left their posts unguarded. George Washington and his troops were hungry and freezing at Valley Forge, but they planned their attack with the knowledge that the Hessians would be celebrating and thus would not be as able to defend themselves.
Christmas trees really became quite popular in the United States following the invention of the electric light. In 1895, President Grover Cleveland decorated the tree at the White House with electric lights. This idea caught on and spread across the country.
How do you decorate your Christmas Tree?
Bill Petro, your friendly neighborhood historian |
In conjunction with a scientist from the University of Michigan, the Caltech team who originally coined the term Planet Nine in 2016 have written a new paper about its formation, and the subsequent layout of the outer solar system. Having set out the evidence for this proposed object in the paper (1), they note three possible scenarios for its formation:
1) The planet’s capture from the retinue of a passing star; or, alternatively, the capture of a free-floating interstellar planet
2) The planet’s semi-ejection from the inner solar system and subsequent gradual drift outwards
3) The planet’s formation in situ.
All three of these scenarios require certain conditions for them to work, which means that no single formation theory stands out as particularly probable. The capture and scattering models depend upon the intervention of outside bodies (passing stars or brown dwarfs, or objects in the Sun’s birth cluster). The in situ formation of a planet so far from the Sun implies that the Sun’s protoplanetary disk was significantly larger than generally accepted. The formation of Planet Nine in its calculated position thus remains problematic, based upon standard models of planetary and solar system formation (e.g. the Nice model). Further, whatever processes placed it in its proposed current position would have significantly affected the layout of the Kuiper belt within its overarching orbit. This factor is what the investigation described in this paper aims to resolve.
This paper then describes computer simulations of the early Kuiper belt, and how the shape and extent of the fledgling belt may have affected the complex interplay between it, Planet Nine, and the objects in the extended scattered disk (1). The research team modelled two distinct scenarios for the early Kuiper belt, each of which matches one or more formation scenarios for Planet Nine. The first is a ‘narrow’ disk, similar to that observed: The Kuiper disk appears to be truncated around 50AU, with objects found beyond this zone likely having been scattered outwards by processes which remain contentious. The second scenario is a ‘broad’ disk, where objects in the Kuiper belt would have routinely populated the space between Neptune and the proposed orbit of Planet Nine, hundreds of astronomical units out. This would match a formation scenario involving an extensive protoplanetary disk.
Brown dwarfs are notoriously hard to find. It’s not so bad when they are first born: They come into the Universe with a blast, shedding light and heat in an infantile display of vigour. But within just a few million years, they have burned their available nuclear fuels, and settle down to consume their leaner elemental pickings. Their visible light dims considerably with time to perhaps just a magenta shimmer. But they still produce heat, and the older they get, the more likely that a direct detection of a brown dwarf will have to be in the infra-red spectrum.
This doesn’t make them much easier to detect, though, because to catch these faint heat signatures in the night sky, you first need to have a cold night sky. A very cold night sky. Worse, water vapour in the atmosphere absorbs infra-red light along multiple stretches of the spectrum. The warmth and humidity of the Earth’s atmosphere heavily obscures infra-red searches, even in frigid climates, and so astronomers wishing to search in the infra-red either have to build IR telescopes atop desert mountains (like in Chile’s Atacama desert), or else resort to the use of space-based platforms. The downside of the latter is that the telescopes tend to lose liquid helium supplies rather quickly, shortening their lifespan considerably compared to space-based optical telescopes.
The first major sky search using a space telescope was IRAS, back in the 1980s. Then came Spitzer at the turn of the century, followed by Herschel, and then WISE about five years ago. Some infra-red telescopes conduct broad searches across the sky for heat traces, others zoom in on candidate objects for closer inspection. Each telescope exceeds the last in performance, sometimes by orders of magnitude, which means that faint objects that might have been missed by early searches stand more of a chance of being picked up in the newer searches.
The next big thing in infra-red astronomy is the James Webb Space Telescope (JWST), due for launch in Spring 2019. The JWST should provide the kind of observational power provided by the Hubble Space telescope – but this time in infra-red. The reason why astronomers want to view the universe in detail using infra-red wavelengths is that very distant objects are red-shifted to such a degree that their light tends to be found in the infra-red spectrum, generally outside Hubble’s operational parameters (1). Essentially, the JWST will be able to see deeper into space (and, therefore, look for objects sending their light to us from further back in time when the first stars and galaxies emerged).
It looks like it’ll be another long, lonely autumn for Dr Mike Brown on the summit of the Hawaiian dormant volcano Mauna Kea, searching for Planet Nine. He made use of the 8m Subaru telescope last year, and it looks like he’s back again this year for a second roll of the dice (unless he does all this by remote control from Pasadena?). I can only assume, given the time of the year, that the constellation of Orion remains high on their list of haystacks to search.
A recent article neatly sums up the current state of play with the hunt for Planet Nine (1), bringing together the various anomalies which, together, seem to indicate the presence of an undetected super-Earth some twenty times further away than Pluto (or thereabouts). Given how much I’ve written about this material already, it seems unnecessary to go over the same ground. I can only hope that this time, Dr Brown and his erstwhile colleague, Dr Batygin, strike lucky. They have their sceptical detractors, but the case they make for Planet Nine still seems pretty solid, even if the gloss has come off it a bit recently with the additional OSSOS extended scattered disk object discoveries (2). But there’s nothing on Dr Brown’s Twitterfeed to indicate what his plans are regarding a renewed search for Planet Nine.
Even if the Planet Nine article’s discussion about a new hunt for the celestial needle in the haystack is misplaced, it does make a valid point that super-Earths, if indeed that is what this version of Planet X turns out to be, are common enough as exo-planets, and weirdly absent in our own planetary backyard. So a discovery of such an object way beyond Neptune would satisfy the statisticians, as well as get the bubbly flowing at Caltech. Dr Brown did seem to think that this ‘season’ would be the one. We await with bated breath…
Meanwhile, the theoretical work around Planet Nine continues, with a new paper written by Konstantin Batygin and Alessandro Morbidelli (3) which sets out the underlying theory behind the 2016 computer simulations that support the existence of Planet Nine (4). Dr Morbidelli is an Italian astrophysicist, working in the south of France, who is a proponent of the Nice model for solar system evolution (named after the rather wonderful French city where he works). This model arises from a comparison between our solar system’s dynamics, and those of the many other planetary systems now known to us, many of which seem bizarre and chaotic in comparison to our own. Thus, the Nice model seeks to blend the kinds of dynamical fluctuations which might occur during the evolution of a star’s planetary system with both the outcomes witnessed in our own solar system, and the more extreme exoplanets observed elsewhere (5). It invokes significant changes in the positions of the major planets during the history of the solar system, for instance. These migrations have knock-on effects which then drive other disturbances in the status quo of the early solar system, leading to the variations witnessed both here and elsewhere. For instance, Dr Morbidelli lists one of the several factors which brought about the Nice model:
Last month, scientists working on the Outer Solar System Origins Survey (OSSOS) published a large dataset of new Kuiper Belt Objects, including several new extended scattered disk objects discovered way beyond the main belt (1). These four new distant objects seemed to have a more random set of properties when compared to the rather more neat array of objects which had previously constituted the Planet Nine cluster. This led to scepticism among the OSSOS scientific team that there was any real evidence for Planet Nine. Instead, they argued, the perceived patterns of these distant objects might be a function of observational bias (2).
Whilst reporting on these new discoveries and their potential implications, I predicted that the debate was about to hot up, bringing forth a new series of Planet X-related articles and papers (3). Indeed, leading outer solar system scientists were publishing related materials in quick succession (4,5), each finding new correlations and patterns which might indicate the presence of an unseen perturbing influence.
Now, Caltech’s Konstantin Batygin has published an article analysing the impact of the discovery of these new extended scattered disk objects on the potential for a Planet Nine body. The short conclusion he draws is that although the objects are, on the face of it, randomly distributed, their property set is largely consistent with Caltech’s original thesis (6). They are either anti-aligned to the purported Planet Nine body (as the original cluster is thought to be), or aligned with it in a meta-stable array.
It’s a year since Brown and Batygin proposed the existence of Planet Nine (1). Despite the fact that its discovery remains elusive, there have been a great many academic papers written on the subject, and no shortage of serious researchers underpinning the theoretical concepts supporting its existence. Many have sought evidence in the solar system which indirectly points to the perturbing influence of this mysterious world; others have provided data which have helped to constrain the parameters of its orbit (by effectively demonstrating where it could NOT be). Throughout 2016, I have been highlighting these developments on the Dark Star Blog.
At the close of 2016, two further papers were published about Planet Nine. The first of these delves more deeply into the possibility that Planet Nine (Brown’s new name for Planet X, which seems to have caught on among astronomers keen to distance this serious search from, well, the mythological planet Nibiru) has a resonance relationship with some of the objects beyond the Edgeworth-Kuiper Belt which it is perturbing. These kinds of resonance relationships are not unusual in planetary orbital dynamics, so such a suggestion is not that odd, even given the eccentricities of the bodies involved here. The new research, from the University of California, Santa Cruz, bolsters the case for this kind of pattern applying to Planet Nine’s orbit:
“We extend these investigations by exploring the suggestion of Malhotra et al. (2016) (2) that Planet Nine is in small integer ratio mean-motion resonances (MMRs) with several of the most distant KBOs. We show that the observed KBO semi-major axes present a set of commensurabilities with an unseen planet at ~654 AU (P~16,725 yr) that has a greater than 98% chance of stemming from a sequence of MMRs rather than from a random distribution.” (3)
Their randomised ‘Monte Carlo’ calculations provide a best fit with a planet of between 6 and 12 Earth masses, whose eccentric orbit is inclined to the ecliptic by about 30 degrees. They are unable to point to a specific area of the sky to search, but provide a broad-brush region which they favour as most probable. Dr Millholland has also helpfully provided a manipulable 3D figure of the cluster of extended scattered disk objects allegedly affected by the purported Planet Nine, alongside their extrapolated orbit for it (4).
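The arithmetic behind these commensurability arguments is straightforward to reproduce. The Python sketch below uses Kepler's third law (period in years equals semi-major axis in AU raised to the power 3/2) to turn the proposed ~654 AU orbit into a period, then checks how close a few semi-major axes come to small-integer period ratios with it. The KBO values used are placeholders for illustration, not the actual objects analysed in the paper.

```python
from fractions import Fraction

def period_years(a_au):
    """Kepler's third law for orbits around the Sun: P[yr] = a[AU] ** 1.5."""
    return a_au ** 1.5

p9_period = period_years(654)            # proposed Planet Nine semi-major axis (AU)
print(f"Planet Nine period ~ {p9_period:,.0f} years")   # ~16,700 yr, as quoted above

# Hypothetical distant-KBO semi-major axes (AU), placeholders for illustration.
kbo_semi_major_axes = [250, 330, 440]

for a in kbo_semi_major_axes:
    ratio = period_years(a) / p9_period
    nearest = Fraction(ratio).limit_denominator(10)   # closest small-integer ratio
    error_pct = abs(ratio - float(nearest)) / ratio * 100
    print(f"a = {a} AU: period ratio {ratio:.3f} ~ {nearest} (off by {error_pct:.1f}%)")
```

The statistical question the paper addresses is whether such near-commensurabilities among the real objects occur more often than chance alone would produce, which is what the randomised Monte Carlo calculations mentioned above are designed to test.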
Dr Konstantin Batygin and Dr Mike Brown argue in their latest paper that the retrograde Kuiper Belt Objects Niku and Drac could have once been extended scattered disk objects (1). If you have been following these blogs during 2016, it will come as no surprise to you to hear that the influence which perturbed them into their anomalous current orbits was Planet Nine, the 10+Earth-mass planet lurking several hundred-plus Astronomical Units away, whose gravitational influence seems to be influencing the objects in and beyond the Kuiper Belt beyond Neptune (2):
“Adopting the same parameters for Planet Nine as those previously invoked to explain the clustering of distant Kuiper belt orbits in physical space, we carry out a series of numerical experiments which elucidate the physical process through which highly inclined Kuiper belt objects with semi-major axes smaller than a < 100 AU are generated. The identified dynamical pathway demonstrates that enigmatic members of the Kuiper belt such as Drac and Niku are derived from the extended scattered disk of the solar system.” (1)
Astronomers have announced the discovery of the third most distant object in the solar system, designated 2014 UZ224 (1). At a distance of 91.6AU, it is pipped to the title of ‘most distant solar system object’ by V774104 at 103AU (2), followed by the binary dwarf planet Eris at 96.2AU (3). The new scattered disk object lies approximately three times the distance of Pluto away, and may be over 1000km in diameter – potentially putting it into the dwarf planet range. Its 1,140-year orbit is notably eccentric, which is becoming more expected than otherwise with this category of trans-Neptunian object.
The find is a fortunate byproduct of the Dark Energy Survey, which seems to be rather good at picking out these dark, distant solar system objects. It was first spotted in 2014, with follow-up observations which have firmed up its orbital properties, but clearly delayed the announcement of its existence until now. These follow-up observations were rather scatty over time, and so the Dark Energy team, led by David Gerdes of the University of Michigan, developed software to establish its orbital properties.
Almost nine months after the release of their paper about the likely existence of Planet Nine (1), Drs Mike Brown and Konstantin Batygin have secured a sizeable chunk of valuable time on the Subaru telescope, based in Hawaii. If they’re right about where it is, and luck is on their side, then they may detect the elusive planet within weeks. Brown and Batygin think they’ve narrowed it down to roughly 2,000 square degrees of sky near Orion, which will take approximately 20 nights of telescope time to cover with the powerful 8.2-meter optical-infrared Subaru telescope at the summit of Maunakea, Hawaii, which is operated by the National Astronomical Observatory of Japan (2). Mike Brown is quite gung-ho about it, as can be gleaned from these extracts from a recent interview with the L.A. Times:
“We are on the telescope at the end of September for six nights. We need about 20 nights on the telescope to survey the region where we think we need to look. It’s pretty close to the constellation Orion…We’re waiting for another couple of weeks before it’s up high enough in the sky that we can start observing it and then we’re going to start systematically sweeping that area until we find it.
“It makes me think of the solar system differently than I did before. There’s the inner solar system, and now we are some of the only people in the world who consider everything from Neptune interior to be the inner solar system, which seems a little crazy.” (3)
Let’s hope they’re on the money. They have quite a lot to say about some of the correspondence that comes their way from members of what might loosely be termed ‘the Planet X community’.
The two scientists, Scott Sheppard and Chad Trujillo, who first recognised the clustering of objects thought to reveal the presence of ‘Planet Nine’ (1), have announced the discovery of three new objects. All three are highly distant objects (2). Two of them are extended scattered disk objects beyond the traditional Kuiper Belt, and fit reasonably well into the afore-mentioned cluster. The third, perhaps even more amazingly, is an object whose elongated orbit reaches way out into the distant Oort Cloud of comets, but which also never comes closer than the planet Neptune. So, this is the first outer Oort cloud object with a perihelion beyond Neptune, designated 2014 FE72.
Here’s how the announcement of these three new objects has been described in a press release from the Carnegie Institution for Science (3), where Scott Sheppard works:
A new Trans-Neptunian Object has been discovered whose quirkiness is breaking into new territory. This object, currently named ‘Niku’ after the Chinese adjective for ‘rebellious’, is seriously off-piste and heading in a highly inclined, retrograde motion around the Sun (1). Does this sound familiar? The retrograde motion is something which Zecharia Sitchin claimed for the rogue planet Nibiru. Niku…Nibiru. It sounds like the team who discovered this object, based at the Harvard-Smithsonian Center for Astrophysics (2), are having a bit of fun with us. Rest assured, this is not Nibiru, or anything like it. That said, something in the past interacted with this object to fling it into its strange orbital path, and at the moment the identity of that strongly perturbing influence is a definitive ‘unknown’.
Additionally, Niku’s discovery has prompted the astrophysics team to consider a new cluster of objects (high inclination TNOs and Centaurs) which appear to share the same orbital plane. This, in itself, is an unexpected and exciting development. Could the influencing factor be the mysterious Planet Nine (3)?
“…The new TNO appears to be part of another group orbiting in a highly inclined plane, so [Matthew] Holman’s team tested to see if their objects could also be attributed to the gravitational pull of Planet Nine. It turns out Niku is too close to the solar system to be within the suggested world’s sphere of influence, so there must be another explanation. The team also tried to see if an undiscovered dwarf planet, perhaps similar to Pluto, could supply an explanation, but didn’t have any luck. “We don’t know the answer,” says Holman.” (1) |
Learning from ancient droughts
A project involving not only the geoscience community but also biologists, archaeologists, historians, meteorologists and astrophysicists looked far into our past to examine how quickly ecosystems and civilizations can recover from catastrophic events. Working in unison, the researchers pieced together the events that brought pharaohs to their knees, and how Egypt bounced back.
Some 11,300 years ago, the Sahara was dotted with lakes. Giraffes, hippopotamuses, lions, elephants, zebra, gazelles, cattle and horses roamed across grasslands that may have received ten times more rainfall than the same area today.
By 9,000 years ago, pastoralists had colonized much of the Sahara. They prospered for another 3,000 years, until a shift in the monsoon belt to lower latitudes steered potential rains away from the continent, causing catastrophic droughts. The pastoralists took refuge in the Sahel, Saharan highlands and Nile Valley, where they gave rise to numerous African cultures, including that of Pharaonic Egypt.
Those who settled in the Nile Valley were forced to abandon nomadic pastoralism for lack of summer rains. Instead, they adopted an agricultural way of life. Small sedentary communities gradually coalesced into large social groups. About 5,200 years ago, the first pharaoh managed to unify Upper and Lower Egypt into a single state with Memphis as its capital. A long period of prosperity followed, characterized by bountiful Nile floods that produced abundant grain harvests. Successive pharaohs took advantage of this prosperity to launch ambitious pyramid-building programmes to give themselves a tomb worthy of their rank. The pharaohs asserted their authority over the population by claiming the power to intercede with the gods to ensure the Nile River flooded each year. This strategy worked perfectly – until about 4,200 years ago when the harvests failed for six long decades. Brought about by a drop in rainfall at the Ethiopian headwaters during a prolonged El Niño cycle, this drought was so long and so severe that the Nile could be crossed on foot. With the pharaoh powerless to prevent the resulting famine, regional governors seized control.
It took 100 years for Egypt to reunify and thereby bring to an end a century of political and social chaos known as the First Intermediate Period. The return to stability heralded the advent of the Middle Kingdom. This time, the pharaohs would not make the same mistake. To avoid suffering the fate of their improvident predecessors, they would invest massively in irrigation and grain storage.
This work was part of an International Geoscience Programme (IGCP) project on the Role of Holocene Environmental Catastrophes in Human History (IGCP project 490). The project focused on the inter-disciplinary investigation of Holocene geological catastrophes, which are of importance for civilizations and ecosystems. The objective was to examine how quickly ecosystems and civilizations are able to recover from catastrophic events. With the growing recognition that major natural events can have abrupt global impacts, the project provided a timely opportunity to assess the sensitivity of modern society to extreme natural threats.
First Form Greek: Flashcards
Based on the revolutionary First Form Latin series, First Form Greek is written for parents and teachers with or without a Greek background. Its goal is to present the grammar so logically and so systematically that anyone can learn it. At the same time, we have adapted the Latin First Form Series to account for the differences between Greek and Latin, such as the new alphabet, overlapping sounds, more variation within paradigms, and less regularity. First Form Greek overcomes these challenges with the addition of weekly vocabulary reviews, more frequent recitation, and an “expanded” dictionary entry for Greek verbs.
Recommended Prerequisites: At least two years of Latin grammar (ideally First and Second Form Latin) and the Greek Alphabet Book. However, students who are new to Greek may spend additional time in Lesson 1 and learn the Greek alphabet that way. Students who have completed these prerequisites (Second Form Latin and the Greek Alphabet Book) may begin First Form Greek as early as 6th grade.
The First Form Greek Flashcards are pre-cut and cover the vocabulary (Greek, lesson number, and any derivative on one side; English on the other), Greek sayings (Greek and lesson number on one side, English on the other), and grammar forms (cue word/ending and declension or tense name on one side; Greek forms and lesson number on the other).
After finishing First Form Greek, the student will have mastered:
- The six indicative active tenses of the omega verb
- Present tense of the to be verb
- Two noun declensions
- First & second declension adjectives
- Personal & demonstrative pronouns
- Approximately 130 vocabulary words |
This is the first in a series of posts to explain some common medical problems to patients in a hopefully easy-to-understand manner.
Otitis externa (or “swimmer’s ear”) is an inflammation of the outer portion of the ear canal. It is different from a middle ear infection (“otitis media,” the common “ear infection” that typically afflicts children) because otitis externa affects only the ear canal (see the red area in the picture below) while otitis media is a collection of pus behind the eardrum (see the yellow area in the picture below) that does not affect the ear canal.
Patients with otitis externa often have significant pain in the outer ear and may have swelling and/or drainage from their ear canal. One of the easiest ways to tell whether a patient has swimmer’s ear is the “tragal tug” — pulling outward on the cartilage of the ear (like your mother used to do when she was mad at you). Pulling on the ear creates traction on the skin within the ear canal. When the inflamed skin inside the ear canal is stretched, it hurts. Therefore, patients with swimmer’s ear will usually have significant pain when their ears are pulled. The pain from middle ear infections usually doesn’t get much worse with the tragal tug — unless otitis externa is also present.
Mild cases of otitis externa can sometimes be treated by putting Burow’s Solution into the ear canal a few times a day. When a patient is diagnosed with otitis externa, drops containing antibiotics and steroids are often prescribed. It is a good idea to check the ear drum for signs of perforation before putting medications into the ear. If some medications get past a perforated eardrum into the middle ear (the yellow area above), they can reach the inner ear and cause dizziness, ringing in the ears or even hearing loss. For example, Cortisporin Otic and other aminoglycoside-containing drops have the potential to damage the inner ear with prolonged use. Quinolone/steroid combinations are less likely to cause such damage.
The Ear Wick
If you put drops into the ear canal and then stand upright, then the drops all collect on the bottom of the ear canal. Eventually, they either get absorbed or they drain out of the ear canal. Additionally, if the ear canal is swollen shut or nearly swollen shut, the medications may not get to the affected areas in the ear. An ear wick solves both problems.
An ear wick is a piece of sponge (or sometimes a piece of cotton) that is inserted into the ear canal. Topical medications are then put onto the ear wick and then capillary action pulls the medication further into the ear canal. The wick helps to keep the medications in the ear and helps to hold the medication along all surfaces of the ear canal.
As the ear heals, the wick usually falls out on its own. If not, a medical professional can easily remove it. |
An underground railroad system designed for efficient urban and suburban passenger transport. The tunnels usually follow the lines of streets, for ease of construction by the cut-and-cover method, in which an arched tunnel is built in an open trench, covered with earth, and the street restored. Outlying parts of the system usually emerge to the surface.
The first subway was built in London (1860–63) by the cut-and-cover method; it used steam trains and was a success despite fumes. A three-mile section of London subway was built (1886–90) using a shield developed by J. H. Greathead: this is a large cylindrical steel tube forced forward through the clay by hydraulic jacks; the clay is removed and the tunnel walls built. Deep tunnels are thus possible, and there is no surface disturbance. This London "tube" was the first to use electrically-powered trains, which soon replaced steam trains everywhere. Elevators were provided for the deep stations, later mostly replaced by escalators. Many cities throughout the world followed London's lead, notably Paris (the Métro, begun 1898) and New York (begun 1900). With increasing road traffic in the second half of the 20th century, the value of subways was apparent, and many cities extended, improved, and automated their systems; some introduced quieter rubber-tired trains running on concrete guideways.
Large-scale destruction of magnetic fields in the Sun's atmosphere likely powers enormous solar explosions, according to a new observation from NASA's Ramaty High Energy Solar Spectroscopic Imager (RHESSI) spacecraft.
The explosions, called solar flares, are capable of releasing as much energy as a billion one-megaton nuclear bombs. The destruction of magnetic fields, called magnetic reconnection, was a leading theory to explain how solar flares could suddenly release so much energy, but there were other possibilities. The new picture from RHESSI confirms large-scale magnetic reconnection as the most likely scenario.
"Many observations gave hints that magnetic reconnection over large areas was responsible for solar flares, but the new pictures from RHESSI are the first that are really convincing," said Linhui Sui of the Catholic University of America, Washington, DC. "The hunt for the energy source of flares has been like a story where villagers suspect a dragon is on the loose because something roars overhead in the middle of the night, but only something resembling the tail of a dragon is ever seen. With RHESSI, we've now seen both ends of the dragon." Linhui is lead author of a paper on this research published October 20 in Astrophysical Journal Letters.
Magnetic reconnection can happen in the solar atmosphere because it is hot enough to separate electrons from atoms, producing a gas of electrically charged particles called plasma. Because plasma is electrically charged, magnetic fields and plasma tend to flow together. When magnetic fields and plasma are ejected from the Sun, the ends of the magnetic fields remain attached to the surface. As a result, the magnetic fields are stretched and forced together until they break under the stress, like a rubber band pulled too far, and reconnect -- snap to a new shape with less energy (Item 1).
The thin region where they reconnect is called the reconnection layer, and it is where oppositely directed magnetic fields come close enough to merge. Magnetic reconnection could power a solar flare by heating the Sun's atmosphere to tens of millions of degrees and accelerating electrically charged particles that comprise the plasma (electrons and ions) to almost the speed of light.
At such high temperatures, solar plasma will shine in X-rays, and RHESSI observed high-energy X-rays emitted by plasma heated to tens of millions of degrees in a flare on April 15, 2002. The hot, X-ray emitting plasma initially appeared as a blob on top of an arch of relatively cooler plasma protruding from the Sun's surface in the RHESSI images (Item 2, top row). The blob and arch structure is consistent with reconnection because the X-ray blob could be heated by reconnection and the part of the magnetic field that breaks and snaps back to the solar surface will assume an arch shape. (Magnetic fields are invisible, but RHESSI can see them indirectly. Since magnetic fields and plasma flow together, plasma can be steered by magnetic fields if the fields are strong enough. On the Sun, hot, glowing plasma flows along its invisible magnetic fields, making their shapes detectable by RHESSI.)
These structures have been seen before and hinted at reconnection, but the observations were not conclusive. However, as RHESSI made images of the 20-minute long flare, over the course of about 4 minutes during the most intense part of the flare, the X-ray emitting blob exhibited two characteristics consistent with large-scale magnetic reconnection.
First, the blob split in two (Item 2, middle row), with the top part ultimately rising away from the solar surface at a speed of about 700,000 miles per hour, or around 1.1 million km/hr (Item 2, bottom row). This is expected if extensive reconnection is occurring, because as the magnetic fields stretch, the reconnection layer also stretches, like taffy being pulled (Item 3). Plasma heated by reconnection squirts out of the top and bottom of the reconnection layer, forming the two X-ray blobs in the RHESSI pictures when the top and bottom are sufficiently far apart to be resolved as distinct areas.
Second, in both blobs, the area closest to the apparent reconnection layer was hottest, and the area furthest away was coolest, according to temperature measurements by RHESSI. This is also expected if reconnection is occurring, because as the magnetic fields break and reconnect, other magnetic fields nearby move in to the reconnection region and reconnect as well, since the overall, large-scale field continues to stretch. Thus, plasma is continuously heated and blasted out from the reconnection layer. The plasma closest to the reconnection area is the most recently expelled and therefore the hottest. Plasma further away was ejected earlier and had time to cool.
"This temperature gradient in the hot plasma was the clincher for me," said Dr. Gordon Holman, a Co-Investigator on RHESSI and co-author of the paper at NASA's Goddard Space Flight Center, Greenbelt, Md. "If some other process was powering the flare, the hot plasma would not appear like this."
"We estimate that 200 times the total energy consumed by humanity in the year 2000 was extracted from the magnetic field destroyed in this flare, using our RHESSI observations," said Holman.
The hunt for elusive neutrinos will soon get its largest and most powerful tool yet: the enormous KM3NeT telescope, currently under development by a consortium of 40 institutions from ten European countries. Once completed, KM3NeT will be the second-largest structure ever made by humans, after the Great Wall of China, and taller than the Burj Khalifa in Dubai – but submerged beneath 3,200 feet of ocean!
KM3NeT – so named because it will encompass a volume of several cubic kilometers – will be composed of lengths of cable holding optical modules on the ends of long arms. These modules will stare at the sea floor beneath the Mediterranean in an attempt to detect the impacts of neutrinos traveling down from deep space.
Successfully spotting neutrinos – subatomic particles that don’t interact with normal matter very much at all, nor have magnetic charges – will help researchers to determine which direction they originated from. That in turn will help them pinpoint distant sources of powerful radiation, like quasars and gamma-ray bursts. Only neutrinos could make it this far and this long after such events, since they can pass basically unimpeded across vast cosmic distances.
“The only high energy particles that can come from very distant sources are neutrinos,” said Giorgio Riccobene, a physicist and staff researcher at the National Institute for Nuclear Physics. “So by looking at them, we can probe the far and violent universe.”
In effect, by looking down beneath the sea KM3NeT will allow scientists to peer outward into the Universe, deep into space as well as far back in time.
The optical modules dispersed along the KM3NeT array will be able to identify the light given off by muons when neutrinos pass into the sea floor. The entire structure would have thousands of the modules (which resemble large versions of the hovering training spheres used by Luke Skywalker in Star Wars).
In addition to searching for neutrinos passing through Earth, KM3NeT will also look toward the galactic center and search for the presence of neutrinos there, which would help confirm the purported existence of dark matter.
Read more about the KM3NeT project here. |
Make a Tornado in a Jar!
The swirling winds of a tornado are called a vortex. In this experiment you will make a vortex that looks like a real tornado! While real tornadoes happen in air, the vortex you make in this activity is in water. Both air and water are fluids. That means that they move in similar ways.
What you will need:
Make it happen!
As you twist the jar, the water up against the glass is pulled along due to its friction against the glass walls. The fluid toward the center takes longer to get moving, but eventually both the glass jar and the fluid are spinning as you rotate the jar. When you stop rotating the jar, the fluid inside keeps spinning. A mini twister can be seen for just a few seconds while the outer fluid slows down and the inner fluid continues to spin rapidly. Try it again!
Click here to find out how real tornadoes form!
Try it! Make a more complex model of a tornado! |
Permaculture principles provide a set of universally applicable guidelines that can be used in designing sustainable systems.
These principles are inherent in any permaculture design, in any climate, and at any scale. They have been derived from the thoughtful observation of nature, and from earlier work by ecologists, landscape designers and environmental scientists.
The principles have recently been reviewed by David Holmgren (one of the co-originators of permaculture) in his book Permaculture: Principles and Pathways Beyond Sustainability. We have decided to use this new set as a way of presenting more in-depth information and examples. We make links and connection to previous principles and show how they combine to create a powerful new way to think about our interaction with the world.
The principles encompass those stated in Introduction to Permaculture, by Bill Mollison & Reny Mia Slay:
- Relative location.
- Each element performs many functions.
- Each important function is supported by many elements.
- Efficient energy planning: zone, sector and slope.
- Using biological resources.
- Cycling of energy, nutrients, resources.
- Small-scale intensive systems; including plant stacking and time stacking.
- Accelerating succession and evolution.
- Diversity; including guilds.
- Edge effects.
- Attitudinal principles: everything works both ways, and permaculture is information and imagination-intensive.
and those in Permaculture, a Designers' Manual, by Bill Mollison:
- Work with nature rather than against.
- The problem is the solution.
- Make the least change for the greatest possible effect.
- The yield of a system is theoretically unlimited (or only limited by the imagination and information of the designer).
- Everything gardens (or modifies its environment).
This last set of principles is often referred to as the philosophy behind permaculture.
You can find more resources and views on permaculture principles here. |
The image here of the Cosmic Microwave Background Radiation survey revealed the distribution of radiation and density of the universe when it was some 380,000 years old and a fraction of its present size.
The pattern confirmed what astrophysicists had predicted: that the background radiation in the universe was very smooth. The small but important differences in density across the entire universe are indicated by the colors. These differences represent quantum fluctuations at the time of the Big Bang.
The CMB proved that the universe began in an instantaneous expansion. The background radiation from that event spread evenly across the entire universe --much smaller than it is today-- and the radiation is still detectable today. It is traced to the time, 380,000 years after the Big Bang event, when photons were liberated as a result of a cooling universe. At that point the radiation pattern became "visible," although not in the optical range of light. The two surveys of the CMB (the second was the Wilkinson Microwave Anisotropy Probe, or WMAP) solved two major puzzles about the Big Bang. The first was whether the universe in fact emerged in a single super-hot burst. The second question was the source of irregular densities in the primordial universe that eventually became the areas in which stars and galaxies would form.
The Cosmic Microwave Background (CMB) reveals the "quantum seeding" of the primordial universe, the minute fluctuations of radiation that the Big Bang spewed into an initially very small universe. As the universe expanded the traces of the distributed radiation were retained, so that tiny areas of density appear within an overall even pattern of radiation. The WMAP image shows that distribution at the time when the universe first became transparent, some 380,000 years after the Big Bang event.
To scientists in astrophysics and related fields, the CMB image is a profound accomplishment. It confirms that the universe began in a single extremely hot flash. It shows the relationship between background radiation density differences and the subsequent development of galaxies in the areas of greater density. It is in those areas that hydrogen gas clouds massed that later formed the first galaxies and stars. |
(Phys.org) —The dunes of Titan tell cosmic tales. A Cornell senior and researchers have narrowed theories on why the hydrocarbon dunes – think plastic – on Saturn's largest moon are oriented in an unexpected direction, a solar system eccentricity that has puzzled space scientists.
Physics major George McDonald '14, who graduates May 25, attributes the oddball orientation of the dunes to long timescale changes in Saturn's and Titan's orbit around the sun, similar to the changes that cause ice ages on Earth.
On Earth, silica forms fine sand. On Titan, sandlike dunes form from hydrocarbon grain particulate – essentially a plastic version of Earth's sand. Planetary scientists expected the dunes to respond to easterly winds. Instead, they observed via images from NASA's Cassini mission to Saturn that the equatorial dunes appear to move in the "wrong" direction – from west to east.
"I studied whether changes in Titan's climate – due to orbital variations over a 45,000-year timescale – could affect the orientations of the dunes at the equator. The results suggest that they could," McDonald said. "This could help to explain why the current dune orientations don't seem to match what we'd expect, given the modern wind circulation found today."
McDonald presented this work, "Examining Effects of Orbital Forcing on Titan's Dune Orientations," at the Lunar and Planetary Science Conference in Houston in March.
Long before McDonald studied the wrong-way dune images from Cassini, scientists Ryan Ewing, assistant professor of earth science at Texas A&M, and Alex Hayes, Cornell assistant professor of astronomy, collaborated and theorized that the dunes were shaped by winds that change because of orbital forcing. Ewing and Hayes mentored McDonald in his research.
McDonald analyzed Cassini radar images, which use microwaves instead of light. The radar imager pierces the murky moon's atmosphere, unveiling its strikingly familiar geologic surface features.
Titan's geologic features possess down-to-Earth familiarity. The moon's thick atmosphere is in a perpetual state of organic smog, while wind and methane rain carve dunes, rivers, lakes and seas into its cold surface. NASA scientists believe Titan can provide insight into the processes that drive Earth's climate and surface.
Like a tethered child, Titan accompanies Saturn in orbit. The ringed planet's own 29.5-year orbit around the sun is slightly eccentric, as summer in Titan's southern hemisphere occurs when Saturn (and Titan) are closest to the sun. This makes southern summer warmer and faster, when compared to northern ones.
Like Earth, these orbital conditions change with time. Thirty-five thousand years ago, for example, Titan's northern summers were hotter. These variations – called Croll-Milankovitch cycles on this planet – drive Earth's ice ages.
McDonald completed an exhaustive analysis using climate models from additional collaborators to examine dune orientation for the past 45,000 years. On Earth, giant dunes can take thousands of years to reorient to changing wind conditions. The slow wind speeds on Titan suggest that timescales for its dunes are even longer.
"This has brought us to the point of believing that these long-term orbital changes could indeed be affecting the dunes," he said.
This fall, McDonald will pursue his doctoral degree in planetary science at the Georgia Institute of Technology.
This morning we looked at Chapter 3 in Count Like an Egyptian. This chapter discusses how to calculate areas of triangles and the area of a circle using Egyptian ideas of multiplication and division.
Our prior two blog posts about the book are here:
You can purchase the book here:
And, of course, a hat tip to Evelyn Lamb who pointed out this book to me about a month ago:
Since it had been a couple of weeks since we last looked at the book, we started with a quick review of Egyptian multiplication. Most of the ideas had stayed with the boys, which was actually pretty nice to see, but one little piece of the process got reversed in their mind. There’s more detail on this process in our first project from the book linked above.
To get going with the ideas in chapter 3 of the book, we spent a little bit of time talking through how to divide by 2. My younger son listed some procedures that he knows for dividing by 2 – long division, for example – and my older son showed how to reduce a complex division problem into pieces that you already knew how to do. This second approach is pretty similar to the approach discussed in the book:
One time you might find yourself dividing by two is when you are calculating the area of a triangle. We work through several examples of using Egyptian multiplication to calculate the area of a triangle:
The last part of the project was using Egyptian multiplication to find the area of a circle. The book claims that the Egyptians used the approximation π ≈ 3 1/8, so in order to calculate (or approximate, I guess) the area of a circle we need to learn how to divide by 8.
We talk through how to do that building off of dividing by 2 and then find an approximate value for the area of a circle with radius 10.
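For anyone who wants to play with these ideas away from the whiteboard, here is a minimal Python sketch of the doubling-and-halving arithmetic described above. It is not code from the book; the function names and the final circle-area check (radius 10, using the π ≈ 3 1/8 approximation mentioned above) are just illustrative choices:

```python
from fractions import Fraction

def egyptian_multiply(a, b):
    """Multiply a * b by repeated doubling: build a table of doublings of b
    (1*b, 2*b, 4*b, ...) and add up the rows whose multipliers sum to a."""
    total, power, double = 0, 1, b
    while power <= a:
        if a & power:          # this doubling row is needed
            total += double
        power <<= 1
        double += double       # doubling step
    return total

def divide_by_8(x):
    """Divide by 8 by halving three times, building off of dividing by 2."""
    half = Fraction(x)
    for _ in range(3):
        half /= 2
    return half

# Approximate area of a circle with radius 10 using pi ~ 3 1/8:
# area ~ 3 * r^2 + (r^2) / 8
r_squared = egyptian_multiply(10, 10)
area = egyptian_multiply(3, r_squared) + divide_by_8(r_squared)
print(area)   # 625/2, i.e. 312.5
```

The Fraction type keeps the halving steps exact, which mirrors how the Egyptian method works with unit fractions rather than decimals.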
The math history that we are learning in this book is really fun. What I really like about going through this book with kids, though, is all of the conversations about arithmetic help them build up their number sense. I’d definitely recommend this book to anyone looking for fun and different ways to talk about arithmetic with kids. |
Vital Results Standards for
Theme: Ballots, Polls, & Voting Booths
State Vital Results Standards to which this lesson relates:
1.5 Students draft, revise, edit, and critique written products so that final drafts are appropriate in terms of the following dimensions:
Purpose -- Intent is established and maintained within a given piece of writing.
Organization -- The writing demonstrates order and coherence.
Details -- The details contribute to development of ideas and information, evoke images, or otherwise elaborate on or clarify the content of the writing.
Voice or Tone -- An appropriate voice or tone is established and maintained.
1.6 Students’ independent writing demonstrates command of appropriate English conventions, including grammar, usage, and mechanics. This is evident when students:
1.6.a. Use clear sentences, correct syntax, and grade-appropriate mechanics so that what is written can be easily understood by the reader.
1.8 In written reports, students organize and convey information and ideas accurately and effectively. This is evident when students:
1.8.g. Establish an authoritative stance on a subject, and appropriately identify and address the reader's need to know;
1.8.h. Include appropriate facts and details, excluding extraneous and inappropriate information; and
1.8.i. Develop a controlling idea that conveys a perspective on the subject.
1.8.k. Organize text in a framework appropriate to purpose, audience, and content.
1.18 Students use computers, telecommunications, and other tools of technology to research, to gather information and ideas, and to represent information and ideas accurately and appropriately.
In persuasive writing, students judge, propose, and persuade. This is evident when students:
1.11.e. Take an authoritative stand on a topic;
1.11.f. Support the statement with sound reasoning; and
1.11.g. Use a range of strategies to elaborate and persuade.
Reasoning and Problem Solving
Problem Solving Process
2.2 Students use reasoning strategies, knowledge, and common sense to solve complex problems related to all fields of knowledge. This is evident when students:
2.2.aa. Seek information from reliable sources, including knowledge, observation, and trying things out;
2.2.cc. Consider, test, and justify more than one solution;
2.2.dd. Find meaning in patterns and connections (underlying concepts); and
2.2.aaa. Critically evaluate the validity and significance of sources and interpretations.
Types of Problems
2.3 Students solve problems of increasing complexity. This is evident when students:
2.3.aaa. Solve problems that require processing several pieces of information simultaneously;
2.3.bbb. Solve problems of increasing levels of abstraction, and that extend to diverse settings and situations; and
2.3.c. Solve problems that require the appropriate use of qualitative and/or quantitative data based on the problem.
2.4 Students devise and test ways of improving the effectiveness of a system. This is evident when students:
2.4.a. Evaluate the effectiveness of a system;
2.4.b. Identify possible improvements
Roles and Responsibilities
3.13 Students analyze their roles and responsibilities in their family, their school, and their community.
3.7 Students make informed decisions. This is evident when students:
3.7.c. Describe and explain their decisions based on evidence;
3.7.d. Recognize others' points of view, and assess their decisions from others' perspectives;
3.7.e. Analyze and consider alternative decisions; and
3.7.f. Differentiate between decisions based on fact and those based on opinion.
3.7.cc. Describe and explain their decisions based on evidence and logical argument.
Civic and Social Responsibility
4.2 Students participate in democratic processes. This is evident when students:
4.2.a. Work cooperatively and respectfully with people of various groups to set community goals and solve common problems.
Continuity and Change
4.5 Students understand continuity and change. This is evident when students:
4.5.aaa. Analyze personal, family, systemic, cultural, environmental, historical, and societal changes over time - both rapid, revolutionary changes and those that evolve more slowly.
4.6 Students demonstrate understanding of the relationship between their local environment and community heritage and how each shapes their lives. This is evident when students:
4.6.bbb. Evaluate and predict how current trends (e.g., environmental, economic, social, political, technological) will affect the future of their local community and environment.
Arts, Language, and Literature
Responding To Media
5.14 Students interpret and evaluate a variety of types of media, including audio, graphic images, film, television, video, and on-line resources. This is evident when students:
5.14.d. Make connections among various components of a media presentation (graphics, text, sound, movement, and data) and analyze how these components form a unified message;
5.14.e. Support judgments about what is seen and heard through additional research and the checking of multiple sources.
History and Social Sciences
Meaning of Citizenship
6.9 Students examine and debate the meaning of citizenship and act as citizens in a democratic society. This is evident when students:
6.9.b. Analyze and debate the problems of majority rule and the protection of minority rights as written in the U.S. Constitution. |
The largest stars in the universe are bright, cool stars known as red supergiants. If placed at the center of the solar system, the mightiest of these would engulf all the planets out to Jupiter and beyond.
The two brightest red supergiants that are visible from Earth appear on opposite sides of the sky. One, which is in view in winter, is Betelgeuse, in Orion, the hunter. The other, blazing like a ruby in the south tonight, is Antares, in Scorpius. Both are among the brightest stars in the night sky.
In mythology, the home constellations of these stars are rivals: Scorpius was the scorpion that stung and killed Orion. But the stars themselves appear to be near twins.
From the reddish-orange colors of the stars, astronomers have long known that they have the same surface temperature. But astronomers had thought that Betelgeuse was the weaker star.
However, recent research indicates that Betelgeuse is 640 light-years from Earth -- farther than earlier measurements suggested. So to appear as bright as it does in our sky, Betelgeuse must emit more light into space than had been thought.
When astronomers work out the numbers, they find that Betelgeuse and Antares each emit about 20,000 times more light than the Sun. So these two brilliant red rivals are twins -- shining on us from opposite sides of the sky.
Look for Antares fairly low in the south at nightfall, at the "heart" of the scorpion. It sets in the wee hours of the morning.
Script by Ken Croswell, Copyright 2010
Explore this topic to find out:
Developed with the requirements of the national curriculum in mind, children will explore and investigate:
- That human beings need food and drink to stay alive and grow healthily.
- That some foods are more nutritious than others.
- That our bodies need balanced and varied diets.
- That understanding food labelling can help us choose and prepare healthier options.
- A visual guide full of ideas for reference to support cross-curricular topic planning.
- Organised by ages 5-7 and 7-11.
- Detailed activity and lesson plans aligned to individual curriculum areas.
- Each curriculum focus includes links to download the relevant resources.
- Downloadable planning tools for you to use.
- Designed to be played in the classroom, these videos can be used to enhance learning for this chosen topic.
For more information, download our trail overview.
Our topic videos are designed to introduce further classroom discussion.
Take part in a Healthy Eating experience
Healthy Eating Trail
Support your topic with a Healthy Eating store trail. Children will gain lots of practical knowledge, such as how to avoid added sugars and how to make their own muesli.
Find your Healthy Eating topic resources here
Ways to eat Marmite
Marmite, B vitamins and the First World War
How are cans made?
What are spices & where do they come from?
Why do carrots grow in the UK?
Why are carrots good for us?
Root veg around the world |
1. The grammar to get is Smyth. It's very long and has more detail than you'll probably ever need, which makes it the ultimate reference grammar. For quick reference, something like the Oxford Greek Grammar will do, but it offers nowhere near the breadth of Smyth.
2. The biggest difference between the dialects is essentially spelling. Imagine that English speakers spelled every word exactly as they pronounced it. Now, imagine what the differences would be between British, American and Australian spelling – this is about what the difference between the Greek dialects is like. There are also a few inflectional endings that are a little different, but nothing that can't be learnt fairly easily.
The major dialects you will need to know are Attic (in which are written the tragic and comic plays, most philosophical works, and almost everything written after the fifth century), Doric (which playwrights use to write choral passages), Ionic (in which Herodotus wrote his historical work, and of which Homeric Greek is a variation), and Aeolic (in which a few poetic works are written).
New Testament Greek, aka Koine Greek, is a simplified version of Attic Greek. Athens was so prominent in the literary world that after the fifth century, all Greek was modeled after it. Learn Attic Greek, read some Plato and some Xenophon, and marvel at how much easier New Testament Greek is. The forms are simplified, and irregularities are smoothed out.
I learned Greek with Mastronarde, and I liked how thorough it was, but many of my colleagues disagree. I would recommend it or Hansen and Quinn, and I would recommend taking plenty of time to do each lesson in full, memorize all forms completely, and know as much vocabulary as you can cram in your head. If you already know Latin, you'll be surprised at how many more verbal forms Greek has, and at how much larger its vocabulary is. But it's a worthwhile pursuit! |
Electromagnetic waves are energy transported through space in the form of periodic disturbances of electric and magnetic fields. All electromagnetic waves travel through space at the same speed, c = 2.99792458 x 10^8 m/s, commonly known as the speed of light. An electromagnetic wave is characterized by a frequency and a wavelength. These two quantities are related to the speed of light by the equation:
speed of light = frequency x wavelength
The frequency (and hence, the wavelength) of an electromagnetic wave depends on its source. There is a wide range of frequencies encountered in our physical world, ranging from the low frequency of the electric waves generated by power transmission lines to the very high frequency of the gamma rays originating from atomic nuclei. This wide frequency range of electromagnetic waves constitutes the Electromagnetic Spectrum.
The Electromagnetic Spectrum
The electromagnetic spectrum can be divided into several wavelength (frequency) regions, among which only a narrow band from about 400 to 700 nm is visible to the human eyes. Note that there is no sharp boundary between these regions. The boundaries shown in the above figures are approximate and there are overlaps between two adjacent regions.
Wavelength units: 1 mm = 1000 µm; 1 µm = 1000 nm.
According to quantum physics, the energy of an electromagnetic wave is quantized, i.e. it can only exist in discrete amounts. The basic unit of energy for an electromagnetic wave is called a photon. The energy E of a photon is proportional to the wave frequency f,
E = h f
where the constant of proportionality h is Planck's constant,
h = 6.626 x 10^-34 J s.
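As a quick numerical check of the two relations above, here is a short Python sketch; the 540 THz example frequency is just an illustrative choice, roughly in the green part of the visible band:

```python
# Constants taken from the text above
c = 2.99792458e8      # speed of light, m/s
h = 6.626e-34         # Planck's constant, J s

def wavelength(frequency_hz):
    """speed of light = frequency x wavelength, rearranged for wavelength (m)."""
    return c / frequency_hz

def photon_energy(frequency_hz):
    """E = h f, the energy of a single photon in joules."""
    return h * frequency_hz

f = 540e12  # 540 THz, an example frequency in the visible range
print(f"wavelength: {wavelength(f) * 1e9:.0f} nm")   # ~555 nm, inside the 400-700 nm visible band
print(f"photon energy: {photon_energy(f):.2e} J")    # ~3.6e-19 J per photon
```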
The Soviet Union was the first totalitarian state to establish itself after World War One. In 1917, Vladimir Lenin seized power in the Russian Revolution, establishing a single-party dictatorship under the Bolsheviks. After suffering a series of strokes, Lenin died on January 21, 1924, with no clear path of succession. The obvious choice, to many, was Leon Trotsky, who had headed the Military Revolutionary Committee that had carried out the Bolshevik Revolution. He had been a high-ranking member of the party throughout Lenin's time in power, and was considered by many to be the Communist Party's foremost Marxist theorist, but was also considered aloof and cold by many party members.
Trotsky's main competition for power was Joseph Stalin. Stalin had been involved in the Communist Party since before the Revolution. He served under Lenin as commissar for nationalities, and in 1922 became general secretary of the party. Lenin supported Trotsky over Stalin as his successor, claiming Stalin was "too rude" to lead the government. However, Stalin's position as general secretary allowed him to manipulate the party structure and place his supporters in crucial positions throughout the party, ultimately ensuring his victory.
During the struggle for power an ideological rift began to open between Trotsky and Stalin. Trotsky advocated 'permanent world revolution,' claiming that the Soviet Union should strive continuously to encourage proletarian revolutions throughout the world. Stalin contrasted Trotsky's view with a 'socialism in one country' message, which stressed the consolidation of the communist regime within the Soviet Union, and concentration on domestic developments and improvements before looking to world revolution. This rift, combined with Stalin's rise to power as party leader, sealed Trotsky's fate. By 1927, Trotsky had lost his position on the Central Committee, and was expelled from the party. He fled to Turkey, and eventually to Mexico, where he was killed in 1940 by a Stalinist agent.
His main opposition gone, Stalin consolidated power, demonstrating his independence. In 1928 he abandoned Lenin's economic policy and installed a system of central planning, which dictated everything from where factories should be built to how farmers should plant their crops. He allocated natural resources to heavy industrial development, at the expense of consumer products, believing that heavy industry would be the foundation of a profitable state. Simultaneously, Stalin introduced a policy of collectivization, which created governmentally owned and operated farms in which peasants pooled their lands. The better-off peasant class, the kulaks, rebelled against collectivization. Stalin would accept no resistance, and initiated a reign of terror during 1929 and 1930, during which as many as 3 million were killed.
During the 1930s, Stalin sought to eliminate all barriers to his complete and total exercise of power. In 1933, he created the Central Purge Commission, which publicly investigated and tried members of the Communist Party for treason. In 1933 and 1934, 1,140,000 members were expelled from the party. Between 1933 and 1938, thousands were arrested and expelled, or shot, including about 25 percent of the army officer corps. 1108 of the 1966 delegates attending the 1934 Communist Party Congress were arrested, and of the 139 members of the Central Committee, 98 were shot. Many longstanding and prominent party members were tried. In all cases, the defendants were forced to confess publicly, and then were shot.
Historians disagree over whether or not totalitarianism is an inherent aspect of Marxist-Leninist theory, or whether Joseph Stalin, as many claim, deviated from the true tenets of Marxism-Leninism in constructing his government. Most can agree, however, that the Marxist idea of "dictatorship of the proletariat" enabled the rise of the totalitarian state. Whether or not there was an aspect of totalitarianism inherent in Lenin's philosophy, he never consolidated power to the same extent as Stalin did. Indeed, upon his deathbed, dictating his last testament, Lenin decried the dictatorial nature of his government and expressed the fear that in the wrong hands, totalitarianism could be used in a manner antagonistic to the masses, for which the government was intended to work.
Despite these misgivings, Lenin's rule no doubt set the stage for Stalin's complete totalitarianism. Though his publicly stated philosophy was government by local councils, called soviets, true power rested securely in the hands of the Central Committee alone. The party controlled the police (official and secret), the army, and the bureaucracy. Stalin capitalized on this power to a much greater extent after coming to power.
Lenin had some sense that this might happen, and expressed his doubts in his 'political testament.' Both candidates to succeed him had impressive histories and credentials. However, Lenin expressed doubts about Stalin, fearing he would abuse the power concentrated in his hands. Though he clearly preferred Trotsky, and praised him as "the most able man in the present Central Committee," he expressed reservations about Trotsky's overconfident nature, and thought that perhaps Trotsky was too interested in the administrative side of government to be an effective practical leader.
The success of Stalin's 'socialism in one country' philosophy was both the result of, and a cause for, the spirit of nationalism, which was prominent in many of the nations of Europe following the First World War. Devastated by their interactions with the other nations of the continent, many nations chose to recede from international affairs and concentrate on reversing the demoralizing effects of the war. Though Stalin would have been hard-pressed to convince the Soviet people that he could lead communism in the eradication of all of the problems of the world, he did a fair job of convincing them that under his leadership, communism could address the problems of his country, which, when it had grown in strength, could then effect global change. This type of moral argument for nationalism was typical of the political leaders of the inter-war period. This nationalism translated easily into many facets of totalitarianism, including the elimination of dissent, the demand for uniformity, and the destruction of individualism as the individual was overshadowed by the united nation.
Stalin's economic policies enjoyed only limited success. Industrialization proved to be a somewhat effective policy, though it proceeded along a different path and schedule than Stalin had planned. In any case, under Stalin the Soviet Union made many advances in technology and heavy industry, and the country benefited from these. However, agricultural policies never achieved the goal of self-sufficiency, and the Soviet Union continued to import crops and heavily subsidize agriculture. Doubtless, the slaughter of 3 million kulaks helped the situation very little. However, Stalin's main focus during the 1930s was consolidating power and eliminating rivals, two tasks at which he proved greatly successful. |
In this math benchmark test you will solve one- and two-step inequalities, as well as word problems involving inequalities.
This test gives you an opportunity to work with inequalities for practice and reinforcement of math skills.
This test is based on the following Common Core Standards:
Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true? Use substitution to determine whether a given number in a specified set makes an equation or inequality true. |
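As a small illustration of the substitution idea in the standard above, here is a Python sketch; the inequality 2x + 3 < 9 and the candidate set are made up purely for the example:

```python
# Which values from a specified set make the inequality 2x + 3 < 9 true?
candidates = {0, 1, 2, 3, 4, 5}

def satisfies(x):
    """Substitute x into the inequality and report whether it holds."""
    return 2 * x + 3 < 9

solutions = {x for x in candidates if satisfies(x)}
print(solutions)   # {0, 1, 2} -- substitution shows these values make the inequality true
```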
Structural Biochemistry/Protein function/Myoglobin's Oxygen Binding Curve
The oxygen-binding curve for myoglobin is hyperbolic: the graph rises sharply and then levels off as it approaches maximum saturation. The half-saturation point, at which half of the myoglobin is bound to oxygen, is reached at about 2 torr, which is relatively low compared to 26 torr for hemoglobin.
Myoglobin has a strong affinity for oxygen in the lungs, where the partial pressure is around 100 torr. When it reaches the tissues, where the pressure is around 20 torr, its affinity for oxygen is still quite high. This makes myoglobin a less efficient oxygen transporter than hemoglobin, which loses its affinity for oxygen as the pressure goes down and releases the oxygen into the tissues. Myoglobin's strong affinity for oxygen means that it keeps the oxygen bound to itself instead of releasing it into the tissues.
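A small Python sketch can make the comparison concrete. It evaluates fractional saturation from the half-saturation pressures quoted above (2 torr for myoglobin, 26 torr for hemoglobin); the simple hyperbolic form for myoglobin and the Hill equation with an assumed Hill coefficient of about 2.8 for hemoglobin are standard textbook models, not values taken from this page:

```python
def myoglobin_saturation(p_o2, p50=2.0):
    """Hyperbolic (non-cooperative) binding curve: Y = pO2 / (pO2 + P50)."""
    return p_o2 / (p_o2 + p50)

def hemoglobin_saturation(p_o2, p50=26.0, n=2.8):
    """Hill equation with an assumed Hill coefficient n for cooperative binding."""
    return p_o2**n / (p_o2**n + p50**n)

for p in (100, 20):  # lungs ~100 torr, tissues ~20 torr (values from the text)
    print(f"pO2 = {p:3d} torr: "
          f"myoglobin {myoglobin_saturation(p):.2f}, "
          f"hemoglobin {hemoglobin_saturation(p):.2f}")
# Myoglobin stays ~0.91 saturated even at 20 torr, while hemoglobin drops
# from ~0.98 to ~0.32 -- which is why hemoglobin unloads oxygen in the tissues.
```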
A personification poem is a poem that bestows human-like qualities and emotions on either inhuman or inanimate objects, often in order to create symbolism and allegory. Many poets have used personification in their work, one such example being "Mirror" by Sylvia Plath.
Personification is not limited to poetry alone, and often appears in prose writing as well. Book 3 of "Paradise Lost" by John Milton contains one such example of personification: "Earth felt the wound; and Nature from her seat, Sighing, through all her works, gave signs of woe."
Personification can be applied to almost anything that is not human. This could be an animal, object or even an abstraction of some kind. There are many examples of personification being used as allegory. The virtue of justice, for example, is given the form of a knight in Edmund Spenser's "The Faerie Queene."
In addition to allegory and symbolism, personification in poetry in particular is often used to help enhance mood and tone, or to create enhanced emphasis on certain meditations or images in the piece.
Personification is often used in everyday language, whenever anything non-human is attributed with human qualities. "The car will not start, it is not feeling well," is an example of personification.
NAGC Programming Standard 3.5. addresses culturally relevant curriculum. Specifically, students with gifts and talents need to develop knowledge and skills for living and being productive in a multicultural, diverse, and global society.
Teaching Tolerance’s Perspectives for a Diverse America is a literacy-based curriculum that marries anti-bias social justice content with the rigor of the Common Core State Standards. Using the concept of “windows and mirrors,” Perspectives helps young people learn about themselves and others. The text anthology reflects diverse identities and experiences. The Integrated Learning Plan provides options for differentiation and for kids to take action to create a more socially just world. Visit http://perspectives.tolerance.org/
Tools to Use Today
Note: WATG neither endorses nor recommends specific products and programs. This column is for informational purposes only. |
Most of these children come from homes where there are few or no books. Parents of these students rarely have the time or ability to read to their children.
Reading aloud builds vocabulary as they begin to learn the words in context as they hear them.
Reading aloud builds an appreciation of literature and the wonderful world of books.
Reading aloud builds listening skills.
Reading aloud to a child creates a special recreational bond that many children will not have experienced before.
Reading aloud is, according to the landmark 1985 report “Becoming a Nation of Readers,” “the single most important activity for building the knowledge required for eventual success in reading.”
More information: http://www.greatschools.org/gk/articles/read-aloud-to-children/
Developing that passion for reading is crucial, according to Jim Trelease, author of the best-seller, “The Read-Aloud Handbook.” “Every time we read to a child, we’re sending a ‘pleasure’ message to the child’s brain,” he writes in the “Handbook.” “You could even call it a commercial, conditioning the child to associate books and print with pleasure.” |
Geography - Atmospheric Systems Essay
a) Outline the characteristics of urban heat islands. (5)
First and foremost, the urban heat island can be defined as an effect whereby inner city areas tend to have higher mean annual and also higher winter minimum temperatures than the surrounding rural areas. The differences in heat usually become more substantial towards the centre of the urban areas. Heat is given off by factories, vehicles and homes, all of which burn fuel and produce heat which adds to the urban heat island effect. Also, urban surfaces, such as concrete and tarmac, absorb substantial amounts of solar radiation before releasing it during the night. This therefore contributes to the much higher night-time temperatures found in the centres of urban areas. Smog and pollutants found in urban areas form a pollution zone which allows short wave insolation to enter; however, the smog traps the outgoing terrestrial radiation as this is of a longer wavelength.
The release of heat from buildings is slow; consequently changes in urban temperatures often lag behind seasonal patterns. The wind is another key factor which has an influence on the extent of the urban microclimate. Usually built up areas tend to have lower wind speeds than surrounding areas, this is because taller buildings provide a frictional drag on air movements. Consequently, wind turbulence is created, providing rapid changes in both its direction and speed. Therefore a general decrease in wind speed occurs as the air travels to the city centre from the suburbs. In rare cases, the urban heat island effect may actually alter the local wind patterns completely. Indeed a low pressure area may develop as warm air rises over the urban area. As a result winds in urban areas tend to be 20-30% lower than average. However planners must consider the importance of a well designed urban area, as efficient air flow is essential so that damaging pollutants can be dispersed.
On the other hand, thunderstorms are much more common in built up areas, due to the intense convection that can occur, particularly during hot summer evenings. An interesting example of an influential factor that specifically affects an urban heat island is the urban rainfall in Manchester - a city in northwest England. Recent research undertaken by a local university has suggested that the erection of a band of high rise tower blocks in Manchester during the 1970s has brought more rain to certain parts of the city. The level of rainfall has increased by approximately 7% over recent decades. This has occurred as a consequence of the turbulence created by the micro scale effect of tall buildings, which forces the air to rise. Also, the heat island effect means that the temperature can be up to 8°C higher in the centre of the city in comparison to the surrounding countryside. This contributes further to the above average rainfall, by causing the air to rise further as a result of convection.
Alex Potter 6N2, 23rd March 2009
Although nanomedicine may sound like something out of a science fiction film, it is already being put to use in treating a range of human illnesses. Magnetic nanoparticles are currently in development as a promising new type of cancer treatment that uses nanoparticles to selectively heat tumors to temperatures high enough to kill cancer cells without harming healthy cells.1 This destroys tumors as well as activates the immune system to attack other cancer cells throughout the body.
This intriguing new treatment is called magnetic-mediated hyperthermia (MMH). Hyperthermia, which means high temperature, has been proposed throughout history as a way to treat many diseases. Hippocrates (a Greek physician known as the father of western medicine – the Hippocratic Oath is named after him) said, “those diseases which medicines do not cure, the knife cures; those which the knife cannot cure, fire cures; those which fire cannot cure, are to be reckoned wholly incurable.” The problem, though, is that cells must be heated to 43 °C (109 °F) to be destroyed.1 Of course, since our normal body temperature is 98.6 °F, a human body cannot be heated that much without undesirable consequences. That means that in order to put hyperthermia to use, doctors must be able to selectively heat only the cells they want to kill, while leaving all the other parts of the body untouched.
This is where the nanoparticles come in.
There is a type of magnetic fluid, known as a ferrofluid, that is made from iron oxide particles (commonly magnetite2) that are less than 100 nm in size. These ferrofluids are called superparamagnetic, which means that they are not magnetized until an external magnetic field is applied to them. A ferrofluid looks like a regular liquid until it is brought close to a magnet, when it suddenly organizes into distinctive peaks and valleys.
Due to the nanoparticles’ size, ferrofluid can be injected directly into cancer cells and will spread through a tumor without dispersing widely around the body. Then, an alternating magnetic field is applied to the patient. This alternating field drives small currents through the ferrofluid, and these currents give off heat due to resistance. Think of your phone charger warming up when in use – this is a similar idea, except that here the heat is the useful product rather than energy wasted warming the air or furniture around your charger.
At a certain temperature called the Curie Point the ferrofluid becomes disordered again, and stops heating up. This phenomenon makes the nanoparticles self-regulating: the hyperthermia essentially turns itself off when the nanoparticles reach their Curie Point. By using materials designed with Curie Points at a safe maximum temperature (122 – 158 °F) doctors don’t have to closely monitor the patient’s internal temperature throughout the treatment.1
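As a quick sanity check of the temperatures quoted in this article, here is a minimal Python sketch converting between the Celsius and Fahrenheit figures above (the 43 °C cell-death threshold and the 122–158 °F Curie-point window); the function names are illustrative only.

```python
def c_to_f(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

print(c_to_f(43))                # ~109.4 F, the cell-death threshold
print(f_to_c(122), f_to_c(158))  # 50.0 C and 70.0 C, the "safe maximum" Curie-point window
```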
Magnetic-mediated hyperthermia can be used to treat local tumors, but it also ramps up the immune system to find and destroy distant cells. This is because cancer cells produce more heat shock proteins (HSPs) than normal cells. HSPs help repair damaged proteins when cells are exposed to high heat or toxins. When cancer cells are heated to high temperatures from the magnetic-mediated hyperthermia, they produce HSPs in high quantities. The HSPs in turn bind to antigens (molecules or proteins that trigger an immune response). When some cells are destroyed by the heat, their HSPs and antigens are released into the body. That then attracts cells from the immune system, which interact with the HSP-antigen complex and use them to hunt down other cancer cells that were not damaged by the initial heating.2
Nanoparticles can also be used to combine hyperthermia with other cancer treatments like chemotherapy. Small molecules called ligands can coat a nanoparticle’s surface to selectively target receptors or enzymes in certain cancer cells. Ligands improve the stability of magnetic nanoparticles in biological materials and can also be used to direct chemotherapy drugs into tumors. For instance, negatively charged magnetic nanoparticles and positively charged cisplatin molecules (a chemotherapy drug) are attracted to each other to form a nanoparticle-drug complex. Once this nanoparticle-drug complex is targeted to cancer cells, an alternating magnetic field is applied. The heat produced both kills the cancer cells and releases the drugs from the nanoparticles directly inside the cancer cells! Studies show that hyperthermia and chemotherapy are significantly more effective in combination than when used separately.1
The really exciting part of the MMH treatment is that it has been shown to be effective with only minor side effects. In animal studies as well as phase I and II trials in humans, MMH has killed tumors in types of cancer that are typically extremely difficult to treat, such as glioblastoma (a type of brain cancer), with only minor side effects. The skin next to tumors is slightly warmed during treatment, but otherwise, many types of magnetic nanoparticles have low toxicity.3 Also, chemotherapy drugs directed into cancer cells by nanoparticles do less damage to healthy cells.1 Overall, the potential benefits of MMH are judged to outweigh the side effects, and MMH is an acceptable treatment according to medical ethical considerations.3
Further research is needed on the effects of applying magnetic fields to humans as well as ways to use fewer nanoparticles while reaching higher temperatures.4 Some materials have higher heating capacity than others, which is good as long as the materials are also nontoxic and biocompatible. Despite these hurdles, MMH has been approved for use in Germany to treat brain cancer under the name MagForce NanoTherm™ therapy.1 This new technology shows great promise in cancer treatment especially when combined with treatments already in use.
- UW-Madison Materials Research Science and Engineering Center: Nanomedicine: Problem-Solving to Treat Cancer (middle school)
- Science Buddies: “Can Nanotechnology Help Us Clean Up Oil Spills…?” ferrofluid activity
- Etheridge, M. L. Understanding the Benefits and Limitations of Magnetic Nanoparticle Heating for Improved Applications in Cancer Hyperthermia and Biomaterial Cryopreservation. Dissertation, University of Minnesota – Twin Cities, 2013.
- Kobayashi, T., Ito, A., & Honda, H. Magnetic Nanoparticle-Mediated Hyperthermia and Induction of Anti-Tumor Immune Responses. Chapter in Hyperthermic Oncology from Bench to Bedside, pp. 137-150. Springer Singapore, 2016. doi: 10.1007/978-981-10-0719-4_13
- Müller, S. Magnetic fluid hyperthermia therapy for malignant brain tumors—an ethical discussion. Nanomedicine: Nanotechnology, Biology and Medicine, 2009, 5 (4) 387–393. doi: 10.1016/j.nano.2009.01.011
- Zhao, L.-Y. et al. Magnetic-mediated hyperthermia for cancer treatment: Research progress and clinical trials. Chinese Physics B, 2013, 22 (10) 108104. doi: 10.1088/1674-1056/22/10/108104
A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, to divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of electrical power as heat may be used as part of motor controls, in power distribution systems, or as test loads for generators. Fixed resistors have resistances that only change slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements (such as a volume control or a lamp dimmer), or as sensing devices for heat, light, humidity, force, or chemical activity. Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors as discrete components can be composed of various compounds and forms. Resistors are also implemented within integrated circuits. The electrical function of a resistor is specified by its resistance: common commercial resistors are manufactured over a range of more than nine orders of magnitude. The nominal value of the resistance falls within the manufacturing tolerance, indicated on the component.
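Since the paragraph above mentions using resistors to divide voltages, here is a minimal sketch of the standard two-resistor voltage divider, Vout = Vin × R2 / (R1 + R2). The supply voltage and resistor values below are purely illustrative.

```python
def voltage_divider(v_in, r1, r2):
    """Output voltage across R2 in a two-resistor divider: Vout = Vin * R2 / (R1 + R2)."""
    return v_in * r2 / (r1 + r2)

# Illustrative values: a 9 V supply across a 10 kOhm and a 5 kOhm resistor in series.
print(voltage_divider(9.0, 10_000, 5_000))  # 3.0 V appears across the 5 kOhm resistor
```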
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2006 February 27
Explanation: What is it? Something is happening in a small portion of the sky toward the constellation of Aries. Telescopes around the globe are tracking an unusual transient there as it changes day by day. No one is sure what it will do next. The entire space mystery began on February 18, when the Earth-orbiting robotic Swift satellite noticed an unusual transient beginning to glow dimly in gamma rays. Dubbed GRB 060218, the object is a type of gamma-ray burst (GRB), but the way its brightness changes is very unusual. Since detection, GRB 060218 has been found to emit light across the electromagnetic spectrum, including radio waves and visible light. Pictured above, the Sloan Digital Sky Survey (SDSS) image of the field of GRB 060218 taken well before the Swift trigger is shown on the left, while the same field, taken by the orbiting Swift satellite's ultraviolet telescope after the trigger, is shown on the right. The oddball GRB is visible in the center of the right image. Subsequent observations found a redshift for the transient of z=0.033, showing it to be only about 440 million light years away, relatively nearby compared to typical GRBs. Whether GRB 060218 represents a new type of gamma-ray burst, a new type of supernova, or an unusual link between GRBs and supernovas has become an instant topic of research.
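The ~440 million light-year figure follows roughly from the low-redshift Hubble law, d ≈ cz / H0. A minimal sketch of that estimate is below; the value assumed for the Hubble constant (70 km/s/Mpc) is an assumption, and the exact distance depends on the cosmology adopted, which is why the result lands close to, but not exactly at, the quoted number.

```python
# Rough distance estimate for GRB 060218 from its redshift, using d ~ c*z / H0.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec
MLY_PER_MPC = 3.262    # million light-years per megaparsec

z = 0.033
distance_mpc = C_KM_S * z / H0
print(distance_mpc * MLY_PER_MPC)   # ~460 million light-years, near the ~440 quoted above
```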
Veterinarians have made great strides in educating dog owners about the importance of heartworm prevention. Unfortunately, the same can't be said for cat owners. Despite the fact that the first documented case of a cat with heartworms dates back to 1921 in Brazil, neither veterinarians nor cat caretakers have traditionally thought of heartworm disease as a feline problem. Although veterinarians started becoming aware of the problem in the late 1980s, widespread understanding has lagged.
When award-winning cat writer Dusty Rainbolt‘s cat, Nixie, began having odd respiratory symptoms that were eventually diagnosed as heartworm-related, Dusty began her own campaign to raise awareness among the general public about feline heartworm through Nixie’s Facebook page, Nixie’s HEARTworm BEAT. Her work also inspired this post. Here’s what you need to know about heartworms and cats.
1. Indoor cats can get heartworms
Heartworms are transmitted by mosquitoes, and if you’ve ever been plagued by the little buggers, you know they can get inside even if you have screens, never open your windows, and take every possible precaution. Thus, even indoor cats are at risk. The bite of an infected mosquito releases heartworm larvae into a cat’s blood, after which they begin their life cycle.
2. Heartworms don’t survive as well in cats as they do in dogs
Cats typically have fewer and smaller heartworms than dogs. In addition, the parasites only survive two or three years in cats, as opposed to five to seven years in dogs. The percentage of larvae that mature into adults in cats maxes out at about 25 percent, whereas 40 to 90 percent of larvae in dogs survive to adulthood. "So, what’s the big deal?" you may ask.
3. Cats can die from heartworm infection
If heartworms grow to adulthood, cats can have acute symptoms including heart rhythm problems, blindness, convulsions, diarrhea, vomiting, and even sudden death.
4. Heartworms don’t have to be adults to cause severe damage
Even heartworm larvae can cause severe lung damage in cats. The first round of infection, when the larvae first make their way into the cat’s lungs, can cause a major inflammation response, resulting in symptoms that are usually misdiagnosed as asthma or allergic bronchitis.
5. Heartworm-inflicted lung damage is permanent
Veterinarians now call the array of symptoms associated with heartworm infections in cats Heartworm Associated Respiratory Disease (HARD). Cats with HARD will have periodic episodes of coughing, difficulty breathing, fainting, rapid heartbeat, and even convulsions or collapse, for the rest of their lives.
6. Diagnosis requires multiple tools
The main tests used to detect the presence of heartworms are an antigen test and an antibody test, and both of these have limitations. Antigen tests only detect adult female or dying male worms, so immature worms or male-only infections won’t be detected. Antibody tests detect the body’s immune response to the parasites, and will detect infection earlier than the antigen tests. X-rays and ultrasound imaging are used to confirm infection.
7. There are no drugs approved for heartworm treatment in cats
Some infected cats can fight off the infection and never have problems as a result. Others will need ongoing treatment, including steroids to treat lung inflammation, oxygen or fluid therapy, inhalers, and antibiotics. If the heartworms are preventing blood flow to vital organs or interfering with heart function, surgery can be performed to remove them — but it’s a very delicate and risky procedure.
8. Heartworm preventives, however, have been approved for cats
According to the American Heartworm Society, the FDA has approved four heartworm preventive products for use in cats: the oral medications Heartgard for Cats from Merial and Interceptor from Novartis; and the topical products Revolution from Pfizer and Advantage Multi for Cats. All of these can prevent development of adult heartworms if they’re used properly. Cats should be tested for antigens and antibodies before using heartworm preventives.
For more information about heartworm in cats, visit the American Heartworm Society and Know Heartworms websites. You can also join Rainbolt in her quest to raise awareness about feline heartworm disease by checking out Nixie’s HEARTworm BEAT on Facebook.
Do you have a cat with heartworms or HARD? What has it been like to live with one of these special kitties? What do you wish other cat caretakers knew about heartworms and the danger to cats? Please share your thoughts in the comments.
On Earth right now, there are about 10 trillion gigabytes of digital data, and every day, humans produce emails, photos, tweets, and other digital files that add up to another 2.5 million gigabytes of data. Much of this data is stored in enormous facilities known as exabyte data centers (an exabyte is 1 billion gigabytes), which can be the size of several football fields and cost around $1 billion to build and maintain.
Many scientists believe that an alternative solution lies in the molecule that contains our genetic information: DNA, which evolved to store massive quantities of information at very high density. A coffee mug full of DNA could theoretically store all of the world’s data, says Mark Bathe, an MIT professor of biological engineering.
“We need new solutions for storing these massive amounts of data that the world is accumulating, especially the archival data,” says Bathe, who is also an associate member of the Broad Institute of MIT and Harvard. “DNA is a thousandfold denser than even flash memory, and another property that’s interesting is that once you make the DNA polymer, it doesn’t consume any energy. You can write the DNA and then store it forever.”
Scientists have already demonstrated that they can encode images and pages of text as DNA. However, an easy way to pick out the desired file from a mixture of many pieces of DNA will also be needed. Bathe and his colleagues have now demonstrated one way to do that, by encapsulating each data file into a 6-micrometer particle of silica, which is labeled with short DNA sequences that reveal the contents.
Using this approach, the researchers demonstrated that they could accurately pull out individual images stored as DNA sequences from a set of 20 images. Given the number of possible labels that could be used, this approach could scale up to 10^20 files.
Bathe is the senior author of the study, which appears today in Nature Materials. The lead authors of the paper are MIT senior postdoc James Banal, former MIT research associate Tyson Shepherd, and MIT graduate student Joseph Berleant.
Digital storage systems encode text, photos, or any other kind of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.
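A minimal sketch of the two-symbol mapping described above (G or C for a 0, A or T for a 1) is shown below. This is purely illustrative: real DNA storage codecs pack closer to two bits per nucleotide and add error correction, neither of which is modeled here.

```python
# Illustrative 0/1 -> nucleotide mapping, as described in the paragraph above.
ZERO_BASES = "GC"   # either G or C encodes a 0
ONE_BASES = "AT"    # either A or T encodes a 1

def encode_bits(bits):
    """Encode a string of '0'/'1' characters as nucleotides (using the first base of each pair)."""
    return "".join(ZERO_BASES[0] if b == "0" else ONE_BASES[0] for b in bits)

def decode_bases(bases):
    """Decode a nucleotide string back into a bit string."""
    return "".join("0" if base in ZERO_BASES else "1" for base in bases)

bits = "01101000"              # the ASCII byte for the letter 'h'
strand = encode_bits(bits)     # 'GAAGAGGG'
print(strand, decode_bases(strand) == bits)
```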
DNA has several other features that make it desirable as a storage medium: It is extremely stable, and it is fairly easy (but expensive) to synthesize and sequence. Also, because of its high density — each nucleotide, equivalent to up to two bits, is about 1 cubic nanometer — an exabyte of data stored as DNA could fit in the palm of your hand.
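A back-of-envelope check of the density claim, under the stated figures of up to two bits and roughly one cubic nanometer per nucleotide, and ignoring the packaging and redundancy overhead any real system would add:

```python
# Rough volume of one exabyte of raw DNA at ~2 bits and ~1 nm^3 per nucleotide.
EXABYTE_BITS = 8 * 10**18        # bits in one exabyte
BITS_PER_NUCLEOTIDE = 2          # upper bound quoted in the text
NM3_PER_NUCLEOTIDE = 1.0         # approximate volume per nucleotide, in nm^3

nucleotides = EXABYTE_BITS / BITS_PER_NUCLEOTIDE
volume_mm3 = nucleotides * NM3_PER_NUCLEOTIDE * 1e-18   # nm^3 -> mm^3
print(volume_mm3)   # ~4 mm^3 of raw DNA -- easily palm-sized
```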
One obstacle to this kind of data storage is the cost of synthesizing such large amounts of DNA. Currently it would cost $1 trillion to write one petabyte of data (1 million gigabytes). To become competitive with magnetic tape, which is often used to store archival data, Bathe estimates that the cost of DNA synthesis would need to drop by about six orders of magnitude. Bathe says he anticipates that will happen within a decade or two, similar to how the cost of storing information on flash drives has dropped dramatically over the past couple of decades.
Aside from the cost, the other major bottleneck in using DNA to store data is the difficulty in picking out the file you want from all the others.
“Assuming that the technologies for writing DNA get to a point where it’s cost-effective to write an exabyte or zettabyte of data in DNA, then what? You're going to have a pile of DNA, which is a gazillion files, images or movies and other stuff, and you need to find the one picture or movie you’re looking for,” Bathe says. “It’s like trying to find a needle in a haystack.”
Currently, DNA files are conventionally retrieved using PCR (polymerase chain reaction). Each DNA data file includes a sequence that binds to a particular PCR primer. To pull out a specific file, that primer is added to the sample to find and amplify the desired sequence. However, one drawback to this approach is that there can be crosstalk between the primer and off-target DNA sequences, leading unwanted files to be pulled out. Also, the PCR retrieval process requires enzymes and ends up consuming most of the DNA that was in the pool.
“You’re kind of burning the haystack to find the needle, because all the other DNA is not getting amplified and you’re basically throwing it away,” Bathe says.
As an alternative approach, the MIT team developed a new retrieval technique that involves encapsulating each DNA file into a small silica particle. Each capsule is labeled with single-stranded DNA “barcodes” that correspond to the contents of the file. To demonstrate this approach in a cost-effective manner, the researchers encoded 20 different images into pieces of DNA about 3,000 nucleotides long, which is equivalent to about 100 bytes. (They also showed that the capsules could fit DNA files up to a gigabyte in size.)
Each file was labeled with barcodes corresponding to labels such as “cat” or “airplane.” When the researchers want to pull out a specific image, they remove a sample of the DNA and add primers that correspond to the labels they’re looking for — for example, “cat,” “orange,” and “wild” for an image of a tiger, or “cat,” “orange,” and “domestic” for a housecat.
The primers are labeled with fluorescent or magnetic particles, making it easy to pull out and identify any matches from the sample. This allows the desired file to be removed while leaving the rest of the DNA intact to be put back into storage. Their retrieval process allows Boolean logic statements such as “president AND 18th century” to generate George Washington as a result, similar to what is retrieved with a Google image search.
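To illustrate the Boolean, label-based retrieval described above, here is a minimal in-memory sketch. The labels are ordinary strings and the "files" are dictionary entries; in the actual system the labels are single-stranded DNA barcodes that are physically pulled out with fluorescently or magnetically tagged primers, not looked up in software.

```python
# In-memory analogue of the barcode-based Boolean retrieval described above.
files = {
    "tiger.png":    {"cat", "orange", "wild"},
    "housecat.png": {"cat", "orange", "domestic"},
    "airplane.png": {"airplane", "metal"},
}

def retrieve(query_labels):
    """Return every file whose label set contains all of the query labels (an AND query)."""
    query = set(query_labels)
    return [name for name, labels in files.items() if query <= labels]

print(retrieve({"cat", "orange", "wild"}))   # ['tiger.png']
print(retrieve({"cat", "orange"}))           # both cat images
```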
“At the current state of our proof-of-concept, we’re at the 1 kilobyte per second search rate. Our file system’s search rate is determined by the data size per capsule, which is currently limited by the prohibitive cost to write even 100 megabytes worth of data on DNA, and the number of sorters we can use in parallel. If DNA synthesis becomes cheap enough, we would be able to maximize the data size we can store per file with our approach,” Banal says.
For their barcodes, the researchers used single-stranded DNA sequences from a library of 100,000 sequences, each about 25 nucleotides long, developed by Stephen Elledge, a professor of genetics and medicine at Harvard Medical School. If you put two of these labels on each file, you can uniquely label 10^10 (10 billion) different files, and with four labels on each, you can uniquely label 10^20 files.
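One way to recover the 10^10 and 10^20 figures is to treat each of the k label positions on a file as an independent choice from the 100,000-sequence library; whether order matters or repeats are allowed is an assumption here, but the scaling works out as below.

```python
# Number of distinct label combinations if each of k label slots is an
# independent choice from the barcode library (an assumption; see lead-in).
library_size = 100_000

print(library_size ** 2)   # 10_000_000_000 -> about 10^10 files with two labels
print(library_size ** 4)   # 10^20          -> about 10^20 files with four labels
```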
George Church, a professor of genetics at Harvard Medical School, describes the technique as “a giant leap for knowledge management and search tech.”
“The rapid progress in writing, copying, reading, and low-energy archival data storage in DNA form has left poorly explored opportunities for precise retrieval of data files from huge (10^21-byte, zetta-scale) databases,” says Church, who was not involved in the study. “The new study spectacularly addresses this using a completely independent outer layer of DNA and leveraging different properties of DNA (hybridization rather than sequencing), and moreover, using existing instruments and chemistries.”
Bathe envisions that this kind of DNA encapsulation could be useful for storing “cold” data, that is, data that is kept in an archive and not accessed very often. His lab is spinning out a startup, Cache DNA, that is now developing technology for long-term storage of DNA, both for DNA data storage in the long-term, and clinical and other preexisting DNA samples in the near-term.
“While it may be a while before DNA is viable as a data storage medium, there already exists a pressing need today for low-cost, massive storage solutions for preexisting DNA and RNA samples from Covid-19 testing, human genomic sequencing, and other areas of genomics,” Bathe says.
The research was funded by the Office of Naval Research, the National Science Foundation, and the U.S. Army Research Office. |
Liberia, the “Land of the Free”, was established as a homeland for freed African-American slaves in the 19th century and was the first African country to gain independence. The coastal country is characterized by humid, tropical climate with mean rainfall ranging from 2,000 mm farthest inland to over 5,000 mm at the coast. Liberia contains the largest part (50 percent) of the remaining Upper Guinean rain forest in West Africa, which is an important hotspot of global biodiversity. Liberia’s forests contain approximately 225 timber species and are home to a rich diversity of mammals, birds, reptiles and insects (CIFOR, 2005). According to recent assessments (FAO, 2014), less than 5 percent of Liberia’s forests are considered primary forests (no clearly visible indications of human activity); the vast majority are regenerated forests (native species, but with indication of human activity). While Liberia’s forests are recognized as a top conservation priority in the entire region, there are currently only two actively protected areas — Sapo National Park and the East Nimba Nature Reserve — and eight forest reserves. This conservation goal, however, competes with extractive economic activities, such as mining and logging, which account for large portions of Liberia’s export income. |
Use these tips and activities to link UN sustainable development goal 6 to separation techniques, potable water and more
Water features in chemistry teaching across the secondary age range and Goal 6, ensure availability and sustainable management of water and sanitation for all, aligns with many water chemistry topics. You can easily fit the goal into your teaching of separation techniques, sustainability of resources, potable water and the treatment of wastewater.
Many students don’t give water a second thought. They turn on a tap and water gushes out. In Northern Europe, where rainfall is frequent and lakes and rivers are abundant, water is an underappreciated resource. Our students may not grasp the importance of water as a resource and may take water science for granted. Introducing a discussion of Goal 6 when you teach a water topic shows students just how relevant water chemistry is to the human population.
Class practical, for age ranges 14–16 and 16–18
A class practical using a microbiology experiment to investigate the antibacterial properties of the halogens. Use it with your 14–16 students to develop water purification practical work; add the extension questions to use the investigation for the 16–18 age group.
Download the worksheet, teacher and technician notes from the Education in Chemistry website: rsc.li/xxxxxxx
Put it in context
It’s important to show students that chemistry has a key role to play in all aspects of Goal 6: ensuring safe, sustainable and affordable drinking water for all requires management of natural resources and water-related ecosystems, development of water technologies and management of wastewater – all of which involve chemists.
Achieving Goal 6 in the desired timeframe isn’t a given. This is an essential point to share with your students. The UN considers this goal to be badly off track and its latest report makes for depressing reading. In 2017, 40% of the world’s population didn’t have access to handwashing facilities with soap – a statistic made all the more sobering by the coronavirus pandemic.
- Create context with scientific research using this starter slide with questions: How ancient Maya peoples made potable water.
- Link to careers with this video profile of a laboratory analyst and higher degree apprentice who tests drinking water for 15 million people.
- Develop literacy and trigger discussion with these differentiated DARTs (Directed Activities Related to Text) based on waste water treatment.
- Precious water offers lots more classroom resources with links to off-grid water treatment technologies.
Water is a vital resource, needed for all aspects of life. Students will be familiar with the obvious uses – drinking, cooking and washing – but you might need to point out other uses, such as manufacturing. Much of our water consumption is hidden in these other uses. For example, it takes around 50 litres of water to produce 1 kg of plastic. So it takes twice as much water to produce a plastic water bottle as the amount of water contained in the bottle.
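A quick back-of-envelope check of the "twice as much water" claim, assuming a typical 500 ml bottle made from roughly 20 g of plastic (both figures are assumptions; only the 50 litres per kilogram comes from the text):

```python
# Water needed to make a plastic bottle vs. the water it holds.
WATER_L_PER_KG_PLASTIC = 50.0   # figure quoted in the text
bottle_mass_kg = 0.020          # assumed mass of an empty 500 ml bottle
bottle_volume_l = 0.5           # volume of water the bottle holds

water_to_make_bottle = WATER_L_PER_KG_PLASTIC * bottle_mass_kg   # 1.0 litre
print(water_to_make_bottle / bottle_volume_l)                    # ~2x the bottle's contents
```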
The future impact of insecure water supplies in other countries could be huge. Water could become a source of conflict between countries and it is already a push factor for migration. As citizens of high-income countries, we cannot ignore this goal because we feel it doesn’t directly affect us.
Put it into practice
You can use Goal 6 to add context and relevance to your teaching of potable water and separation techniques. Before you begin teaching the pure chemistry, introduce the topic’s significance with the infographics from the UN’s useful site dedicated to the SDGs. Adding slides to an existing presentation is a simple way to integrate this into your teaching. For example, the Goal 6 infographic would work well as the title slide for a presentation on drinking water production or desalination for the 14–16 age range.
Working with data is a key skill across all stages and qualifications in chemistry. The UN Water SDG 6 data portal is an excellent source of data on progress in this area and is easily navigable. Use the data to construct questions for students to practise writing longer responses involving comparisons. You can also use the portal to introduce the impact of missing data on our understanding of the world.
The Eco Schools programme offers a way to engage with students at the wider school level. Water is one of the ten topics in the programme and encourages student teams to develop initiatives for saving water at school. The programme also looks at global citizenship, which would help students to connect local action with international impact.
Practical activities can help students make links between real-life contexts like Goal 6 and pure chemistry. A great starting point is the classic activity of getting clean water from a contaminated water source, suitable for the 14–16 age group. You can extend this activity with a discussion on the presence of microorganisms which would not be removed by filtration. The download accompanying this article offers a further investigation to explore the antibacterial nature of the halogens.
Help students connect their thinking about water with their local context. Local water board websites can be excellent sources of information. For example, Thames Water is the UK’s largest water and wastewater services company, supplying 2.6 billion litres of drinking water every day and its education website offers a wide range of learning materials for all stages of education. You can send students to this reliable website to research a water topic or use the well-explained home experiments to add variety to homework.
Check out the rest of the Sustainability in chemistry series. |
A new study claims earthquakes and volcanoes are responsible for the diverse nature of the ocean's coral reefs. With this information, scientists are now becoming even more worried about global warming. If these monumental geological events play a role in coral creation, it may be even more difficult to replace any reef lost to climate change and rising sea temperatures. This study, conducted by scientists from the ARC Centre of Excellence for Coral Reef Studies (CoECRS), is published in the journal Proceedings of the Royal Society B. Lead author Dr. Sally Keith of CoECRS and James Cook University says geological events and the general unrest beneath the ocean floor are responsible for the wide variety of coral reefs seen in the Earth's oceans. This also explains why some species of coral are more prevalent than others.
“There are many theories to explain how coral reefs came to be,” said Dr. Keith in a press statement.
“Traditionally scientists have tested these theories by looking at where species occur. We used a fresh approach that focused on where species stopped occurring and why.”
Dr. Keith and her colleagues were shocked when they analyzed the results of their study. Though it had previously been understood that coral species were at the mercy of environmental factors such as temperature and habitat, the results suggested that geological events from millions of years ago actually determined what species of coral grew and where. The gradual shifting of tectonic plates over millions of years is responsible for creating a diverse species of coral, says Dr. Keith.
“For example, Hawaii is a chain of volcanic islands that has formed as a tectonic plate moves over a ‘hotspot’ of molten rock. The rock repeatedly punches through the Earth’s crust as lava, producing volcanoes that jut out above the ocean surface, eventually forming a chain of volcanic islands,” explains Dr. Keith.
“Over time, corals spread across the island chain using the islands as ‘stepping stones’, while at the same time they (remained) isolated from the rest of the Pacific. As a result, a distinct set of Hawaiian coral reefs arises.”
Additionally, the team also found that older species of coral are better equipped to expand into new territories.
Volcanoes have been known to play a role in the formation of certain types of coral. For instance, coral that grows around a volcanic island as it sinks below the sea surface is called an atoll. These atolls can take millions of years to fully form, a fact which co-author Professor Sean Connolly says should be something to consider when discussing climate change.
“Climate change is leading to the loss of corals throughout the tropics. This study has shown that the diversity of corals we see today is the result of geological processes that occur over millions, even tens of millions, of years,” said professor Connolly.
“If we lose these coral-rich environments the recovery of this biodiversity will take a very long-time, so our results highlight just how critical it is to conserve the coral reefs that exist today.”
Note: The above story is reprinted from materials provided by Michael Harper for redOrbit.
Gray foxes (Urocyon cinereoargenteus) are smaller, arboreal (tree-dwelling) cousins of both the red fox and the gray wolf. They are also native to Indiana, but they are so secretive they are not often spotted by humans, even though they might be living right alongside.
Gray foxes are generally orange on the underside with a distinctive, rich, speckled gray on top; their beautiful coloration is reflected in their Latin name, cinereoargenteus, which means “ashen silver”. They weigh between 8 and 15 pounds (about the size of a small house cat) and are 30–44 inches long, including the tail; females are slightly smaller than males. Gray foxes do not have the black “socks” on their legs and feet that are commonly seen in red foxes; and, while the red fox has slit, cat-like pupils, the gray fox has oval pupils.
The gray fox is rare among canids in its affinity for tree climbing, a trait shared only with the Asian raccoon dog. (Other canids can climb trees, but they do not generally live in trees, or spend so much time there, as does the gray fox.) The gray fox is crepuscular (active at dawn and dusk) or in some areas nocturnal. During the day it dens in hollow trees or stumps or in burrows “borrowed” from other species. Gray fox tree dens may be located up to 30 feet above the ground. The foxes will bring back the remains of their dinners to the den and the tree may end up festooned with macabre “ornaments”.
Like the red fox, the gray fox is an opportunistic, omnivorous and solitary hunter (it does not hunt in packs like wolves). It is happy to eat meat (rabbits, birds, and fish), insects, and fruit wherever it can find them.
Gray foxes mate in February to early March. Gestation lasts approximately 53 days, with kits being born in late April to early May in litter sizes from 1 to 7. Kits grow quickly, and begin hunting with their parents around three months of age. They remain in their natal group until they reach sexual maturity in late autumn.
The gray fox and the related Channel Island fox (Urocyon littoralis) are the only currently extant members of the genus Urocyon. The gray fox was once abundant throughout the eastern United States; although it is still found there, encroachment by humans has reduced its habitat and allowed red foxes (which are more adaptable) to become more common. The gray fox is still more common than the red across some parts of the western United States, however.
Patriotic assimilation is the bond that allows America to be a nation of immigrants. Without it, America either ceases to be a nation, becoming instead a hodgepodge of groups—or it becomes a nation that can no longer welcome immigrants. It cannot be both a unified nation and a place that welcomes immigrants without patriotic assimilation.
Over the past few decades, however, America has drifted away from assimilating immigrants. Elites—in the government, the culture, and the academy—have led a push toward multiculturalism, which emphasizes group differences. This transformation has taken place with little input from rank-and-file Americans, who overwhelmingly support assimilation. As Ronald Reagan worried just as it was first getting underway, this tectonic shift that “divides us into minority groups” was initiated by political opportunists “to create voting blocs.” Because presidential elections are times of national conversation, candidates of both parties are now uniquely placed to give the nation the debate on assimilation it has never had. For this, we need a thorough historical understanding of how the United States has dealt with both immigration and ethnic diversity for centuries.
Immigrants from Ireland and Germany began to settle among the original English colonists in Massachusetts, Virginia, and Pennsylvania almost from the start, altering the political outlook of the colonies. Diversity also came through the acquisition of territory. With the addition of New Amsterdam in 1664—later renamed New York—the colonies gained a polyglot city in whose streets 18 languages were spoken.
All immigrants faced prejudice and segregation at times. Two early groups, the Germans and the Northern Irish, particularly faced opposition. Benjamin Franklin said of the first, “Why should the Palatine Boors be suffered to swarm into our Settlements?” As for the Northern Irish, in 1720, Boston passed an ordinance that directed “certain families recently arriving from Ireland to move off.” The immigrants overcame such adversity on their own. The Founders would have found repugnant the idea of intervening by giving groups special privileges or benefits.
The Founders worried that diversity could get in the way of national unity. Alexander Hamilton wrote that “the safety of a republic depends essentially on the energy of a common national sentiment; on a uniformity of principles and habits.” Immigrants were welcome, but in the hope that, as Washington put it, they “get assimilated to our customs, measures, and laws: in a word, soon become one people.” Adherence to the universal principles of equality, liberty, and limited government contained in the founding documents, as well as to virtues that made a constitutional republic viable—like frugality, industry, and moderation—would bind Americans together regardless of origin.
Because these principles could not be expected to take root by themselves, a system of so-called Common Schools rose in the early 19th century to educate and assimilate the children of immigrants. Early visitors like Alexis de Tocqueville noted that “in the United States, the instruction of the people powerfully contributes to the support of a democratic republic.”
Immigrants from Northern Europe, who started arriving in large numbers in the 1840s, benefited greatly from these schools. Abraham Lincoln, a great believer in assimilation who fought anti-immigrant forces in the mid-1800s, said it was belief in the sentiments and principles of the Founding that made immigrants Americans. By the 1880s, German-born Wisconsin congressman Richard Guenther was telling crowds, “We are no longer Germans; we are Americans.”
In the 1890s the country experienced a rise in immigration from different sources. Italians, Slavs, Jews, Hungarians, Greeks, Armenians, Lebanese, and others began to enter the country through Ellis Island. They encountered renewed opposition from nativists who said the new arrivals could never be Americanized. Immigrants from Asia fared worse. So-called transnationalists rose, too, to disparage assimilation—in their case because they disdained America and sought instead “a federation of cultures.”
Assimilationist forces stepped in again, this time with such men as Presidents Theodore Roosevelt and Woodrow Wilson. Supreme Court Justice Louis Brandeis, speaking in Boston on July 4, 1915, said that immigrants “must be brought into complete harmony with our ideals and aspirations and cooperate with us for their attainment.”
The assimilationist philosophy of Washington, Hamilton, Lincoln, Roosevelt, and Brandeis remained central to the country for most of the 20th century, until it began to break down in the 1970s. For the past 40 years, America’s new political, educational, corporate, and cultural elites have progressively pushed the country in the opposite direction. This new transnationalism—multiculturalism—is an attempt to make ethnic differences permanent by rewarding separate identities and group attachment with benefits, thus deterring national unity by requiring Americans to remain sorted into separate ethnic categories.
This new arrangement, dubbed by the historian David A. Hollinger “the ethno-racial pentagon,” divided the country into whites, African-Americans, Hispanics, Asians, and Native Americans. This unheard-of division of America into official groups was taking place just as the country was about to absorb the biggest wave of immigrants since the Ellis Islanders of 1890–1924. Changes in immigration law in the mid-1960s ended restrictionist policies and led to the next surge in immigration, this time largely from Latin America and Asia. As they arrived, new immigrants discovered they would be considered “minorities,” conceptually precluding from the start their full assimilation into the larger society.
As Nathan Glazer put it in 1988, “We had seen many groups become part of the United States through immigration, and we had seen each in turn overcoming some degree of discrimination to become integrated into American society. This process did not seem to need the active involvement of government, determining the proper degree of participation of each group in employment and education.”
Special treatment for specific groups by the federal bureaucracy implies betrayal and rejection of the principles espoused by every American leader from Washington through Reagan. This approach has contaminated our schools, preventing them from teaching civic principles and reverence for the nation—including lessons on how those principles have helped leaders repair the nation’s faults. The new approach also threatens the cherished American principle of equal treatment under the law.
This radical reordering was a top-down effort, not a response to a demand from below. PayPal founder Peter Thiel and Internet entrepreneur David O. Sacks, among others, call multiculturalism a “word game” that hides a “comprehensive and detailed worldview” that is used by American leftists to introduce radical policy ideas when “an honest discussion would not lead to results that fit the desired agenda.” As John Skrentny described it, “[I]t is striking that the civil-rights administrators—without any public debate, data, or legal basis—decided on an ethnoracial standard for victimhood and discrimination that officially divided the country into oppressed (blacks, Latinos, Native American, Asian Americans) and oppressors (all white non-Latinos).”
America owes itself an open, honest debate on multiculturalism and assimilation. Presidential candidates should ask the following five questions: (1) Why does the government need to divide Americans into demographic categories based on racialist thinking? (2) Can any society survive a sustained denigration of its history and principles through indoctrination in schools and universities? (3) Why should we continue to let the teachers unions block meaningful school-choice reform which would help to liberate immigrants from factors that threaten to relegate them to a permanent subordinate class? (4) Should the country strengthen citizenship requirements in order to make naturalization truly transformative? (5) Should the government continue policies that harm family formation and church participation, knowing that families and churches have historically been incubators of Americanization?
Candidates should not be intimidated. The vast majority of Americans support Americanization. Patriotic assimilation is a liberating, welcoming action, a proposition only a nation like America can confidently offer those born overseas. Previous waves of immigrants have found the correct balance between keeping their traditions and adopting America’s virtues, between pride in their ancestry and love of their new country. The new wave of immigrants can do the same.
A Nation of Immigrants
Even before the United States was the United States, it was a nation of immigrants. Small numbers of Irish settlers made their homes in Massachusetts and Virginia as early as the 1630s. Fifty years later, German Pietists and other religious dissenters started pouring into provincial Pennsylvania to seek freedom of conscience in William Penn’s Quaker experimental colony. The first permanent settlement comprised 13 families of Dutch-speaking Mennonites from Krefeld who arrived on July 24, 1683, in what is today appropriately known as Germantown, Pennsylvania. Germans continued to arrive in Pennsylvania in the 18th century at the rate of about 2,000 a year, so that by 1790—two years after the Constitution was ratified—ethnic Germans made up one-third of the state of Pennsylvania and about 7 percent of the entire population of the newly constituted United States. Along the way, they changed the politics of the colony. In one of the first partisan divisions in the colonies, they sided in the 1720s with the Quaker party, with whom they shared social tenets, against the Gentleman’s party, “composed principally of Anglican merchants, seamen and Scots-Irish immigrants”—giving the Quakers prohibitive electoral majorities in the Provincial Assembly until the 1750s.
As for diversity, both immigration and territorial acquisition ensured its presence in America from the start. When the English acquired New Amsterdam in 1664 (which they renamed New York), they inherited a polyglot city in whose streets, according to visiting French Jesuit Isaac Jogues in 1643, 18 languages were spoken. The city also already included an important Jewish community. By 1790, Americans of English stock were already a minority (49.2 percent of the population) throughout the country.
The Challenges of Diversity
As a result of its diverse composition, America benefited early on from the advantages that come with the meeting and blending of cultures. The nation also learned how to deal with the threats to national identity that accompany a regular influx of newcomers. Benjamin Franklin, for example, admired the thrift and industry of ethnic Germans in Pennsylvania, but famously worried about their refusal to speak the national language—English—and the impact they were having on the electoral process. In a 1753 letter to the botanist Peter Collinson, Franklin wrote:
I remember when they modestly declined intermeddling in our Elections, but now they come in droves, and carry all before them, except in one or two Counties; Few of their children in the Country learn English; they import many Books from Germany; and of the six printing houses in the Province, two are entirely German, two half German half English, and but two entirely English; They have one German News-paper, and one half German. Advertisements intended to be general are now printed in Dutch and English; the Signs in our Streets have inscriptions in both languages, and in some places only German: They begin of late to make all their Bonds and other legal Writings in their own Language, which (though I think it ought not to be) are allowed good in our Courts, where the German Business so encreases that there is continual need of Interpreters; and I suppose in a few years they will be also necessary in the Assembly, to tell one half of our Legislators what the other half say.
Two years earlier, Franklin had gone even further, writing of the Germans in Pennsylvania:
Why should the Palatine Boors be suffered to swarm into our Settlements, and by herding together establish their Language and Manners to the Exclusion of ours? Why should Pennsylvania, founded by the English, become a Colony of Aliens, who will shortly be so numerous as to Germanize us instead of our Anglifying them, and will never adopt our Language or Customs, any more than they can acquire our Complexion.
Another large group of immigrants, the Scots-Irish from Ulster, also caused consternation among the settled population. In 1720 Boston passed an ordinance that directed “certain families recently arriving from Ireland to move off.” When they did relocate to Worcester, a Puritan mob there burned down their church. In 1729, Boston residents rioted in the streets to prevent the docking of ships carrying Scots-Irish immigrants from Ulster.
Things did not go better for the Scots-Irish in Pennsylvania. Its provincial secretary James Logan, facing Indian attacks on settlements and realizing that “Penn’s dream of forming a government with strictly pacifist principles in this raw frontier was impractical,” saw opportunity in the warlike Ulstermen. He invited them to come and settle in Appalachia to create a buffer zone between the Indians and the Quakers. But he soon regretted having done so, and he wrote to a friend later that “a settlement of five families from the North of Ireland gives me more trouble than fifty of any other people.”
Such prejudice doubtless took a toll on those at the receiving end. Writing more than a century later, Andrew Sachse recounted bitterly how, because of their “tenacious adherence to their mother tongue,” the early German immigrants were subject to accusations of heresy or worse, adding that “these calumnies have been repeated so often in print that they are now received as truth by the casual reader.” Sachse was especially condemnatory of New England writers, who he said had given to readers “the impression that even the present generation of Pennsylvania-Germans of certain denominations are but a single remove from the animal creation.”
Freedom from the “Demoralizing Influence of Privilege”
The Founders, however, would have found repugnant the idea of intervening to remedy such prejudices by giving groups special privileges or benefits, or by attempting to apportion their participation in society in any way. So long as the government rigorously pursued a policy of giving all free men equal protection, the onus was on the individual and his family to succeed or fail. This produced an immigrant ethic that lasted two centuries. As Linda Chavez wrote in 1994:
The history of American ethnic groups is one of overcoming disadvantage, of competing with those who were already here and proving themselves as competent as any who came before. Their fight was always to be treated the same as other Americans, never to be treated as special, certainly not to turn the temporary disadvantages they suffered into the basis for permanent entitlement.
Franklin made the message clear in writing to prospective immigrants, “with regard to encouragements for strangers from government, they are really only what are derived from good laws and liberty.” John Quincy Adams drove the point further in 1819 when he was Secretary of State, in a letter to a potential immigrant: “There is one principle which pervades all the institutions of this country, and which must always operate as an obstacle to the granting of favors to new comers. This is a land, not of privileges, but of equal rights…. Emigrants from Germany, therefore, or from elsewhere, coming here, are not to expect favors from the governments. They are to expect, if they choose to become citizens, equal rights with those of the natives of the country.”
The other side of the coin of being given no preferential treatment was that immigrants would be granted the same protection as natives, at that time an unheard-of privilege. George Washington made this protection clear in his letter to the Hebrew Congregation of Newport, Rhode Island, of August 21, 1790, in which he wrote:
It is now no more that toleration is spoken of as if it were the indulgence of one class of people that another enjoyed the exercise of their inherent natural rights, for, happily, the Government of the United States, which gives to bigotry no sanction, to persecution no assistance, requires only that they who live under its protection should demean themselves as good citizens in giving it on all occasions their effectual support.
This tradition lasted from the Founding into the modern era. In 1925, at the dedication of the cornerstone of a Jewish community center in Washington, DC, President Calvin Coolidge said:
Our country has done much for the Jews who have come here to accept its citizenship and assume their share of its responsibilities in the world. But I think the greatest thing it has done for them has been to receive them and treat them precisely as it has received and treated all others who have come to it. If our experiment in free institutions has proved anything, it is that the greatest privilege that can be conferred upon people in the mass is to free them from the demoralizing influence of privilege enjoyed by the few.
E Pluribus Unum—One Culture, Accessible to All
The Founders were aware that the diversity of the population posed a challenge for molding a new nation. Famous among them in this regard was Hamilton, himself born on the Caribbean island of Nevis, who wrote:
The safety of a republic depends essentially on the energy of a common national sentiment; on a uniformity of principles and habits; on the exemption of the citizens from foreign bias, and prejudice; and on that love of country which will almost invariably be found to be closely connected with birth, education, and family.
Descent from England could not be the binding agent to hold the new nation together. The Revolution had had a transformative effect on the American colonists, severing links to a mother country they had, after all, just fought for eight long years. The Revolution “had deprived English culture of much of its claim to a natural position of superiority in America.”
The new bond would be adherence to the universal principles of equality, liberty, and limited government contained in the founding documents. These rights were so timeless that they came from the Creator; the conservation of a government that protected those rights depended on nurturing the right virtues and habits, and each succeeding generation would have responsibility for the continuation of such an arrangement.
The expectation of an emotional attachment to the unchanging principles of a nation’s Founding, and to the parchments in which they were codified, was uniquely American. “What would make the United States different was its demand that citizens give their allegiance to a set of political principles.” Enumerating the principles into documents was also distinctively American. The Mayflower Compact that the Pilgrims had signed with non-dissenters, or “Strangers,” aboard the ship to agree on “a Civil Body Politic” foreshadowed this tendency. This habit of writing principles into documents and then abiding by them stood in stark contrast to the governance of England, which to this day lacks a written constitution. As the British writer G. K. Chesterton put it, “America is the only nation in the world that is founded on a creed. That creed is set forth with dogmatic and even theological lucidity in the Declaration of Independence.”
Allegiance to the creed and its texts was to be based on deep loyalty, not mere practicality. “Rational adherence must be fortified with emotional attachment,” writes the Hudson Institute’s John Fonte. As James Madison wrote in Federalist 49, the new republic needed something more instinctive and primitive. The United States of America, its principles and its documents, deserved “that veneration which time bestows on everything, and without which perhaps the wisest and freest governments would not possess the requisite stability.”
The ideological component of the emerging nation would also meld disparate ethnicities into one people, binding different groups into a politically monocultural America. Immigrants, said Hamilton, would be drawn gradually into civil society “to enable aliens to get rid of foreign and acquire American attachments; to learn the principles and imbibe the spirit of our government; and to admit of a probability, at least, of their feeling a real interest in our affairs.”
Or as Washington wrote to Adams, the hope was that “by an intermixture with our people, they, or their descendants, get assimilated to our customs, measures, and laws: in a word, soon become one people.”
E Pluribus Unum, the official motto in the Great Seal of the United States, demonstrated this urge for unity. In Latin it means “Out of Many, One,” and it has been through the centuries a reminder of the imperative of uniting different groups. Though the seal today has at its centerpiece the American bald eagle, the Continental Congress in 1776 considered displaying the heraldic symbols of England, Scotland, Ireland, France, Germany, and Holland—the main constituent groups at the time of the signing. This is a clear sign that to them E Pluribus Unum meant one nation formed out of many ethnic groups. Among all nations of the world at the time, then, the newly formed United States was to be the only one not instituted along hereditary ethnicity or as the result of strategic dynastic marriages.
This made America exceptional. From the beginning, Americans felt that these principles made them the “City upon a Hill” that future Massachusetts Bay Governor John Winthrop had promised Puritans aboard the Arabella in 1630—though they now felt this for political, not religious, reasons. This was a proposition that, until very recently, American leaders confidently asserted.
As John Quincy Adams put it:
That feeling of superiority over other nations which you have noticed, and which has been so offensive to other strangers, who have visited these shores, arises from the consciousness of every individual that, as a member of society, no man in the country is above him; and, exulting in this sentiment, he looks down upon those nations where the mass of the people feel themselves the inferiors of privileged classes, and where men are high or low, according to the accidents of their birth.
In turn, those strangers from other nations who came searching for economic opportunities, individual liberty, or political freedoms renewed America’s commitment to all three as long as they subscribed to the virtues that had made the republic. Franklin spoke to this in a letter to Samuel Cooper, written in 1777, in the midst of the war:
Those who live under arbitrary Power do never the less approve of Liberty, and wish for it. They almost despair of recovering it in Europe; they read the Translations of our separate Colony Constitutions with Rapture, and there are such Numbers every where who talk of Removing to America with their Families and Fortunes as soon as Peace and our Independence shall be established, that tis generally believed we shall have a prodigious Addition of Strength, Wealth and Arts.
Thomas Paine, in the influential 1776 pamphlet Common Sense, likewise observed: “This new World hath been the asylum for the persecuted lovers of civil and religious liberty.” George Washington repeated the sentiment 12 years later almost verbatim when he wrote, “I had always hoped that this land might become a safe and agreeable Asylum to the virtuous and persecuted part of mankind, to whatever nation they might belong.” He also listed the traits Americans needed: The new country would welcome those who were “determined to be sober, industrious and virtuous members of society.” Franklin, for himself, listed 13 characteristically American virtues in his autobiography, ranging from temperance and frugality to industry and moderation.
The American Character, Forged in the Classroom
These personal and civil qualities that the Founders deemed essential to maintain the republic could not be expected to bloom unaided. Because civic virtues and civic love needed to be sown into citizens from early on, education became a requisite of constitutional government. James Madison wrote that “[k]nowledge will forever govern ignorance, and a people who mean to be their own governors must arm themselves with the power which knowledge gives.”
Jefferson is one of the Founders most associated with education. He was so concerned with it that he established the University of Virginia and then chose to include this achievement in his epitaph, alongside his authorship of the Declaration of Independence, while leaving out that he had twice been elected President of the United States. He wrote to John Adams in 1813 that education should be “the keystone in the arch of our government.” Earlier on, in a letter to George Washington in 1786, he had written,
It is an axiom in my mind that our liberty can never be safe but in the hands of the people themselves, and that too of the people with a certain degree of instruction.
Because a constitutional republic could degenerate into tyranny, the schoolhouse and the university would incubate republican principles. In a bill he introduced to the Virginia legislature in 1778 and again in 1780, Jefferson wrote that it was beneficial “for promoting the public happiness that those persons, whom nature hath endowed with genius and virtue, should be rendered by liberal education worthy to receive, and able to guard the sacred deposit of the rights and liberties of their fellow citizens.”
“Common Schools” arose in the different states in the first half of the 19th century to instill precisely the moral and civic principles that a constitutional republic required. Their funding was “largely market-based until the Civil War.” They had, as Mark Edward DeForrest put it,
a large role in assimilating and educating the offspring of the immigrants then moving into the United States from Europe. The schools did not simply educate students in the basics of the English language or the Three Rs. Rather, the schools were actively involved in promoting the values and beliefs that were considered part and parcel of the American experience.
The common school “and the vision of American life that it embodied came to be vested with a religious seriousness and exaltation. It became the core institution of American society,” wrote education historian Charles Glenn. The schools were “perhaps the most noble and practical reform experiment of the first half of the nineteenth century.”
By the time the French aristocrat Alexis de Tocqueville was making the rounds of the United States in the 1830s and jotting down his observations of the country, education was already deeply intertwined with the emerging republican and egalitarian character of the new country. Even the most rough-hewn pioneer penetrated the backwoods “with the Bible, an axe, and a file of newspapers.” This did not happen in a vacuum, wrote de Tocqueville, adding: “It cannot be doubted that, in the United States, the instruction of the people powerfully contributes to the support of a democratic republic; and such must always be the case, I believe, where instruction which awakens the understanding is not separated from moral education which amends the heart.”
The blueprint for assimilation—for Americanization—laid down by the Founders and the succeeding generation was in place by the second quarter of the 19th century: reverence for the founding documents and the principles they contained; the nurturing of self-reliant virtues; and the promotion of these principles in American schools. The Founders did not get everything right; the Naturalization Act of 1790 limited naturalization to free white persons, excluding non-whites and indentured servants. Still, the Encyclopedia of U.S. Political History deems the act “easily the most generous and open in American History.”
This blueprint quickly started yielding results. The Scots-Irish who entered the backwoods with the Bible, an axe, and a file of newspapers—and who had earlier so repelled the citizens of New England and Pennsylvania—produced their first President in 1828. Andrew Jackson, the first non-English President and the first from the wild frontier, personified the belligerent traits then associated with his ancestors. His successor Martin Van Buren, in turn, has been called “the first ethnic President” because he was the first descended not from the peoples of the British Isles, but from New York’s Dutch settlers. Van Buren remains to this day the only President to have spoken with a foreign accent.
The New Immigrants
The importance of this assimilationist blueprint became clear as immigration quickly rose in the mid-decades of the 19th century and then further accelerated at the turn of the century. The first surge began in the 1840s, with the arrival of native Irish Catholics fleeing the Potato Famine, as well as a new wave of Germans who came this time not for religious but for economic reasons.
The new immigrants flocked to the cities of a rapidly urbanizing America—especially New York, “where two-thirds of all immigrants landed between 1820 and 1860”—and many, especially the Irish, stayed. Corrupt politicians and party machines like New York’s Tammany Hall began to sever the link between citizenship and patriotism by fraudulently naturalizing immigrants by the thousands in order to secure their votes. Between 1856 and 1867, “naturalizations during years with a presidential race increased by as much as 462 percent over the previous year.” Party machines did something else, too: “they openly fostered group reaction rather than individual reflection.” This phenomenon was not limited to the cities. The territories and states sprouting west of the Appalachians also competed with each other to attract new immigrants, with some sweetening the deal by temporarily allowing aliens to vote.
Unsurprisingly, a significant element of those with a prior stake in the franchise reacted negatively—a phenomenon often repeated from this point on in American history. The 1850s saw a rapid rise in a secretive anti-immigrant party, the Know-Nothings, which at one point reached one million members and 10,000 lodges.
A politician born in a log cabin had not forgotten the blueprint for assimilation, however. The idea that attachment to the founding documents, their principles, and the American way of life was the bond that united the nation found a champion in a gangly Senate candidate from Illinois. Abraham Lincoln is best known for winning the Civil War and emancipating the slaves, but he also confronted the nativist Know-Nothings, and he did so by re-asserting the ideological component of assimilation and citizenship. There is no better exposition of this view than a campaign speech he gave, to long and sustained applause, to a crowd in Chicago in 1858. In that speech, he linked immigrants to the Founders:
If they look back through this history to trace their connection with those days by blood, they find they have none, they cannot carry themselves back into that glorious epoch and make themselves feel that they are part of us, but when they look through that old Declaration of Independence they find that those old men say that “We hold these truths to be self-evident, that all men are created equal,” and then they feel that that moral sentiment taught in that day evidences their relation to those men, that it is the father of all moral principle in them, and that they have a right to claim it as though they were blood of the blood, and flesh of the flesh of the men who wrote that Declaration, and so they are. That is the electric cord in that Declaration that links the hearts of patriotic and liberty-loving men together, that will link those patriotic hearts as long as the love of freedom exists in the minds of men throughout the world.
The 1864 Republican Party platform with which Lincoln won re-election echoed again “the asylum of the persecuted” line from Paine and Washington and the idea that immigrants renewed America’s vigor.
The Germans and Scandinavians of the late 1800s famously clung to their language around the kitchen table, in church, and in some schools and newspapers. But the fact that patriotic assimilation was expected is clear from this passage in a speech given in the 1880s by Wisconsin congressman Richard Guenther, who was German-born: “After passing through the crucible of naturalization we are no longer Germans; we are Americans…. America first, last, and all the time. America against Germany; America against the world; America right or wrong; Always America.”
Immigration from Ireland, Germany, and Scandinavia waxed and waned in the post–Civil War years until, in the 1890s, the country experienced a rise in immigration from unlikely sources: Italians, Slavs of many different nationalities, Jews, Hungarians, Greeks, Armenians, Lebanese, and others from Southern and Eastern Europe and points beyond began to arrive in great numbers for the first time in America’s history. These immigrants, known as the Ellis Islanders because about 80 percent of them entered through the immigrant inspection station that opened at New York’s Ellis Island in 1892, encountered the same opposition as previous groups, or worse.
“They stood out together as a new and different kind of immigrant who, in the view of many Americans, posed a threat to the basic American character of the nation,” wrote Michael Barone. “They came, after all, not only with little money but with little experience in republican traditions or democratic politics, and to most Americans they seemed to be a different race, or races.”
Immigrants from Asia fared even worse. Under pressure from labor unions and Western interests, Congress in 1882 passed the Chinese Exclusion Act, which barred entry to people from China. Arrangements were later made with Tokyo to exclude Japanese immigrants as well.
Some of the industrial Northeastern cities were dominated by immigrants or their children. Many of them—especially the Italians, the Jews, the Poles, and the Slovaks—had high rates of illiteracy. Fairly or unfairly, too, labor strikes were associated with new immigrants. Americans “were quick to interpret labor unrest, particularly in sectors of the economy manned mainly by foreign workers, as an importation of the new European radicalism.” Crime rose in many of these communities where immigrants clustered. Italian immigrants “accounted for a significant portion of the national rise in crime during the Ellis Island years; homicides were five to ten times more frequent among them than among other whites in America,” despite the fact that Italian immigrants “had lower crime rates than immigrants generally.”
Unsurprisingly, nativists rose again, this time with greater intensity, deeming the new immigrants as “unassimilable” as their Irish and German predecessors had been thought to be. After a particularly violent labor strike in Lawrence, Massachusetts, in which many immigrants were involved, the American sociologist John Graham Brooks asked, “What have we done that a pack of ignorant foreigners should hold us by the throat?” On several occasions, Southern mobs lynched Italians.
This time a new strand emerged, also opposed to assimilation but for a different reason from the nativists. This new group fought assimilation on the grounds that America was not good enough for foreign immigrants to want to assimilate to her ways. According to this new strain of thought, then called “transnationalism,” America should instead become a multicultural federation of subcultures.
The transnationalists were led by intellectuals—such as Randolph Bourne, Horace Kallen, and later Herbert Marcuse—whose writings dripped with disdain for America, its culture, and the experiment of the Founders. The attempt was to destroy what the country had become and replace it with something completely and quite literally alien. In one of the foundational texts of the time, a 1916 essay for the Atlantic Monthly entitled “Trans-National America,” Bourne disparages the original settlers for never “succeeding in transforming that colony into a real nation, with a tenacious, richly woven fabric of native culture.”
Bourne was of course writing of America in the early part of the 20th century, at the very beginning of what is now called the American Century. He was also referring to a country attracting millions, whose exercise in self-government had fired up the global imagination, when he wrote: “America has yet no impelling integrating force. It makes too easily for the detritus of cultures. In our loose, free country, no constraining national purpose, no tenacious folk tradition and folk style hold the people to a line.”
“It is apparently our lot to be a federation of cultures. This we have been for half a century and the war [World War I] has made it ever more evident that this is what we are destined to remain,” he added.
The Return of the Assimilationists
It is a testament to the reverence that America has commanded generation after generation that assimilationists have always risen to defy those who underestimated this country’s transformative, even redeeming, qualities. In the early 20th century, men from both parties—including Presidents Theodore Roosevelt and Woodrow Wilson, as well as Supreme Court Justice Louis Brandeis—rose up to respond to both the nativists and the transnationalists not just in their official actions, but by using their bully pulpits in speech after speech. By doing this, they were echoing what the Founders had done in the 1700s.
Justice Brandeis, himself the son of Jewish immigrants, spoke eloquently and at length about the question “What is Americanization?” to a Boston audience on July 4, 1915:
It manifests itself, in a superficial way, when the immigrant adopts the clothes, the manners and the customs generally prevailing here. Far more important is the manifestation presented when he substitutes for his mother tongue, the English language as the common medium of speech. But the adoption of our language, manners and customs is only a small part of the process. To become Americanized, the change wrought must be fundamental. However great his outward conformity, the immigrant is not Americanized unless his interests and affections have become deeply rooted here. And we properly demand of the immigrant even more than this. He must be brought into complete harmony with our ideals and aspirations and cooperate with us for their attainment. Only when this has been done, will he possess the national consciousness of an American.
In a letter in 1919, Roosevelt echoed Washington’s position relating assimilation to equal treatment:
In the first place we should insist that if the immigrant who comes here in good faith becomes an American and assimilates himself to us, he shall be treated on an exact equality with everyone else, for it is an outrage to discriminate against any such man because of creed, or birthplace, or origin. But this is predicated upon the man’s becoming in very fact an American, and nothing but an American.
In an apparent response to the transnationalists, Wilson in 1915 made clear that America was not going to become a federation of nations when he said, “America does not consist of groups. A man who thinks of himself as belonging to a particular national group in America has not yet become an American.”
All in all, about 35 million immigrants entered the United States between 1840 and 1920, about 25 million of whom came after 1880. The Immigration Act of 1924 severely limited immigration for decades, as a result of lobbying by restrictionists. But the assimilationists had unquestionably won the intellectual and cultural arguments about how to approach the millions of immigrants and the even larger number of their children who lived in the country at the time. Schools, companies, and civil society groups took up the challenge with numerous and unabashed assimilationist programs, classes, and other endeavors. In New York, where the vast majority of immigrants settled, the state legislature in 1898 passed a law “to encourage patriotic exercises” in “the schoolhouses of the state.” The state superintendent of public schools compiled a “Manual of Patriotism” replete with chapters on the flag, patriotic songs, patriotism, and the need for virtues such as hard work.
The immigrants and their descendants responded in kind—as all previous immigrants had since the 1600s—by assimilating to the habits and virtues of their new country and by patriotically giving the nation their loyalty, without giving up their familial customs, kitchen-table language, or love for their ancestral land. In the case of the Ellis Islanders, they became “the Greatest Generation”—the Americans who came through when their adoptive country faced existential tests, defeating the Nazis and Communism.
The New Transnationalism
The history above is the story of ethnic groups coming to America, seeking no entitlement, “overcoming disadvantage…and proving themselves as competent as those who came before.” It is also a story of immigrants assimilating to their new nation by developing an emotional attachment to the principles, habits, and characteristic virtues of the American way of life. This produced a free and self-ruling republic with a unique culture and national purpose. For the past 40 years, however, America’s new cultural and political elites have chosen to spin an opposing narrative. They have arrayed all the forces at their disposal—governmental, educational, corporate, and cultural—to dissimilate Americans into different groups, the very practice against which Wilson had thundered.
The catch-all term for this new philosophy is multiculturalism. Often portrayed by its promoters as nothing more than appreciation for other cultures—something no reasonable person could be against—multiculturalism in reality attempts to revive transnationalism as the organizing principle for a country of immigrants. While assimilation unites the country around affection for a set of principles, habits, and shared cultural experiences, multiculturalism is an attempt to make ethnic differences permanent by rewarding separate identities and group attachment with purported short-term benefits. It deters national unity by requiring Americans to reduce their complex heritage and national identity to a checkbox on a form.
The origin of our present drift toward Balkanization was benign and well-intentioned. The Civil Rights struggle of the 1960s was a gigantic step toward giving African Americans the equal protection under law that emancipation had failed to achieve a century earlier. Naturally, it led to a national conversation on remedies for the injustices suffered by black Americans. The main remedy proposed was “affirmative action.”
Originally, affirmative action mainly involved a necessary push for “race-blind” employment practices. In 1961 President Kennedy had used the term in Executive Order 10925, which required government contractors to “take affirmative action to ensure that applicants are employed, and that employees are treated during employment, without regard to their race, creed, color or national origin” (emphasis added). However, affirmative action soon transformed itself into the opposite—the enforcement of race-conscious policies in employment, school admissions, and government contracting. This, in turn, quickly metastasized into the idea that only members of a specific ethnic group could speak for other members of that group or represent them in elected office.
Throughout its history, America has always incorporated people from a broad range of strong ethnic identities. As individuals “assimilated to our customs, measures and laws” (in Washington’s words), they began to identify themselves as simply “American.” In contrast, the new groups artificially engineered by the government in the 1970s shared two important characteristics. They were both national in character—no longer constrained to provincial Pennsylvania or the tenements of Lower Manhattan, but extending to the whole United States—and at the same time supranational, five monolithic categories encompassing different nationalities that shared little in common beyond a patina of similarity.
This new arrangement, dubbed by the historian David A. Hollinger “the ethno-racial pentagon,” divided the country into white, black, brown, yellow, and red—that is, into Caucasians, African Americans, Hispanics, Asians, and Native Americans. Its proponents at the time celebrated this as the achievement of a “rainbow nation,” but seeing the United States—a country so devoted to the proposition that all men are created equal that it fought a civil war over it—divided along such a color spectrum should make for difficult reading.
In this division, “whites” were the varied Americans whose ancestors came from Europe over the centuries: the English, Scots, Irish, Poles, Germans, Italians, Slovaks, and Portuguese, who were once themselves thought of as belonging to different races. “Blacks” were Americans with origins in Africa, whether the descendants of African slaves brought to the United States, those who hail from the Caribbean, or those who emigrated of their own volition from Africa. “Brown” referred to Americans who originated in the former Iberian colonies of the New World, whether they were descendants of Europeans, of native peoples, or of Africans. “Yellow” denoted Americans with origins in Asia, whether Japan, Pakistan, the Philippines, China, India, or Cambodia. “Red” lumped all Native Americans, those immensely diverse clans and tribes with origins in North America before European colonization, into one monolithic group. The very fact that some of these color-based labels have since been jettisoned at the preference of the groups themselves should itself raise alarm that the artificial divisions they represent still drive so much of social policy in America.
Next, policymakers divided the groups into “minorities” (all non-white groups) as against “the majority” (those identified as “white”), then assigned to all “minorities” the same preferences as African Americans—in accordance with the Marxist narrative that history is nothing but a stage for the struggle between the “oppressed” and the “dominant” group, or the “privileged” versus the “marginalized.” John Skrentny pinpointed this mutation in the national thinking when he wrote:
It was through affirmative action that policy makers carved out and gave official sanction to a new category of American: the minorities. Without much thought given to what they were doing, they created and legitimized for civil society a new discourse of race, group difference, and rights. This new discourse mirrored racist talk and ideas by reinforcing the racial difference of certain ethnic groups, most incongruously Latinos. In this discourse race was real and racial categories discrete and unproblematic. By dividing the world into “whites” and “minorities” (or later “people of color”) it sometimes obscured great differences among minority groups and among constituent groups within the pan-ethnic categories, so that Cubans and Mexicans officially became Latino, Japanese and Filipinos became Asian, and Italians, Poles, and Jews joined WASPs as white. Most profoundly, the minority rights revolution turned group victimhood into a basis of a positive national policy.
Several laws, administrative rules, and judicial decisions in the 1970s identified “minorities” as those groups with a history of disadvantage in the United States. For example, Public Law 94–311, passed by Congress in 1976, called for the Census Bureau to “implement an affirmative action program…for the employment of personnel of Spanish origin or descent,” because “a large number of Americans of Spanish origin or descent suffer from racial, social, economic and political discrimination and are denied the basic opportunities they deserve as American citizens and which should enable them to begin to lift themselves out of the poverty they now endure.”
The following year Congress passed Representative Parren Mitchell’s (D–MD) amendment to Public Law 95–28 setting aside 10 percent of government contracts for businesses owned by “Negroes, Spanish-speaking [sic], Orientals, Indians, Eskimos and Aleuts.”
Eventually women and non-heterosexuals were added to the groups protected by the government because “having experienced the same kind of systematic exclusion from the economy as the various minorities…they are considered as having ‘minority status.’” Whites were designated a group, too, but government literature made abundantly clear that the remedies were in place to correct for discrimination. In a take on Orwell, the Equal Employment Opportunity Commission (EEOC) states: “Every U.S. citizen is a member of some protected class, and is entitled to the benefits of EEO law. However, the EEO laws were passed to correct a history of unfavorable treatment of women and minority group members.”
The New Wave of Immigrants
This unheard-of division of America into official groups was taking place just as the country was about to absorb the biggest wave of immigrants since the Ellis Islanders of 1890–1924. Changes in immigration law in the mid-1960s ended the 1924 restrictionist policies that had favored Northern Europeans and led to the next surge in immigration, this time largely from Latin America and Asia. An estimated 33.7 million legal immigrants entered the United States between 1970 and 2012, according to the Department of Homeland Security. Of that total, 26.2 million, or 78 percent, came from Asia, Latin America, or the Caribbean (11.3 million from Asia, 14.8 million from Latin America and the Caribbean). Of these, 6.5 million, or about one-fifth, came from Mexico alone. These figures do not account for the estimated 11 million illegal residents of the United States, of whom about five to six million may be of Mexican birth.
Some 100,000 people of Mexican descent lived in the American West and Southwest at the end of the 1840s, after the end of the Mexican War and Texas’s accession to the Union. These were not immigrants but settlers obtained by territorial acquisition, just like the Dutch of New York and the French of Louisiana before them. However, they only accounted for around 0.4 percent of the U.S. population in 1850. The vast majority of today’s 54 million Hispanic U.S. residents, accounting for 17 percent of the population, arrived as immigrants after 1965.
Just as these new immigrants came in, they discovered that they were to be ghettoized into one of the subgroups created by the elites. Since they were labeled as “minorities,” any possibility of their assimilating into the larger society was conceptually precluded from the start. Government and cultural institutions now sought to instill into the new immigrants that—contrary to the traditional American credo—they could never overcome the circumstances of their birth but would henceforth have to conform to the ascribed norms of their assigned “group.” This societal and mental estrangement was the price to be paid for certain forms of preferential treatment. Left unsaid, of course, was that—as John Miller put it— “group rights are underwritten by failure” because they are contingent on the existence of prejudice, discrimination, and the inability of individuals to overcome such disadvantages without government support.
The difference between such treatment and that with which America met previous generations of immigrants could not have been more stark. As Nathan Glazer wrote in 1988 in Affirmative Discrimination: Ethnic Inequality and Public Policy:
We had seen many groups become part of the United States through immigration, and we had seen each in turn overcoming some degree of discrimination to become integrated into American society. This process did not seem to need the active involvement of government, determining the proper degree of participation of each group in employment and education.
The word “assimilation”—first used by Washington, then repeated confidently by his successors as the organizing principle of a country of immigrants—is now considered by the politically correct to be an ugly term, with connotations of stultifying and coerced homogenization or, at worst, “cultural ethnic cleansing.” President Obama’s New Task Force on New Americans does not mention “assimilation” once in its 70-page strategy paper. On the contrary, it calls on “welcoming communities”—that is, existing American communities—to adapt to the ways of new immigrants, who are encouraged only to naturalize and register to vote.
It is interesting to note that, years earlier, just as the shift from assimilation to the “minorities vs. majority” worldview was getting underway, both Dwight Eisenhower and Ronald Reagan had come to the opposite conclusion. In July 1966, Eisenhower suggested that Reagan, then a gubernatorial candidate in California, run hard against the new minority concept at the “first major press conference” he held. Ike suggested that Reagan say:
In this campaign I’ve been presenting to the public some of the things I want to do for California—meaning for all the people of our State. I do not exclude any citizen from my concern and I make no distinctions among them on such invalid bases as color or creed.
Ike further advised Reagan,
Something conveying this meaning might well be slipped into every talk—such as “There are no ‘minority’ groups so far as I’m concerned. We are all Americans.”
Reagan’s response, too, was incredibly prescient:
I am in complete agreement about dropping the hyphen that presently divides us into minority groups. I’m convinced this “hyphenating” was done by our opponents to create voting blocs for political expediency.
The present approach to immigrants carries several deeply troubling implications. First, preferential treatment of any single group by the federal bureaucracy and our cultural institutions not only represents a betrayal of the principles espoused by every American leader from Washington through Reagan but requires an actual rejection of them. The Founders and their descendants belong to the “dominant group”—the dead white men—whose ways America is now being called on to transcend. This attitude has not only contaminated schools and universities, but it also now threatens the cherished American principle of equal treatment under the law.
The role that American schools played in teaching civic principles and reverence for the nation and its founding documents—including how those principles have helped leaders repair the nation’s faults—has been reversed. The examples of how the American educational system now teaches skepticism at best and derision at worst are too numerous to name. It suffices to say that Howard Zinn’s A People’s History of the United States—which slurs the Founders as men who sought to “take over land, profits, and political power” and thus create “a consensus of popular support for the rule of a new, privileged leadership”—continues to be the No. 1 political-science bestseller on Amazon 35 years after it was first published. In June of 2015, the University of California advised professors not to use such “microaggressive” phrases as “America is the land of opportunity” or “America is a melting pot.” Similarly, the revised Advanced Placement History curriculum framework promoted by the College Board since the autumn of 2014 emphasizes the “marginalized vs. privileged” narrative and makes no mention of Benjamin Franklin. This approach reverses that of New York’s 1904 “Manual of Patriotism,” which helped to shape the Greatest Generation.
In law, dissimilation has led to the new and egregious phenomenon of “cultural defense.” A member of an immigrant group who commits a crime under American law can now construct a defense by demonstrating that the criminal action is prescribed within the immigrant’s culture of origin. For example, when an Egyptian immigrant father recently killed his 17-year-old and 18-year-old daughters for dating non-Muslim men, even the FBI dubbed the case one of “honor killing.”
The new organizing principle includes racialist thinking that not only hearkens back to one of the ugliest flaws in U.S. history but even imports new ones. As John Miller put it, “[R]ace and ethnicity may be salient features of our political life, but they certainly should not be hardwired into our political institutions through a homeland policy that recalls South African apartheid.”
This is no hyperbole. Public Law 95–28, for example, was the first congressional act since 1854 to designate beneficiaries by race. As for Public Law 94–311, Rubén Rumbaut of the University of California at Irvine wrote that it was nothing less than “the first and only law in U.S. history that defines a specific ethnic group and mandates the collection, analysis, and publication of data for that group.”
As the Hudson Institute’s John Fonte has put it:
Multiculturalism attacks three pillars of the liberal democratic nation-state: (1) liberalism, by putting ascribed group rights over individual rights; (2) a strong and positive national identity, by emphasizing subnational group consciousness over national patriotism; and (3) majority-rule democracy.
The Need for a Debate
The radical reordering of how America absorbs newcomers came not as a response to a demand from below but as a top-down effort led by elites. PayPal founder Peter Thiel and Internet entrepreneur David O. Sacks, among others, call multiculturalism a “word game” that hides a “comprehensive and detailed worldview” and that is used by American leftists to introduce radical policy ideas when “an honest discussion would not lead to results that fit the desired agenda.” Whether this represents an unintentional phenomenon or a deliberate conspiracy, it is hard to deny that to the degree multiculturalism succeeds, it pushes America leftward.
It is hard to argue, too, that average Americans asked for such a rearrangement. The multicultural revolution is premised on the need for affirmative action to remedy “a history of unfavorable treatment,” in the language of the EEOC. But the vast majority of Asian and Hispanic Americans for whom this ostensible benefit is intended are post-1965 immigrants who sacrificed much to get to U.S. shores, or their children or grandchildren. In other words, they not only lacked any “history of unfavorable treatment” in this country, but they also willingly accepted the hardships that immigration imposes, judging them a lesser evil compared to their problems at home.
There has been real discrimination against Hispanics in the West and Southwest, but it cannot compare to the life of African Americans under slavery or subsequently under Jim Crow laws in the South. Ezequiel Cabeza de Baca was the second elected governor of New Mexico in 1917—certainly a position of privilege—and he was preceded by other Hispanic leaders who governed New Mexico as a territory in the 19th century. Likewise, Cubans who immigrated to Tampa Bay, Key West, and Louisiana in the 19th century were subjected to the same treatment as other immigrants—and not, of course, to Jim Crow laws, unless they were Afro-Cuban—and were generally able to integrate well into existing American society at the time.
Mexican Americans, even those who suffered discrimination, showed strong aversion to being classified as disadvantaged. The UCLA sociologist Leo Grebler wrote in a massive 1970 study of Mexican Americans: “Indeed, merely calling Mexican-Americans a ‘minority’ and implying that the population is the victim of prejudice and discrimination has caused irritation among many.” Today, even though the terms “Hispanic” and “Latino” have been used for decades, many people designated as such still refuse to use these words when referring to themselves.
Similarly, though a larger portion of Asian Americans today are descended from those who suffered discriminatory laws in Western states in the first half of the 20th century—and even forced internment during World War II—it is not clear that affirmative action does anything to redress a history of past wrongdoing toward this population. If anything, Asians increasingly complain that affirmative action quotas harm them by denying them the number of admissions to colleges and universities that their academic accomplishments should suggest.
Rank-and-file Americans of all ethnicities were indeed not consulted about such a fundamental reordering of the way the nation has absorbed immigrants. As John Skrentny described it, “[I]t is striking that the civil-rights administrators—without any public debate, data or legal basis—decided on an ethnoracial standard for victimhood and discrimination that officially divided the country into oppressed (blacks, Latinos, Native American, Asian Americans) and oppressors (all white non-Latinos.)” Today, many Americans are still unaware of the history of this unilateral reorganization. Most think that the majority-versus-minority discourse has been around forever, not knowing that it was only introduced within the past 50 years.
America owes itself the opportunity to debate this issue. It is time to stop and ask what bureaucrats, politicians, and academics have done to the American ideals of equal treatment and equal opportunity for all. As Daniel Patrick Moynihan and Nathan Glazer put it in their 1970 introduction to their landmark 1963 book Beyond the Melting Pot, grouping Americans into “fantastic categories” that were each assigned a color on the spectrum is “biologically and humanly monstrous.”
Such officially enforced compartmentalization of the nation is deleterious first of all for the members of the “minorities” themselves. In a masterful and prescient study of assimilation, the California State University sociologist Milton M. Gordon predicted it would “prevent the formation of those bonds of intimacy and friendship which bind human beings together in the most meaningful moments of life and serve as a guard-wall against the formation of disruptive stereotypes.”
If perpetuating racialist compartmentalization is harmful to members of historically disadvantaged groups, it is still worse for the country as a whole. If the continuation of self-government and liberty depends on the willingness of citizens to sacrifice for a greater good—and that is in turn contingent on a patriotic feeling, which must rely on a sound educational grounding in civic virtues—then the present course of the nation is suicidal. Segregating Americans into pan-ethnoracial groups will lead to a “loss of popular concern with the common good.” In a crisis, such divisiveness could prove catastrophic.
A cursory glance at the many vicissitudes of societies such as Yugoslavia and Iraq, where group loyalty trumped national purpose, should make Americans apprehensive about importing group ideology as an organizing principle. Other Western societies such as Britain are now deciding to leave group ideology behind. In a speech after winning re-election in May, Prime Minister David Cameron announced that the U.K. would move beyond simply standing “neutral between different values,” adding:
That’s helped foster a narrative of extremism and grievance. This Government will conclusively turn the page on this failed approach. As the party of one nation, we will govern as one nation, and bring our country together. That means actively promoting certain values. Freedom of speech. Freedom of worship. Democracy. The rule of law. Equal rights regardless of race, gender or sexuality.
A presidential campaign is an ideal time to have this conversation. A true debate on multiculturalism and assimilation in America can take shape along the following lines:
- Re-evaluate the practice of segregating American residents by group. Americans—all Americans, not just those who believe they know better—need to ask why, or whether, it is necessary to segregate American residents in the constitutionally mandated decennial Census or any of the other surveys the U.S. government conducts in-between. If it is demographical information that the country and its academics need, then the government could ask residents to check boxes according to nationalities of origin. But there is no need to deepen cultural cleavages by encouraging people to identify with one of five bureaucratically created pan-national groups. The U.S. government should stop bombarding Americans who originated in Latin America and Asia with the message that they are victims.
- Debate how the Founding history of the United States should be taught in schools and universities. Likewise, Americans need a robust debate on the indoctrination of young minds by schools and universities. No society can survive a sustained denigration of its history and principles. From the Common Schools to the public schools, past generations used the classroom to give newcomers and natives alike a strong grounding in America’s history, including such stains as slavery but also such high points as the transformative effects of the Founding on Western history. Twisting the Founding into a Marxist narrative of a “privileged class” conspiring to rapaciously grab “land, profits, and political power” not only teaches students an outrageous falsehood, but it is also a form of national suicide.
- Allow school-choice reform. One way that those designated as minorities today are indeed victims who risk being relegated to a permanent subordinate class is by being left behind educationally. This is especially true for those designated as Hispanics, whom the National Assessment of Educational Progress shows to consistently lag behind. Not coincidentally, Hispanics are overrepresented in public schools, especially in urban areas where public schools most struggle and underperform. Unfortunately, efforts to fix the system are often stymied by politicians’ dependence on support from teachers unions that oppose true reforms. Candidates should ask why the country should continue to let teachers unions block meaningful school-choice reform, thus relegating immigrants and their children to the status of a permanent subordinate class. The outpouring of popular opposition to the Common Core curriculum is a hopeful sign that Americans may be taking school issues seriously.
- Strengthen citizenship requirements. Another area that needs debate is strengthening citizenship requirements. Some 54 percent of the 41 million foreign-born U.S. residents in 2012 were not citizens (around 22 million), so making civic and patriotic instruction part of the naturalization process would go a long way toward making naturalization a truly transformative experience and would help assimilate a significant portion of immigrant communities. Any immigration reform that a future President signs should include reinforcing citizenship requirements so as to start our new compatriots with a deep and thorough understanding of the elements that make America exceptional.
- Government policies should not harm family formation and church participation. Not all assimilationist efforts are within the direct reach of government. The vast scholarship on previous immigrant waves shows that strong and intact families strengthen patriotic assimilation, and religiously active families even more so. Out-of-wedlock births, for example, are clearly linked to educational underperformance and to social pathologies and dysfunctions that are difficult to reverse later in life. The American family, as Kay Hymowitz states, always had the mission of shaping “children into citizens in a democratic polity.” As for church attendance, it is of course a matter of personal salvation, but there are clear temporal benefits. Vast amounts of research show that children who attend church regularly complete more years of schooling, which in turn helps with economic mobility and assimilation. Both research and past experience also demonstrate that active church participation can help “make Americans” by teaching such civic values as volunteerism and other vital aspects of our culture.
This debate will not be easy. It will be a brave presidential candidate of either party that takes up both a critique of multiculturalism and a conversation on solutions to strengthen assimilation. Assimilation is now derided in the academy and the media as a coercion of immigrants into stultifying conformity. This phenomenon aside, the vast majority of Americans support Americanization. In a recent Harris poll, 83 percent of respondents said that Americans “share a unique national identity based on a standard set of beliefs, values and culture,” and 90 percent believe that Americanization “is important in order for immigrants to successfully fulfill their duties as American citizens.” Nine out of 10 Hispanics agree with that view.
These Americans know that even if assimilation can be at worst “a brutal but necessary bargain,” as several writers have described it, it is also at best a liberating, welcoming action, a proposition only a nation like America can confidently offer those born overseas. Previous waves of immigrants have found the correct balance between keeping their traditions and adopting America’s virtues, between pride in their ancestry and love of their new country. The new wave can do so as well. Patriotic assimilation is the bond that allows America to be a nation of immigrants. Without it, America would cease to be a nation at all, becoming instead a hodgepodge of groups that could no longer meaningfully welcome immigrants into a commonly shared, characteristic way of life. Like immigrant groups themselves, America can be trusted to find a sustainable balance between honoring the unique cultures from which diverse Americans come and integrating all Americans into a unified nation, just as it always has.

Mike Gonzalez is a Senior Fellow in the Kathryn and Shelby Cullom Davis Institute for National Security and Foreign Policy at The Heritage Foundation.
Read this article to learn about the concept, nature, process and principles of delegation of authority.
Concept of Delegation of Authority:
Delegation of authority could be defined as follows:
When a superior passes on a portion of his total authority to a subordinate, to enable the latter to perform some job on behalf of the former for organizational purposes, it is known as delegation of authority.
On the basis of the above definition, we can derive the following salient features of the concept of delegation of authority:
(i) No manager can delegate his total authority to a subordinate; he can pass on only a portion of his authority. Otherwise, his own status would disappear, and that is not allowed by management theory. For example, in the case of the Indian Administration, the Prime Minister has virtually all the powers for the administration of the Indian economy.
However, certain emergency powers are reserved for the President of India, who represents the uppermost link in the management hierarchy of the nation.
Similarly, in the case of a corporate enterprise, the Board of Directors is clothed with substantial powers for the management of the company; but there are certain matters which the Board can decide only after seeking the approval of the Body of Members, the latter being the uppermost link in the management hierarchy of the company.
(ii) No manager can give to a subordinate authority which he himself does not possess. This doctrine is based on the legal maxim “nemo dat quod non habet”, which means that no one can give to others what he himself has not got.
(iii) The idea behind delegation of authority is that of representation of the superior by the subordinate; i.e. after authority has been delegated by the superior, the subordinate is expected to behave and act in the manner in which the superior himself would have behaved and acted.
(iv) Delegation of authority is made by a superior to a subordinate only for organisational purposes, and not for the fulfillment of the personal purposes of the superior. In the latter case, delegation of authority would amount to a misuse of authority by the superior and would result in gross corruption.
(v) Delegation of authority does not imply a reduction in the power of the superior. It is something like knowledge, which a teacher still retains even after imparting it to pupils.
Following are some popular definitions of delegation of authority:
(1) “Delegation is the dynamics of management; it is the process a manager follows in dividing the work assigned to him so that he performs that part which only he, because of his unique organisational placement, can perform effectively, and so that he can get others to help him with what remains.” – Louis A. Allen
(2) “Authority is delegated when enterprise discretion is vested in a subordinate by a superior.” – Koontz and O’Donnell
Nature of Delegation of Authority:
To understand the nature of delegation of authority, we can consider it from two perspectives:
(a) Delegation as the basic process for creating an organisational structure; and
(b) Delegation as the personal art of a manager.
Let us describe briefly the above two concepts of delegation of authority.
(a) Delegation as the Basic Process for Creating an Organisational Structure:
In a group enterprise, especially an enterprise of moderate to giant size, it is not possible for any one person (or a small group of persons) to perform all the activities, managerial and operational, exclusively on his own. It is, therefore, imperative that an organisational structure be created and different roles be assigned to a number of individuals.
Delegation of authority is the very basic process employed in creating such an organisational structure. In this sense, delegation of authority is based on the elementary principle of division of work.
(b) Delegation as the Personal Art of a Manager:
A manager, at any time during his managerial tenure, might feel the need to delegate authority to a subordinate when he finds himself over-burdened with work. By selecting a suitable and competent subordinate and delegating authority to him, the superior can multiply himself and perform manifold tasks.
How well a manager delegates, and how effectively he secures the best results from the subordinate, is really the personal art of the manager concerned.
Process of Delegation of Authority:
The entire process of delegation of authority entails the following steps:
(a) Determination of the results expected of the subordinate.
(b) Assignment of duties/tasks/job to the subordinate.
(c) Delegation of authority to the subordinate.
(d) Fixation of responsibility on the subordinate.
The following is a brief comment on each of the above steps:
(a) Determination of the Results Expected of the Subordinate:
While planning to delegate authority, the superior first of all has to determine what results (i.e. how much performance, and of what quality) can be expected of the subordinate who is to be delegated authority.
This step naturally requires an estimate of the apparent competence of the subordinate. Delegation of authority without due regard to the competence of the subordinate would, in all likelihood, lead to poor delegation incapable of yielding the desired results.
(b) Assignment of Duties/Tasks/Job to the Subordinate:
In view of the competence of the subordinate, duties, tasks, or a job are assigned to him. The duties assigned to the subordinate might concern some specific job, certain functions, or a certain target to be attained.
(c) Delegation of Authority to the Subordinate:
As the next logical step in the process of delegation, the necessary authority is granted or delegated to the subordinate to enable him to perform the assigned work effectively and without interruption.
(d) Fixation of Responsibility on the Subordinate:
Delegation of authority would remain not only incomplete but also ineffective unless and until the subordinate to whom authority is delegated is made answerable to the superior (who delegates authority to him) for the proper and efficient discharge of the assigned work. As per this step, i.e. the final step of delegation, responsibility is fixed on the subordinate.
Point of comment:
These four steps of the process of delegation might well be regarded as the four pillars supporting the building of delegation of authority. If any one of these pillars is removed, the building is likely to collapse.
Principles of Delegation of Authority:
In order to ensure effective delegation of authority, the observance of certain principles is necessary.
The following principles pave the way for effective delegation:
(i) Principle of non-delegation of personalized matters.
(ii) Principle of delegation by the results expected.
(iii) Principle of unity of command.
(iv) Principle of scalar chain.
(v) Principle of parity of authority and responsibility.
(vi) Principle of absolute responsibility.
Let us examine each of the above principles from the standpoint of delegation of authority, i.e. how each principle relates to the context of delegation, leads to the betterment of the process of delegation, and ensures the attainment of the results expected of delegation of authority.
(i) Principle of Non-delegation of Personalized Matters:
According to this principle, there are certain matters which must be handled by the superior alone, in his personal capacity, because he happens to be the fittest person for that purpose in view of his placement in the organization. Therefore, authority for deciding personalized matters must not be delegated to any subordinate.
(ii) Principle of Delegation by the Results Expected:
While undertaking the process of delegation, the superior must delegate the requisite authority to the subordinate, i.e. as much authority as is necessary for the subordinate in view of the results expected of him.
Delegation of more authority than required is likely to be wasted or even misused, while delegation of less authority than needed would surely interfere with the free and smooth functioning of the subordinate.
(iii) Principle of Unity of Command:
This well-appreciated principle of organisation requires, in the context of effective delegation, that at any one time one and only one superior must delegate authority to a particular subordinate.
The advantage of observing this principle while delegating authority is that the superior delegating authority is in a position to fix responsibility exactly on the subordinate and hold him accountable for explaining his performance to the superior.
(iv) Principle of Scalar Chain:
Without going into an exposition of this principle, it would suffice to say that delegation of authority must take place via the scalar chain or the management hierarchy i.e. it must be the most immediate superior who delegates authority to the most immediate subordinate.
The observance of this principle for effective delegation is necessary; because the most immediate superior, is perhaps, in the best position to understand the competence of his most immediate subordinate; the results expected of him and the problems and difficulties likely to be faced by the latter.
Moreover, a free flow of communication is also facilitated between the superior and the subordinate – to facilitate better understanding between them and ensuring good performance and representation by the subordinate.
(v) Principle of Parity of Authority and Responsibility:
According to this principle, authority and responsibility are co-extensive terms, i.e. the two go together. As such, the management slogan is: more authority, more responsibility; less authority, less responsibility; and finally, no authority, hence no responsibility.
Therefore, in the process of delegation of authority, this parity between authority and responsibility must be taken care of. No subordinate could be held responsible for showing those results for which no authority was granted to him.
Further, failing to hold the subordinate responsible for results he did not produce, despite having been given the authority to produce them, would amount to allowing a gross abuse of powers by the subordinate.
(vi) Principle of Absolute Responsibility:
According to this principle, responsibility is something fixed or absolute; no superior, in any manner or to any extent, can delegate his own responsibility to any of his subordinates. Even after delegating authority, the superior continues to be responsible to his own superior for the total authority he originally possessed.
No doubt, the superior delegating authority to a subordinate can, in turn, hold the subordinate responsible for the proper and fair use of the authority delegated to him.
Point of comment:
As a matter of fact, after having delegated authority to a subordinate, the responsibility of the superior increases, inasmuch as he is responsible to his own superior not only for his own acts but also for the performance of the subordinate to whom authority was delegated.
Organic food production has a significantly higher environmental impact than intensive agriculture, a new study suggests.
Food that is free of pesticides and fertiliser is usually considered to be the more environmentally aware and responsible choice; however, the new study, ‘Assessing the efficiency of changes in land use for mitigating climate change’, published in the scientific journal Nature, suggests the opposite. It argues that organic farming produces lower crop yields than traditional intensive agriculture because it avoids fertilisers and pesticides, and that it therefore requires a greater land area.
The study shows that organic peas, farmed in Sweden, have around a 50% bigger climate impact than conventionally farmed peas. For other foodstuffs the impact is even greater, rising to 70% for Swedish winter wheat.
As food production and agricultural expansion contribute 20-25% of global greenhouse gas emissions, any future strategy that satisfies the growing demand for food (set to increase 50% by 2050) needs to concentrate on maximising land efficiency from both production and carbon-storing perspectives.
Farming organically has a beneficial effect on the local environment, as intensive agriculture sees chemicals leach into the soil and water table, upsetting the ecosystem. However, as food production is highly globalised, lower crop yields in a local area can indirectly affect land use in other parts of the world, contributing to deforestation.
It presents a conundrum for those wishing to make a difference with their food choices. Do you prioritise local environments over global ones? If we’ve learned anything from the recent impact of climate change, it’s that there effectively is no difference between the two, except in our own minds. What we do at a local level affects climate globally and major ‘climate events’ don’t respect borders, policy or political ecosystems.
It is more important than ever that an integrated approach to future agriculture emerges to ensure we form best practices for food production as demand increases. It is widely accepted today that the single best thing any one individual can do to offset carbon emissions is to reduce the amount of red meat they consume or avoid it altogether if possible. But our menu choices will become increasingly politicised in the future and just as we demand that our meat and fish is traceable, perhaps we should be asking the same of the fruit, vegetables and the grains that we consume.
We are moving towards an integrated and trustworthy supply chain with the likes of blockchain technology, even if real-world applications remain thin on the ground. One interesting application of the immutable ledger is a proposed tokenisation of carbon credits. Just as blockchain has been used to let energy producers trade the excess energy they generate on a microgrid, an approach pioneered by LO3 Energy, the same principle could be applied to food producers and consumers. Projects to keep an eye on include IBM’s partnership with environmental fintech company Veridium Labs Ltd. to tokenise carbon credits that can be used to incentivise companies to pollute less, and Ben & Jerry’s partnership with the non-profit Poseidon, which allows consumers to offset their carbon footprint by applying a portion of retail sales to the purchase of carbon credits.
A Lucky Lunar Eclipse
Credit and Copyright: Andy Steere
Lunar eclipses are caused when the Moon passes through the Earth's shadow. Although dimmed, the eclipsed Moon may not appear completely dark. Sunlight scattered into the Earth's shadow after passing around the planet's edge and through its dusty atmosphere can make the Moon take on dramatic shades of red during totality as demonstrated in the above photo of the November 1993 lunar eclipse.
You can find this photo and other exciting images on the Astronomy Picture of the Day web site. Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. |
Teachers only teach what they were taught. Most teachers were taught traditional methods of direct instruction. Theorists like Dewey, Piaget, and Vygotsky focused on students being more responsible for their learning. Papert focused on constructionism, where students make learning happen. Montessori focused on preschool children learning through play. The Reggio Emilia approach, started in Italy after World War II, encourages preschool children to proactively participate in discovery learning while adults chronicle their progress. Bruner focused his research on discovery learning, where students are encouraged to learn on their own through action and experience.
Each student is unique. Students learn at different rates and have multiple learning styles and intelligences (Gardner’s Theory of Multiple Intelligences). When you look at how we teach, most teachers are still at the knowledge or remembering level of Bloom’s Taxonomy. (Andrew Churches’ revision of Bloom’s Taxonomy below)
Teacher-centered vs. student-centered classroom
| Factory model | Inquiry model |
| --- | --- |
| Single subjects and grade-level focus | Real-world applications |
| Focused on product | Focused on process |
| Product-oriented | Process- and product-oriented |
| Short time on each concept | Block scheduling and cross-curricular activities |
| Isolated teaching and learning | Collaborative activities for students and teachers |
| Rote knowledge | Experiential knowledge |
Students can be more involved in the decisions of how they learn and what they learn. If they are aware of the standards and tests they need to master, they can even help teachers design activities that engage them and help them understand the concepts. Most students zone out after 1-2 minutes of anyone talking at them. If they are accountable for a presentation, song, skit, poster, or an exhibit, they take more ownership of the product. The product really doesn’t matter as much as the process, but to the students, it means a lot. Writing an essay that only the teacher reads doesn’t mean anything to them. If their peers read their essays, that’s another story. If they have to do a showcase of their work and their parents or others in the school community see it, then they really care. If their work is published on the Internet, then they will work on it outside class time, on breaks and after school. The engagement factor explodes.
So the important piece here is to tie any projects or student-centered activities to standards. As long as we use tests and standards as measures of student achievement, we have to do this to show that this type of classroom works. Eventually, we need other means of assessment that are more authentic.
Moving to this type of classroom takes time, patience, and a willingness to take risks and learn from failure. That is tough for today’s teachers, who are accountable for scores based on standardized tests. What I suggest is to start slowly. If you are a teacher who wants to move in this direction, here are some steps you can take:
- discuss what you want to do with your administrator
- introduce the new Bloom’s Taxonomy to your grade level or department
- identify gaps in learning by analyzing student data
- choose one area where you can design one project or lesson that includes inquiry
- check the resources you have available first before you even start planning
- sit with a coach or colleague to redesign a lesson to include a more hands-on approach to learning
- ask your coach to model some of the strategies for you or explain to your students honestly what you want to do and ask for their help
- involve students in more of the design of questions and the type of products they will be presenting
A strong leader helps. If your students have access to computers and the Internet, they can work in groups. Talking about technology is another post. |
Bering Sea Dispute
The Bering Sea Dispute involved a late nineteenth-century controversy between the United States on one side and Great Britain and Canada on the other side over the international status of the Bering Sea. The dispute arose from the U.S. assertion that it controlled the Bering Sea and all seal hunting off the coast of Alaska. The dispute, which led to the seizure of a number of Canadian ships by the United States, was finally resolved by an international arbitration in 1893.
The Bering Sea is the northernmost part of the Pacific Ocean. After the United States purchased Alaska from Russia in 1867, it assumed the right of control over the Bering Sea that had been held by Russia. The dispute arose after the Alaska Commercial Company, a U.S. business that had a monopoly on killing seals for their furs, found that Canadian hunters were killing seals as they swam through the ocean each spring toward their summer homes in the Pribilof Islands. The Pribilof Islands were part of the U.S. Alaskan territories. Fearing that the herds would be killed off by pelagic (open-sea) sealing, the U.S. government seized several Canadian sealing vessels in 1886 and instituted condemnation proceedings in an Alaskan court. The proceeds were given to the Alaskan Commercial Company as compensation.
These actions outraged the Canadian and British governments, who disputed the U.S. claim that it controlled not just the three miles of sea bordering the Pribilof Islands but the entire Bering Sea. After several years of tensions and additional vessel seizures, the three countries agreed to arbitration by an international tribunal in Paris. The tribunal issued its decision in 1893. It rejected the U.S. claim of total control of the Bering Sea and awarded the Canadian owners of the seized ships $473,000 in damages. The tribunal also imposed restrictions on pelagic sealing, but it failed to control the problem. In 1911 the United States, Great Britain, Russia, and Japan signed a treaty that prohibited pelagic sealing for a period of time and then placed limits on how many seals could be hunted. The agreement was an important step in seeking international consensus on environmental matters.
Jim Crow and the Progressives
Historians often speak glowingly about the Progressive movement of the late 19th and early 20th centuries. Typically they write off the racist statements made by many of its leaders—Herbert Croly, John Dewey, Teddy Roosevelt, Woodrow Wilson, and others—as minor “blind spots” unrelated to Progressivism. But perhaps the apologist historians also have trouble seeing clearly.
Writing in the summer issue of The Independent Review, Frostburg State University economists William L. Anderson and David Kiriazis argue that the Progressive “reforms” often enabled statutes aimed at restricting economic opportunities for African Americans. Progressivism, in other words, helped give birth to Jim Crow.
“These two sets of laws complemented each other as the regulatory regimes created economic rents that whites could exploit, and Jim Crow laws helped ensure that whites would not have as much competition for those rents,” Anderson and Kiriazis write.
One especially pernicious example, the economists explain, involved medical licensing. In 1910, seven medical schools were geared toward training African Americans. But after Progressive reformers pushed for standards favored by the (whites-only) American Medical Association, only two remained. The school closures led to fewer black physicians available to serve their communities. And by restricting competition for medical services, the closures also boosted the incomes of white doctors.
Similar patterns and outcomes afflicted other professions. “From attempts to block out migration of labor to laws favoring labor unions, and from professional licensing to the Davis-Bacon Act and minimum-wage laws, Progressives enacted rules and legislation that paralleled Jim Crow laws in their effects,” Anderson and Kiriazis conclude.
See “Rents and Race: Legacies of Progressive Policies,” by William L. Anderson and David Kiriazis (The Independent Review, Summer 2013) |
The deforestation of the Amazon Rainforest is amongst the world's worst environmental disasters.
Since the beginning of human history, people have cut down trees to make way for agricultural or commercial production, the construction of houses or to supply the required demands for timber.
Deforestation in Latin America results from problems associated with overpopulation, which has been occurring since the mid-twentieth century. The tropical rainforest is fading away quickly and is at risk of disappearing forever.
Since colonization of the region began in the 1960s, deforestation has been a threat to the Amazon rainforest. Rainforest once covered 14% of the Earth’s land surface, but today it barely covers 6%.
Experts believe that the rainforest could disappear within forty years, and that approximately half of the world’s animal and plant species will become extinct or severely endangered in the next 25 years.
The Amazonian rainforest is essential to the health of the planet and its inhabitants.
It is a storehouse of plant and animal species which represent a vital source of biodiversity. Over a quarter of all pharmaceutical products come from rainforest produce.
Rainforest products have provided treatments for leukemia, Hodgkin’s disease, and snake bites, along with breast, cervical, and testicular cancer, and are presently used in research for a possible treatment for AIDS.
The Amazon rainforest plays a fundamental role in the overall health of the planet by helping to regulate climate and by providing hydrological services, carbon sequestration, fire protection, pollination, and disease control.
It provides about 20% of the world’s supply of oxygen and absorbs large amounts of carbon dioxide.
It represents approximately 54% of the rainforest left on Earth and is one of the most important ecosystems in the world.
Over half of all plant and animal life on Earth lives in the Amazon rainforest. It is also home to many different tribes of indigenous people.
Global warming is not only a consequence, but also a cause of deforestation.
The overall global warming caused by various human activities, such as factory production and transportation, contributes to the deforestation affecting the Amazonian rainforest.
Latin America, like anywhere else in the world, is suffering from the effects of climate change.
In fact, the climate of the Amazon rainforest is changing drastically because of the increased levels of carbon dioxide in the air.
Since the dawn of the industrial revolution, carbon dioxide concentrations in the atmosphere have increased by over 40% because of the pollution caused by automobiles, industries, and many other human activities.
This causes problems for the indigenous people whose livelihood depends on the natural resources provided by the Amazon rainforest.
In the Andes, several glaciers are quickly melting, due to the increase in surface temperatures.
Large blocks of ice from the Antarctic are breaking off, which increases the water levels of the ocean. This in turn influences the water cycle of the Amazon River which is connected to the Atlantic Ocean.
Every year, the water level of the Amazon River rises over thirty feet and floods the nearby forests.
Over the last few years, the river has flooded more forest than this natural cycle normally would, contributing to deforestation.
With global temperatures increasing because of pollution in the atmosphere, the rainforest will experience even more flooding.
Researchers believe that the increase in carbon dioxide concentrations is affecting plant life in the Amazon rainforest.
Since plants use the carbon dioxide in the air for photosynthesis, this increase in carbon dioxide is fertilizing the vegetation to the point where plants are lacking room and have to compete for soil, light, and water.
The larger and faster-growing trees have an advantage over the smaller trees. This change in tree growth causes deforestation, mostly because the younger and smaller trees do not survive as easily.
As a result, the forest is no longer growing the way it is supposed to, and fewer trees are being created. At this rate, it looks as if one day there will no longer be a new generation of trees to replace the previous one.
The Amazon rainforest reacts in times of great rain by absorbing water which it then stores for later use during the dry season.
Unfortunately, as the environment gets drier, the trees cannot hold in enough water to survive.
It is predicted that there will be a 2 to 8 degree Celsius rise in average global surface temperatures in the next century, and this may eventually cause the rainforest to be replaced by dry tropical grassland and bare soils.
The Amazon rainforest is especially at risk of accidental forest fires during the dry season. Due to global warming, the plants are getting drier, making them more vulnerable to fire.
During the drier conditions, especially during an El Niño year (a cyclical disruption of the ocean-atmosphere system), fields used for agriculture could easily catch fire and spread into the rainforest nearby.
In fact, the majority of forest fires occur during an El Niño year. The strong El Niño years of 1997 and 1998 contributed to enormous forest fires, causing over 400,000 square kilometers of forest to go up in smoke.
The fires not only destroy the forest but also kill wildlife and discharge even more carbon into the atmosphere (where it binds with oxygen to form carbon dioxide), causing further global warming in a vicious cycle.
If these changes in temperature, droughts, and forest fires continue to increase as they do, then there will be a dry, deserted land where the Amazon rainforest once stood.
The red dots depict forest fires and a large cloud of smoke can be seen (bottom left)
One of the key players in the deterioration of the Amazon rainforest is the use of land by cattle ranchers. Livestock occupies about 70% of the converted forest (see meat industry for more information on the effects of cattle ranching).
The cattle ranching system is based on grazing. It relies on cultivated and native pastures, which are used for grazing all year round.
Beef production in the tropical rainforest is well-known for its poor productivity. This is due to the over-exploitation of the grasslands, poor management, and the low fertility of the soil, which contains low levels of phosphorus and high acidity.
Brazil has over two hundred million head of cattle, making it the world’s second-largest herd, with India currently ranking first. Over a third of the herd is located in the central-west of Brazil, predominantly in the state of Mato Grosso, which contains 13% of Brazil’s cattle.
The production of cattle has increased over the last fifteen years. The production areas have also moved from the south-east to the north of Brazil. This is mainly due to the displacement of ranchers because of the expansion of soy. From 2001 to 2005, the cattle herd increased by 37% in the northern regions of Brazil.
Timber extraction is one of the primary forms of deforestation, and contributes greatly to the economic development of the nation.
In spite of improved logging techniques and greater awareness of the endangered rainforest, logging still happens in the Amazon. Unfortunately, forty percent of the logging that occurs in the area is assumed to be illegal.
In Brazil, much of the wood is used domestically. In 1998, 14% was exported, and by 2004 the figure had risen to 36%. It is estimated that 10,000 to 15,000 square kilometers of forest are exploited each year by logging.
Part of the logged forest gets converted to agricultural and pasture land. Most of it remains as logged forests, with no use. Unorganized logging creates larger spaces between the canopy trees making the forest more prone to natural fires, which usually start in agricultural areas and pastures.
The Brazilian Amazon is a dream come true for many gold prospectors seeking their fortune. It contains a wide variety of mineral resources such as bauxite, diamonds, gold, iron ore, and oil.
Mining is linked with the degradation of ecosystems caused by soil erosion, runoff, infrastructure development, and environmental pollution.
Brazil has a long history of gold mining, with numerous people working in dangerous conditions. Between 1550 and 1880, for example, gold mining released more than 200,000 metric tonnes of mercury into the environment.
Sadly, mercury takes a long time to degrade, and while it remains in the environment, it affects the health of anything exposed to it.
Today, Latin America earns the largest share in mining profit. Brazil, Chile, and Peru are the main mining countries.
Unfortunately, mining in forests causes deforestation and also releases chemicals that pollute the rivers located nearby mines as well as farmland downstream.
Deforestation in the Amazonian rainforest is caused by global warming, colonization and economic development.
People should not have to resort to settling on land that is unsuitable. Clearly, they should not exploit the land, but try to conserve it instead.
If things continue as they do, the Amazon rainforest will only exist in history books. It is our duty to promote awareness of the situation and to try to educate people in order to make a difference.
By taking action to stop climate change, we can save the Amazon rainforest for future generations.
Special thanks to Cynthia Cousineau for the great research involved in this article
While a lot of people have probably heard about caterpillar cocoons, they may not know for sure just what these are. Basically, a cocoon is nothing more than a protective casing around an insect. It is made of silk or some other similar fibrous material that is spun around the insect during its pupal stage, the life stage in which an insect undergoes transformation. While the most common cocoons are those found around butterflies and moths, the egg case of a spider is also a type of cocoon.
Usually an insect enters a cocoon so that it will be protected from a harsh or unfriendly environment. This is why, most of the time, insects spend the winter in their cocoons. As the days get shorter and cooler in the fall, these insects start to spin a silky envelope around themselves. They then retreat into this cocoon and spend the winter without the need for food or water.
You may be wondering just how these cocoons are made. They are actually made of silk, which is spun from two glands located inside the insect. These glands are filled with a thick, glue-like material. The insect works in a figure-eight pattern to wrap itself up inside this silk. The material is pressed out of the insect as two slender threads, which stick together as they emerge and harden when fresh air touches them.
This is a fascinating process: it has often been said that the most beautiful butterflies emerge from the ugliest cocoons. For this reason, many people consider the cocoon to be a miracle of nature.
Main arguments
Criticism of previous education theories
Egan argues that the whole of educational theorizing pivots around three basic ideas of what the aim of education should be:
- the need to educate an elite of the population in subjects that are academically important (Plato). Here we also find the ideas that reason and knowledge can provide privileged access to the world, that knowledge drives the development of the student's mind, and that education is an epistemological process.
- the right of every individual to pursue their own educational curriculum through self-discovery (Rousseau). Here we also find the ideas that the student's development drives knowledge and that education is a psychological process.
- the need to socialize the child - to homogenize children and ensure that they can fulfill a useful role in society, according to the nation's values and beliefs.
Egan argues in chapter one that, "these three ideas are mutually incompatible, and this is the primary cause of our long-continuing educational crisis"; the present educational program in much of the West attempts to integrate all three of these incompatible ideas, resulting in a failure to effectively achieve any of the three.
Following the natural mind development
Egan's proposed solution to the education problem which he identifies is to: let learning follow the natural way the human mind develops and understands. According to Egan, individuals proceed through five kinds of understanding:
- Somatic - (before language acquisition) the physical abilities of one's own body are discovered; somatic understanding includes the communicating activity that precedes the development of language; as the child grows and learns language, this kind of understanding survives in the way children "model their overall social structure in play".
- Mythic - concepts are introduced in terms of simple opposites (e.g. Tall/Short or Hot/Cold). Mythic understanding also includes comprehending the world in stories.
- Romantic - the practical, realistic limits of the mythic concepts are discovered. Egan equates this stage with the desire to discover examples of superlatives (e.g. 'What is the tallest/shortest a person can be?') and with the accumulation of extensive knowledge on particular subjects (e.g. stamp collecting). So this kind of understanding includes "associations with the transcendent qualities of heroes, fascination with the extremes of experience and the limits of reality, and pervasive wonder".
- Philosophic - the discovery of principles which underlie patterns and limits found in romantic data; we order knowledge into coherent general schemes.
- Ironic - it involves the "mental flexibility to recognize how inadequately flexible are our minds, and the languages we use, to the world we try to represent in them"; it therefore includes the ability to consider alternative philosophic explanations.
"Drawing from an extensive study of cultural history and evolutionary history and the field of cognitive psychology and anthropology, Egan gives a detailed account of how these various forms of understanding have been created and distinguished in our cultural history".
Each stage includes a set of "cognitive tools", as Egan calls them, that enrich our understanding of reality. Egan argues that recapitulating these stages is necessary to overcome the contradictions between the Platonic, Rousseauian, and socialising goals of education.
Egan resists the suggestion that religious understanding could be a further last stage, arguing instead that religious explanations are examples of philosophic understanding.
Egan's main influences come from the Russian psychologist Lev Vygotsky. The idea of applying the theory of recapitulation to education came from the 19th-century philosopher Herbert Spencer, although Egan uses it in a very different way. Egan also draws on educational ideas from William Wordsworth and expresses regret that Wordsworth's ideas, because they were expressed in poetry, are rarely considered today.
It is possible to draw parallels with Piaget's stages of development; 'somatic' is a combination of the Sensorimotor stage and the Preoperational stage, 'Mythic' is the Concrete operational stage, and 'Philosophic' and 'Ironic' are elaborations of the Formal Operational stage.
In popular culture
The same year the essay was published (1997), the Italian comedian and satirist Daniele Luttazzi used Egan's ideas for his character Prof. Fontecedro in the popular TV show Mai dire gol, aired on Italia 1. Fontecedro satirized the inadequacies of the Italian school system and the reforms proposed by Luigi Berlinguer, Italy's Minister of Education from 1996 to 2000. Fontecedro's sketches took Egan's theory to extreme levels with surreal humor. The jokes were later published in the book Cosmico! (1998, Mondadori, ISBN 88-04-46479-8), where the five stages of mind development are also cited at pp. 45-47.
- Kieran Egan (1997). The educated mind (introduction), Chicago: University of Chicago Press. ISBN 0-226-19036-6.
- D. James MacNeil, review of The educated mind, for the 21st Century Learning Initiative, September 1998
- Theodora Polito, Educational Theory as Theory of Culture: A Vichian perspective on the educational theories of John Dewey and Kieran Egan. Educational Philosophy and Theory, Vol. 37, No. 4, 2005
Previous works on ironic knowledge:
- Bogel, Fredric V. "Irony, Inference, and Critical Understanding." Yale Review 69 (1980): 503-19.
- Kieran Egan (1997). The educated mind: How Cognitive Tools Shape Our Understanding, Chicago: University of Chicago Press. ISBN 0-226-19036-6.
- Book dedicated section in Egan official website
- Conceptions of Development in Education, Egan's essay that explain the main ideas of the book
- From the Imaginative Education Research Group: A brief guide to imaginative education, Some thoughts on "Cognitive tools", Cognitive tools that come along with oral language, Cognitive tools that come along with literacy, Cognitive tools that come along with theoretic thinking
- Excerpts from Google Books
- Using entheogens as cognitive tools to foster Somatic and Mythic types of understanding
Lead processing and smelting plants work with both primary and secondary lead. Primary lead is mined, separated from ore, and refined into various products, whereas secondary lead is recovered from used objects – such as used lead-acid batteries – for reuse in other products. Smelting is a key process in lead product production, and involves heating lead ore or recovered lead with chemical reducing agents. Both secondary and primary smelting processes can be responsible for releasing large amounts of lead contamination into the surrounding environment.
Population estimates are preliminary and based on an ongoing global assessment of known polluted sites.
Lead processing either requires the mining of new, primary lead, or the recycling of used products and scrap metals. Both forms of lead must be melted using a smelting process in order to obtain pure and usable forms of the metal.
The primary smelting process involves separating lead from ore using heat and reducing or purifying agents such as coke and charcoal. Once the lead ore is mined, it must undergo several different processes in order to be turned into usable or metallurgical lead material: sintering, smelting, and refining. The sintering phase involves removing sulfur from the lead ore using a hot air combustion process. Once the sulfur is removed, the lead is sent into a smelter where it is heated at extremely high temperatures in order to isolate the pure lead from other metals and materials in the ore. Any remaining metals or other materials left after the smelting are removed during the refining process. [29] Lead dust and smoke can be released during all of the above processes, and slag contaminated with lead particles may be left over after the smelting process.
Secondary smelting of lead is similar to primary smelting, but does not require the initial sintering process. Once lead is recovered from used materials – with the majority coming from used lead-acid batteries – it is placed into a furnace where it is heated with coke or charcoal in order to isolate the lead from other compounds. Like primary lead smelting, the processing of secondary lead can also produce lead dust and toxic slag. If smelting plants and equipment are not properly constructed to minimize the release of pollutants, lead toxins can often enter the surrounding environment and contaminate soil, water, and food.
In addition, the mining process for extracting primary lead ore – if not performed with the necessary safety and environmental precautions – can create large piles of waste that contains lead toxins. If these piles are left out in the open, lead dust can be blown into surrounding areas, and lead can also leach into the ground and contaminate water systems.
Lead is a very useful material found in many different products, with approximately six million tons used annually across the world. [30] Though much of this lead is recycled and reused, the US Geological Survey estimated that the world production of primary lead in 2009 was over 3.8 million metric tons. [31] The extraction and smelting of lead can cause a large amount of toxic pollution, and emissions from lead smelting are a big contributor to global lead contamination. [32] Lead smelting can also pollute the environment with large amounts of particulate matter, toxic effluents, and other various solid wastes.
Though lead smelting facilities exist all over the world, countries and cities where pollution may not be properly monitored by environmental and health regulations are more negatively impacted by health problems related to lead contamination. According to the Blacksmith inventory, countries in Eastern Europe, Northern Eurasia, and Central Asia are particularly at risk from lead smelting activities, with an estimated two million people impacted worldwide.
The most common route of lead exposure caused by lead smelting is through inhalation or ingestion of lead dust, particles, or exhaust from the burning process. Workers in the smelting factories are particularly at risk, as they can be exposed to prolonged and direct inhalation of gaseous emissions and dust. Particles and ash containing lead can also be blown into nearby towns or onto agricultural fields, which can contaminate livestock and crops. Studies in China have found that certain crops, such as corn, are particularly susceptible to lead accumulation when grown in close proximity to smelters. [34] Dermal contact with soil contaminated with lead can also expose people to toxins.
In addition to toxic emissions, lead smelting produces wastewater, solid waste, and slag heaps that may be contaminated with heavy metal. Lead from these sources, as well as waste rock from lead ore mining, can often make its way into ground and surface water systems that are used for drinking, bathing, and cooking.
The health effects of exposure to lead can be both acute and chronic, and the problems caused by lead poisoning are particularly dangerous and severe for children. Acute lead poisoning can happen immediately and is often caused by inhaling large quantities of lead dust or fumes in the air. Chronic lead poisoning, however, occurs over longer periods of time and can result from very low-level, but constant, exposure to lead. Chronic poisoning is far more common than acute exposure and can be caused by persistent inhaling or ingestion of lead, or, over much longer periods, can result in lead accumulation in the bones.
Health problems associated with lead poisoning can include reduced IQ, anemia, neurological damage, physical growth impairments, nerve disorders, pain and aching in muscles and bones, memory loss, kidney disorders, retardation, tiredness and headaches, and lead colic, which impacts the abdomen. [19] Severe exposure to high concentrations of lead can lead to dire health risks, including seizures, delirium, coma, and in some cases, death.
Neurological damage is especially pronounced in children suffering from lead exposure, with even small amounts of lead poisoning capable of causing lifelong developmental and cognitive problems. Exposure to lead in utero can also cause birth defects.
What is Being Done
Many modern and well-maintained lead smelting facilities have infrastructure in place that allow pollution levels to be controlled and monitored according to environmental and health standards. However, these kinds of operations can be quite expensive, which leads many smelting plants to forego important safety measures. This is particularly common in countries or areas where there is little to no government regulation of the industry. Thus, one of the most effective ways to reduce lead pollution from smelters is to work with governments, NGOs, and communities to update equipment and operations at the plants.
Several of the older lead smelters, some of which have been in operation for many decades, have created large areas of legacy pollution. Remediation efforts in these areas have to consist of both the removal and disposal of contaminated soil or material and to ensure that contaminated water and food are able to return to safe consumption levels.
Example – DALY Calculations
A city in northwestern India is home to a large smelter that is releasing lead toxins into the nearby environment. There are many lakes in this region, and lead has contaminated the drinking and bathing water for the nearby residents. Samples of water near the smelter found 430 parts per billion of lead, which is over 8 times the health standard. Blacksmith estimates that 3,000 people at this site are at risk of diseases caused by lead exposure.
DALYs associated with adverse health impacts from lead exposure at this site are estimated to be 47,917 for the estimated exposed population of 3,000. This means that the 3,000 affected people will have a collective 47,917 years lost to death, or impacted by disease or disability. This comes out to 16 years lost or lived with a disability per person.
References:
- “New Basel guidelines to improve recycling of old batteries.” United Nations Environment Programme. May 22, 2002. Available at: http://www.unep.org/Documents.Multilingual/Default.asp?DocumentID=248&ArticleID=3069&I=en.
- “Primary Metals: Lead Processing.” Illinois Sustainable Technology Center, Prairie Research Institute. Accessed on August 31, 2011. Available at: http://www.istc.illinois.edu/info/library_docs/manuals/primmetals/chapter6.htm.
- Roberts, H. “Changing Patterns in Global Lead Supply and Demand.” Journal of Power Sources, Vol. 116, No. 1-2 (2003): 23-31.
- Kelly, Thomas D., et al. “Historical Statistics for Mineral and Material Commodities in the United States: Lead.” U.S. Geological Survey Data Series 140, Version 2010. Available at: http://minerals.usgs.gov/ds/2005/140/#lead.
- Dudka, S. and D.C. Adriano. “Environmental Impacts of Metal Ore Mining and Processing: A Review.” Journal of Environmental Quality, Vol. 26, No. 3 (1997): 590-602.
- “Environmental, Health, and Safety Guidelines: Base Metal Smelting and Refining.” International Finance Corporation, World Bank Group. April 30, 2007. Available at: http://www.ifc.org/ifcext/enviro.nsf/AttachmentsByTitle/gui_EHSGuidelines2007_SmeltingandRefining/$FILE/Final+-+Smelting+and+Refining.pdf.
- Xiangyang, Bi, et al. “Allocation and Source Attribution of Lead and Cadmium in Maize (Zea mays L.) Impacted by Smelting Emissions.” Environmental Pollution, Vol. 157, No. 3 (2009): 834-839.
The fossilized remains of Stone Age people recovered from two caves in southwest China may belong to a new species of human that survived until around the dawn of agriculture.
The partial skulls and other bone fragments, which are from at least four individuals and are between 14,300 and 11,500 years old, have an extraordinary mix of primitive and modern anatomical features that stunned the researchers who found them.
Named the Red Deer Cave people, after their apparent penchant for home-cooked venison, they are the most recent human remains found anywhere in the world that do not closely resemble modern humans.
The individuals differ from modern humans in their jutting jaws, large molar teeth, prominent brows, thick skulls, flat faces and broad noses. Their brains were of average size by Ice Age standards.
“They could be a new evolutionary line or a previously unknown modern human population that arrived early from Africa and failed to contribute genetically to living east Asians,” said Darren Curnoe, who led the research team at the University of New South Wales in Australia.
“While finely balanced, I think the evidence is slightly weighted towards the Red Deer Cave people representing a new evolutionary line. First, their skulls are anatomically unique. They look very different to all modern humans, whether alive today or in Africa 150,000 years ago,” Curnoe said.
“Second, the very fact they persisted until almost 11,000 years ago, when we know that very modern looking people lived at the same time immediately to the east and south, suggests they must have been isolated from them. We might infer from this isolation that they either didn’t interbreed or did so in a limited way,” he said.
One partial skeleton, with much of the skull and teeth, and some rib and limb bones, was recovered from Longlin Cave in Guangxi Province. More than 30 bones, including at least three partial skulls, two lower jaws and some teeth, ribs and limb fragments, were unearthed at nearby Maludong, or Red Deer Cave, near the city of Mengzi (蒙自) in Yunnan Province.
At Maludong, fossil hunters also found remnants of various mammals, all of them species still around today, except for giant red deer, the remains of which were found in abundance.
“They clearly had a taste for venison, with evidence they cooked these large deer in the cave,” Curnoe said.
The findings are reported in the journal PLoS ONE.
The Stone Age bones are particularly important because scientists have few human fossils from Asia that are well described and reliably dated, making the story of the peopling of Asia hopelessly vague. The latest findings point to a far more complex picture of human evolution than was previously thought.
“The discovery of the Red Deer Cave people shows just how complicated and interesting human evolutionary history was in Asia right at the end of the ice age. We had multiple populations living in the area, probably representing different evolutionary lines: the Red Deer Cave people on the East Asian continent, Homo floresiensis, or the ‘Hobbit,’ on the island of Flores in Indonesia, and modern humans widely dispersed from northeast Asia to Australia. This paints an amazing picture of diversity, one we had no clue about until this last decade,” Curnoe said.
Much of Asia was also occupied by Neanderthals and another group of archaic humans called the Denisovans. Scientists learned of the Denisovans after recovering a fossilized little finger from the Denisova cave in the Altai mountains of southern Siberia in 2010. |
The primary science National Curriculum says that children should ‘develop their understanding of scientific ideas by using different types of scientific enquiry to answer their own questions, including observing changes over a period of time, noticing patterns, grouping and classifying things, carrying out simple comparative tests, and finding things out using secondary sources of information.’
Most teachers are very familiar with the notion of carrying out fair tests, but I wonder how many of them plan for observing over time, or do it anyway without realising it. In their very helpful book ‘It’s not fair – or is it? a guide to developing children’s ideas through primary science enquiry’, Turner et al. share the following pointers for observing over time:
- Observing over time helps us identify and measure events and changes in living things, materials and physical processes and events.
- Observations may take place over time spans from minutes or hours, to several weeks or months.
- Observing over time provides opportunities for children to be actively involved in making decisions about what and how to observe and measure, and the best ways to record the changes that occur.
- These types of enquiries provide rich contexts for children to learn about the importance of cycles, systems, growth and decay, and other types of changes.
As the signs of spring are getting stronger everyday, and because I do love a daffodil, I thought I’d take the opportunity to do a little bit of observing over time with a 99p bunch of daffodils from the local supermarket. We watched these little beauties changing over a period of 12 days, taking 1 photo a day, and using PicCollage to put our timeline together. I think the result is great! It really shows how the daffodils have changed over time. It would be interesting to continue to watch what happens after day 12 if they were placed on the compost heap.
As science week draws ever closer and teachers are rushing to find things to do, why not suggest this simple little activity? It provides for a multitude of scientific skills e.g. observing, describing, comparing, measuring (if you want to), predicting, explaining and communicating, all for the price of 99p, and the effort of buying some daffodils and sticking them in a jam jar at the front of the class. Take one picture a day, get the children to tell you what they notice, write it on the IWB alongside the photo, and make a class big book at the end of the week/fortnight.
Why not make a whole school project of it and show progression in science across school? It would make a great display to have pupil paintings of daffodils, alongside photos, comments, predictions, descriptions and explanations. For a bit of secondary research, why not find out more about daffodil growing in the UK?
And don’t forget, a bit of Wordsworth is always good… |
How does a historic site interpret a past that is still very much the subject of study and research? At Riverside, the Farnsley-Moremen Landing in Louisville, Kentucky, we chose to invite some of our youngest visitors to participate in that process. And we developed a couple of simple activities to round out their experience.
Riverside opened to the public in October, 1993. Its centerpiece is a recently restored farm house built circa 1837 on a beautiful stretch of the Ohio River. The museum was organized to interpret historic farm life on the river. However, all of the outbuildings, such as the barn, smoke house, wash house, ice house, and detached kitchen, were lost years ago to benign neglect. The staff continues research into documentary sources and oral histories while extensive archaeology is being conducted to learn more about the outbuildings and life on the farm. Long-range plans call for the eventual reconstruction of outbuildings.
As excited as we were about the research in progress, our staff and volunteer guides faced difficult questions as we tried to help elementary students find meaning in the incomplete farm site. How could we help children understand that this was a farm if only the house survives? How could children gain an understanding of what archaeology is? How could they appreciate the valuable role artifacts, documents, and photographs play in interpreting the past? In response to these questions, we developed a full-day field trip called “The Building Blocks of History” with help from archaeologist Jay Stottman of the Kentucky Archaeological Survey.
Before students start digging at the site of a long-lost outbuilding, we want them to get a sense of the big picture. A brief question and answer period led by an archaeologist gives the children an opportunity to learn what archaeologists do. The archaeologist also asks the children to think about how the particular site was chosen for excavation. With the right guidance, students frequently offer the sources of information that were indeed used to locate the site: maps, photographs, old documents, and family stories.
We want participants to come away with an idea of the methods used in archaeology. Their guides explain that they are not digging with random abandon. Students are able to see that careful attention to the level and context of the artifacts often reveals important information about the artifacts’ age and use. Although each “Building Blocks” participant gets to take a trowel in hand and dig, they also screen and wash artifacts. If time permits, they work with an archaeologist to do a preliminary sort of the artifacts recovered. Before they leave the site, students learn that the artifacts they found will be analyzed and the findings written up in a report. We also stress that the artifacts found will wind up in our museum and not in a private individual’s hands.
Like all visitors to Riverside, participants in “Building Blocks” tour the historic Farnsley-Moremen House. Tour guides in the house ask children open-ended questions to help them make connections to the missing elements of the farm. For example, they ask, “Why do you think the kitchen was a separate building?” In addition, guides point out family furnishings, documents, and photographs on the tour. They encourage students to share ideas about what these artifacts may reveal about the lives of the people who called Riverside home. This reinforces the notion that what we know about the past is based on how we interpret what has survived into the present.
Finally, participants in “Building Blocks” get their hands and bodies moving once more by working clay into small quick-drying bricks that are taken home at the end of the day. The children learn about an important artifact left behind by Gabriel Farnsley, the builder of the farm house. Farnsley etched his name into the wet clay of a brick before it was fired. The brick with Farnsley’s signature was discovered in the cornice of the house during restoration.
We encourage each student to etch his or her own name, or a message, onto their brick. Their guides ask them what someone who might find their brick in the future could learn from that artifact. Students are also asked to think about how our knowledge of the past builds through the addition of information, just as our house was built brick by brick.
“Building Blocks” is giving Riverside a chance to involve students in research critical to interpreting the history of the farm. Participants come away with a better understanding of the process of interpreting the past, and a better understanding of the history of the site. Also important, we are building an audience that will revisit Riverside as the years go by to see how the outbuilding reconstruction has progressed and how the interpretation of the site has evolved. These participants literally helped to uncover some of the information and they are helping us to build our future.
Patti Linn has been the Site Manager of Riverside, the Farnsley-Moremen Landing in Louisville, Kentucky, since 1994. She holds a Master of Arts in Teaching from the University of Louisville and a Bachelor of Arts from Murray State University. Ms. Linn has experience as both a public school teacher and a museum educator.
Linn, Patti. “Building Blocks of History,” The Docent Educator 7.3 (Spring 1998): 18-19. |
Genetic engineering allows scientists to cut existing DNA into fragments and recombine it with additional segments. This has led to breakthroughs in medicine that may one day help eradicate cancers and diseases such as HIV/AIDS and TB. Manipulating DNA demonstrates how restriction enzymes and gel electrophoresis are used to manipulate DNA in genetic engineering experiments. Using DNA from well-known bacteria, students practice cutting the fragments with two well-known restriction enzymes and analyze the outcomes. Through practical demonstrations, students observe and learn techniques for DNA splitting and predict the activity of DNA under certain conditions.
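To make the idea concrete, here is a small simulation, not part of any kit, of how a restriction digest works: the program scans a DNA strand for the six-base recognition sequences of EcoRI (G^AATTC) and BamHI (G^GATCC), cuts one base into each site, and prints the resulting fragment lengths, which is exactly the information a gel electrophoresis run separates by size. The plasmid sequence in the example is invented purely for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Cut a DNA strand at every occurrence of a recognition site and print
 * the lengths of the resulting fragments.  Both EcoRI (G^AATTC) and
 * BamHI (G^GATCC) cleave one base into their six-base site, so the cut
 * offset within the site is 1. */
static void digest(const char *dna, const char *site, const char *enzyme)
{
    size_t cut_offset = 1;           /* position of the cut within the site */
    const char *start = dna;         /* beginning of the current fragment   */
    const char *hit;

    printf("%s fragments: ", enzyme);
    while ((hit = strstr(start, site)) != NULL) {
        printf("%zu ", (size_t)(hit - start) + cut_offset);
        start = hit + cut_offset;    /* next fragment begins after the cut  */
    }
    printf("%zu\n", strlen(start));  /* last fragment runs to the end       */
}

int main(void)
{
    /* Hypothetical plasmid fragment, invented for illustration only. */
    const char *dna = "ATGGAATTCCGTTAGGATCCAATGCGAATTCTTAAGGATCCGGC";

    digest(dna, "GAATTC", "EcoRI");
    digest(dna, "GGATCC", "BamHI");
    return 0;
}
```

Running it shows that the two enzymes produce different fragment patterns from the same strand, which is why a digest followed by electrophoresis can be used to identify or compare DNA samples.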
A massive five-sided edifice, Fort Pulaski was constructed in the 1830s and 1840s on Cockspur Island at the mouth of the Savannah River. Built to protect the city of Savannah from naval attack, the fort came under siege by Union forces in early 1862 and was ultimately captured on April 11.
Following the War of 1812, the U.S. government began planning a system of coastal fortifications to defend the nation's coast against foreign invasion. Because Savannah was the major port in Georgia, navy officials recognized the need for a fort on Cockspur Island to protect the city from attacks coming up the Savannah River. In 1829 construction began on the new fort, named for Count Casimir Pulaski, a Polish immigrant who fought during the American Revolution (1775-83).
In January 1861, shortly before Georgia seceded from the Union, state troops occupied Pulaski to keep it out of federal hands; troops from Macon and Savannah formed the garrison. Helped by slaves impressed from nearby rice plantations, these men cleared the moat and began to mount guns along the fort's walls. By the time Colonel Charles H. Olmstead took command of Pulaski in December 1861, its defenses had improved dramatically.
Fort Pulaski faced its first threat during the Civil War (1861-65) in November 1861, following the capture of nearby Port Royal, South Carolina, by Union forces. General Robert E. Lee ordered Tybee Island and other islands near the fort abandoned because they could not be adequately defended. Lee believed, however, that Fort Pulaski's wide walls would keep it from serious harm by any bombardment from Tybee, nearly a mile away.
In January 1862 the Union commander in the district, General Thomas W. Sherman, decided to take the fort by siege. He ordered troops to Tybee Island and constructed defenses on the smaller neighboring islands to cut the garrison off from reinforcements. Sherman then placed Captain Quincy Gillmore of the Engineer Corps in charge of the siege preparations on Tybee, despite advice that "you might as well bombard the Rocky Mountains."
Gillmore ordered his engineers to construct a series of eleven artillery batteries along the north shore of Tybee Island. They worked mostly at night and camouflaged the work on the batteries to prevent the fort's garrison from discovering their plans. Once the batteries were built, the troops had to pull, by hand, artillery pieces weighing as much as 17,000 pounds through marshy land and into position.
By April 9, Gillmore had twenty cannons and fourteen mortars in position to bombard Fort Pulaski. Just after sunrise the next morning, Gillmore demanded the fort's surrender; Olmstead refused, and the Union batteries opened fire.
On April 11 the Union bombardment opened two thirty-foot holes in the southeast face of Pulaski, and Olmstead surrendered the fort later that day.
The reduction and capture of Fort Pulaski in 1862 not only deprived the Confederacy of a port it desperately needed but also signaled a major shift in the way future forts would be built as well as the way they would be attacked. Captain Gillmore took a risk when he decided to assault the fort with the new rifled cannons, but his gamble paid off and led to significant changes in military engineering.
Following the surrender, Union troops garrisoned Fort Pulaski until the end of the war. During this period the fort served not only to bar Confederate shipping from Savannah but also to imprison captured Southern troops. After the Civil War (1861-65), the U.S. Army Corps of Engineers began modernizing the fort but stopped before the project was completed. Pulaski remained virtually abandoned until 1924, when the government designated it a national monument. Nine years later it became a unit of the National Park Service, which continues to maintain it.
Computing in a Parallel Universe
Multicore chips could bring about the biggest change in computing since the microprocessor
It is surely no coincidence that the kinds of parallelism in widest use today are the kinds that seem to be easiest for programmers to manage. Instruction-level parallelism is all but invisible to the programmer; you create a sequential series of instructions, and it's up to the hardware to find opportunities for concurrent execution.
In writing a program to run on a cluster or server farm, you can't be oblivious to parallelism, but the architecture of the system imposes a helpful discipline. Each node of the cluster is essentially an independent computer, with its own processor and private memory. The nodes are only loosely coupled; they communicate by passing messages. This protocol limits the opportunities for interprocess mischief. The software development process is not radically different; programs are often written in a conventional language such as Fortran or C, augmented by a library of routines that handle the details of message passing.
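As a concrete illustration of that style (my sketch, not code from the article), the C program below uses MPI, one widely used message-passing library: each node sums a private block of numbers and ships its partial result to node 0 as a message, with no shared state anywhere.

```c
#include <stdio.h>
#include <mpi.h>

/* Each process owns a private slice of work, computes a partial result,
   and communicates only by passing messages -- no shared memory. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Pretend each node sums its own block of 1000 numbers. */
    long partial = 0;
    for (long i = rank * 1000; i < (rank + 1) * 1000; i++)
        partial += i;

    if (rank == 0) {
        long total = partial, incoming;
        for (int src = 1; src < nprocs; src++) {
            MPI_Recv(&incoming, 1, MPI_LONG, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += incoming;
        }
        printf("total = %ld\n", total);
    } else {
        MPI_Send(&partial, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with an MPI wrapper such as mpicc and launched with mpirun, the program makes the point of the paragraph above visible: every exchange of data between nodes is an explicit send and a matching receive.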
Clusters work well for tasks that readily break apart into lots of nearly independent pieces. In weather prediction, for example, each region of the atmosphere can be assigned its own CPU. The same is true of many algorithms in graphics and image synthesis. Web servers are another candidate for this treatment, since each visitor's requests can be handled independently.
In principle, multicore computer systems could be organized in the same way as clusters, with each CPU having its own private memory and with communication governed by a message-passing protocol. But with many CPUs on the same physical substrate, it's tempting to allow much closer collaboration. In particular, multicore hardware makes it easy to build shared-memory systems, where processors can exchange information simply by reading and writing the same location in memory. In software for a shared-memory machine, multiple computational processes all inhabit the same space, allowing more interesting and flexible patterns of interaction, not to mention subtler bugs. |
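To make those "subtler bugs" concrete, here is a minimal sketch using POSIX threads (again my illustration, not the author's example): two threads increment the same location in memory, and without the mutex many of the updates are silently lost.

```c
#include <stdio.h>
#include <pthread.h>

/* Two threads share the same memory. Without the mutex, both may read and
   write `counter` at the same time and updates get lost -- exactly the kind
   of subtle bug that shared-memory programming invites. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* remove this pair of calls and the */
        counter++;                    /* final count usually comes up short */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

Build it with -pthread, delete the lock and unlock calls, and run it a few times: the count rarely reaches two million, even though nothing in the code looks obviously wrong.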
Properties of Fluids
This lab explores the properties of water flow. This inquiry-based lab consists of 2 sections: the Guided Lab Activity and the Going Further (research) portion. The guided lab activity, performed on the first day, is designed to help students observe and understand the way fluids interact with a stationary phase, such as chromatography paper. As the students do their guided lab activity, they will be responsible for generating a minimum of three questions related to the lab activity that would require further research. The Going Further (research) portion should be used on the second day as an inquiry-based follow-up to part 1. The purpose of this portion is for students to get some answers to their chromatography questions. Each student will choose one of their generated questions and research the answer to it. Students may end up researching the properties of liquids; the different types of liquids used in food, the body, the medical field, and industry; and how chromatography and nanotechnology link together (for example, nano-liquid chromatography is used in “lab-on-a-chip” technology). From their research, students should be able to devise a new experiment that will allow a deeper understanding of the material; this exercise could be done for extra credit.
Part 2 explores the properties of water flow. Students will observe capillary action through the use of capillary glass tubes and colored water. The students will need to determine the best way to hold the glass tubes so that the water travels upward just as it does in plant stems. When things are very small, gravity may not seem to apply. Students will come to understand that some forces can override gravity in certain cases. They will be introduced to a branch of nanotechnology called microfluidics, which uses microchannels to direct fluid flow. This lab connects with the Big Ideas in Nanoscale Science and Engineering (Stevens et al., 2009, NSTA Press); Big Idea – Forces and Interactions: All interactions can be described by multiple types of forces, but the relative impact of each type of force changes with scale. On the nanoscale, a range of electrical forces, with varying strengths, tend to dominate the interactions between objects.
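One standard way to see why narrow tubes seem to defy gravity is Jurin's law for capillary rise, h = 2γ·cosθ / (ρ·g·r). The law is not part of the lab handout, but the short C sketch below uses it to show how dramatically the rise grows as the tube radius shrinks.

```c
#include <stdio.h>
#include <math.h>

/* Capillary rise from Jurin's law: h = 2*gamma*cos(theta) / (rho*g*r).
   Values are for clean water in glass at roughly room temperature. */
int main(void) {
    const double gamma_w = 0.0728;  /* surface tension of water, N/m      */
    const double rho     = 998.0;   /* density of water, kg/m^3           */
    const double g       = 9.81;    /* gravitational acceleration, m/s^2  */
    const double theta   = 0.0;     /* contact angle ~0 for clean glass   */

    double radii_mm[] = { 5.0, 0.5, 0.05, 0.001 };   /* tube radii in mm  */
    for (int i = 0; i < 4; i++) {
        double r = radii_mm[i] * 1e-3;               /* convert to metres */
        double h = 2.0 * gamma_w * cos(theta) / (rho * g * r);
        printf("radius %7.3f mm -> rise %8.2f cm\n", radii_mm[i], h * 100.0);
    }
    return 0;
}
```

By this estimate a tube a micrometre across lifts water many metres, which is why capillary forces, not gravity, dominate in microfluidic channels and in plant stems.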
Butterflies And Bats Reveal Clues About Spread Of Infectious Disease
There's a most unusual gym in ecologist Sonia Altizer’s lab at the University of Georgia in Athens. The athletes are monarch butterflies, and their workouts are carefully monitored to determine how parasites impact their flight performance. With support from the National Science Foundation (NSF), Altizer and her team study how animal behavior, including long-distance migration, affects the spread and evolution of infectious disease.

In monarchs, the researchers study a protozoan parasite called Ophryocystis elektroscirrha, or “OE” for short. Up to two billion monarchs migrate every year to central Mexico, where Altizer and her colleagues capture, sample, and release hundreds of butterflies each day during their field study. Their work is providing new detail on how disease spread differs between human and animal populations.

Vampire bats may not have the beauty factor that monarch butterflies do, but they are important in Altizer’s study of how human activities affect the spread of infectious diseases by animals. In Peru, University of Georgia postdoctoral researcher Daniel Streicker focuses on these bats, whose populations have exploded in recent years as ranchers have introduced livestock into the Andes and the Amazon. More bloodthirsty bats might mean more rabies. Streicker and Altizer say that the results of this study will improve rabies control efforts in Latin America, where vampire bats cause most human and livestock rabies cases.
Provided by the National Science Foundation
So your body runs on glucose. Glucose belongs to a class of chemicals called carbohydrates. And before we go any further we must look at what carbohydrates are and how they work.
Carbohydrates are very sensibly named – their most basic structure is carbon + water. That is, their basic building block has the formula CH2O, and the common dietary sugars are built from 6-carbon multiples of this unit. So glucose is C6H12O6.
This 6-carbon unit is the simplest sugar structure, so it is referred to as a monosaccharide.
Table sugar (sucrose) is C12H22O11, formed when two monosaccharide units join together and release a molecule of water.
So sucrose is a disaccharide, composed of two monosaccharides.
And the sky’s the limit – saccharide units can be added together to infinity, like chemical lego blocks. An example of a polysaccharide is starch, a long chain of glucose units.
So a complex carbohydrate like starch is made up of many glucose units. Before your body gets fuel, therefore, it must break these complex molecules into their component glucose molecules.
But the real question is – how quickly does this happen? If the breakdown happens quickly, then your blood is flooded with glucose. This can cause a serious medical condition called hyperglycemia, so your body removes the excess glucose by releasing insulin into your blood from your pancreas.
This has two long-term effects – firstly, it causes you to put on weight, as the excess sugar is converted to fat. But it can also cause you to develop type 2 diabetes, as your pancreas eventually gets overloaded and just gives up – this is why obesity and type 2 diabetes so often go together.
The rate at which a food releases glucose is referred to as its glycemic index (GI), and is the single most important factor in determining whether the food is fattening or not.
Now, this produces some weird outcomes. For example, look at the label on Nutella and you’ll see it’s loaded with sugar and fat – but it has a low GI. The reason is simply that the fat slows down the rate at which the sugar is digested and absorbed.
And this is a pattern – often the fibre in a food slows down the rate at which its sugar is absorbed, so you are far better off, for example, eating whole fruit rather than drinking fruit juice.
For further reading, have a look at Eat Yourself Slim, which explains all this in fine detail |
(Editor’s Note: This story was originally published April 14, 2010)
Researchers at the Massachusetts Institute of Technology have discovered a process that allows them to imitate photosynthesis—a potentially critical breakthrough in the search for clean, sustainable energy.
Photosynthesis is the process by which plants harvest the power of sunlight, using its energy to convert carbon dioxide and water into organic compounds, especially sugars, and releasing oxygen along the way. Photosynthesis occurs in plants, algae, and many species of bacteria. By replicating the process, solar energy proponents hope to make unlimited amounts of “green” energy from water and sunlight alone.
The breakthrough was announced by Angela Belcher, the Germeshausen Professor of Materials Science and Engineering and Biological Engineering at MIT.
Writing in the current issue of Nature Nanotechnology, Belcher said, “Our results suggest that the biotemplated nanoscale assembly of functional components is a promising route to significantly improved photocatalytic water-splitting systems.”
In practical terms, the research could provide an inexpensive way to split water into hydrogen and oxygen. The hydrogen could then be used as a fuel source for vehicles or fuel cells. For dreamers, it means that you could store water in your car (or home, or wherever) and simply split it into hydrogen and oxygen on the fly.
Belcher and her team took a harmless virus called M13. They engineered it so that one end carries a catalyst—iridium oxide. Bound at the other end are light-sensitive pigments, zinc porphyrins. The porphyrins capture light energy, and transmit it along the virus, acting as a wire, to the other end, activating the catalyst. That process splits water into oxygen and the constituents of hydrogen, a proton and electron.
“The role of the pigments is to act as an antenna to capture the light and then transfer the energy down the length of the virus, like a wire,” Belcher said in her paper. “The virus is a very efficient harvester of light, with these porphyrins attached.”
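In textbook terms (the articles do not spell this out), the step the iridium oxide catalyst drives is the standard water-oxidation half-reaction 2H2O → O2 + 4H+ + 4e−; a second catalyst is then needed to recombine those protons and electrons into hydrogen gas, which is the recombination work described below.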
For now, Belcher estimates that a prototype device capable of splitting water into oxygen and hydrogen could be ready within about two years.
There is another problem as well. For now, the process extracts the oxygen just fine, but the hydrogen ends up split into its component protons and electrons. In the second phase of the project, Belcher and her team will work on recombining these components into proper hydrogen atoms and molecules. They also need to find a cheaper catalyst.
According to Thomas Mallouk, the DuPont Professor of Materials Chemistry and Physics at Pennsylvania State University, for this process to actually be cost-competitive with other approaches to solar power, it has to be at least ten times more efficient than natural photosynthesis, be repeatable a billion times, and use less expensive materials.
But for now, the really hard part seems to be over.
VIDEO: Photosynthesis (Simple Science via Vimeo.com)
GM viruses offer hope of future where energy is unlimited
Breakthrough as US researchers replicate photosynthesis in laboratory
The Independent (London) April 13, 2010
Biologically templated photocatalytic nanostructures for sustained light-driven water oxidation
Nature Nanotechnology, April 11, 2010
Engineered Virus Harnesses Light To Split Water
Scientific American, April 14, 2010
Virus to help split water into hydrogen for fuel cells?
PaulTan.org, April 14, 2010 |
Mapping the changing forests of Africa
A new biomass map of Africa will help answer a complex question: what are the global and local effects of land-use change in African forests?
By Stephanie Renfrow
In the Central African Bwindi forest in Uganda, a gorilla sits on the forest floor nursing her young. A few miles away, a subsistence farmer burns a patch of forest in preparation for a crop that will feed his family. And as the smoke from the burning forest floats into the sky, carbon dioxide (CO2) drifts into the Earth's atmosphere.
The gorilla, the farmer, and the burning forest's emissions are interconnected by a single phenomenon: a change in the way people use land. More than 900 million people live in Africa, and many of them rely on traditional slash-and-burn agriculture to survive lives of profound poverty. Slash-and-burn fires in developing countries contribute a significant amount of CO2 to the atmosphere; up to a third of all global CO2 emissions comes from land-use changes, including agricultural fires. Carbon dioxide is one of the greenhouse gases that are causing our planet's average surface temperatures to rise.
But land-use change does more than just add to our world's growing burden of CO2. Land-use change also affects and threatens entire ecosystems and the plants and animals within them. In the case of the Central African forests, land-use change has contributed to pushing three species of Great Ape to the edge of extinction. Sadly, the very people who burn the forests to survive can deepen their own plight if they run out of the vital fuel and resources the forests provide. Land-use change and its global and local effects are interrelated from the point of view of Nadine Laporte, a scientist at the Woods Hole Research Center in Woods Hole, Massachusetts. Laporte is the director of the Africa Program, which studies African land-use planning and forest management. For Laporte, finding a way to address rising CO2 and dwindling Great Apes populations, as well as helping to improve forest management, are central to her day-to-day work.

A biomass map of Africa
In the spring of 2005, Laporte accepted an invitation to visit Uganda's National Forest Authority. While there, she learned that the Ugandans were trying to use maps of the forest's biomass—trees, plants, and other living matter—to help them manage the land use of their forests. "By law, the government of Uganda must come up with an estimate of the forest biomass every five years," Laporte said. On-the-ground field surveys are expensive and time-consuming, so the Ugandans had turned to satellite imagery, also called remote sensing data, as a logical solution for creating a map. "But the last time they produced a map," Laporte said, "they used high-resolution satellite imagery and it took them almost ten years to produce." Laporte thought she knew why.
The satellite imagery the Ugandans used is called Satellite Pour l'Observation de la Terre (SPOT) data. The SPOT sensor collects data at a resolution of two-and-a-half to twenty meters, and its image footprint covers areas measuring sixty kilometers by eighty kilometers (thirty-seven miles by fifty miles). "It's like covering the whole country with hundreds of little tiles that you have to put together," Laporte said. "It's very time consuming to do that, especially with limited resources."
Laporte knew of another option: data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor, flying aboard NASA's Terra satellite. MODIS data are distributed by NASA's Land Processes Distributed Active Archive Center (LP DAAC) at the United States Geological Survey's Earth Resources Observation and Science (EROS) Data Center in Sioux Falls, South Dakota. The data have a lower resolution than SPOT data; the specific data set that Laporte used has a resolution of one kilometer (six tenths of a mile). However, MODIS data are organized in large tiles that cover entire regions, so MODIS data were a more manageable choice for the project. The Ugandans welcomed Laporte's help and expertise. Laporte said, "For all of Uganda, we only had to process two giant tiles from MODIS instead of hundreds of tiny tiles from SPOT."
Laporte realized that the biomass map that the Ugandan government needed for forest management was actually one piece of a much larger puzzle. "The same remote sensing data sets can have different applications according to the question you want to answer," she said. She decided to propose an expanded version of the Ugandan biomass-mapping project to NASA through its Land-Cover Land-Use Change and Biodiversity and Ecological Forecasting programs. Laporte knew that the biomass project would be more useful if it extended to all of Africa rather than just focusing on Uganda. She also recognized that the biomass map of Africa could be used to seamlessly address multiple needs at the same time: quantifying CO2 emissions, helping conserve the Great Apes, and improving forest management.
Quantifying carbon dioxide emissions
When a farmer burns a patch of forest, carbon stored in the trees and forest biomass returns to the atmosphere as CO2; this exchange of carbon between the Earth and the atmosphere is called the carbon cycle. By changing its land use, the forest patch goes from being a "carbon sink," which stores carbon, to being a "carbon source," which gives off CO2. Different countries produce different amounts of CO2, depending on various factors, including the amount of forest that is burned within their borders. However, the amount of carbon released into the atmosphere also varies depending on the type of forest burned. Evaluating the forest types of Central Africa is one of the main goals of the Africa biomass map project.
Alessandro Baccini, a remote-sensing scientist at Boston University who is collaborating with Laporte on the biomass map, explained the reason that different forests store differing amounts of carbon. "If you log a young forest, the amount of carbon released may not be very big," he said. "But if you log a mature forest, the amount of carbon released is probably much higher because the trees are taller and have a larger diameter." So, if scientists assume one average biomass amount for an entire country or region, their carbon estimates probably will not be accurate.
Laporte said, "That is why it's important to know the type of forest. Using MODIS imagery, we can determine the biomass of particular forests; then we know how much CO2 is released when those forests are burned." However, to effectively use satellite imagery to estimate the biomass that a forest represents, scientists first need some sample field data to check against. Baccini said, "We need to know how much biomass we have on the ground in a particular spot. Then, using this field data, we can calibrate the relationship between actual biomass and the remote-sensing data."
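In its simplest imaginable form, such a calibration is just a regression of field-measured biomass against a satellite-derived quantity. The C sketch below fits an ordinary least-squares line to invented plot data and then applies it to a new pixel; the project's actual calibration is not described at this level of detail and is certainly more sophisticated.

```c
#include <stdio.h>

/* Toy calibration: fit biomass = a + b * index by ordinary least squares,
   then apply the fitted line to a new satellite observation. All numbers
   are invented for illustration. */
int main(void) {
    /* Field plots: satellite vegetation index vs. measured biomass (t/ha). */
    double index[]   = { 0.21, 0.35, 0.48, 0.60, 0.72, 0.81 };
    double biomass[] = { 40.0, 95.0, 150.0, 210.0, 280.0, 330.0 };
    int n = 6;

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += index[i];
        sy  += biomass[i];
        sxx += index[i] * index[i];
        sxy += index[i] * biomass[i];
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);   /* slope     */
    double a = (sy - b * sx) / n;                           /* intercept */

    printf("fitted line: biomass = %.1f + %.1f * index\n", a, b);

    double new_index = 0.55;    /* a pixel with no field plot */
    printf("predicted biomass at index %.2f: %.1f t/ha\n",
           new_index, a + b * new_index);
    return 0;
}
```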
Richard "Skee" Houghton, a carbon modeler at the Woods Hole Research Center and collaborator on the project, agreed. "We're going to try to link the land-use change finely enough in space, at a specific location, to attach a biomass to it. That's what makes this map so much more difficult to produce, and it ought to be that much more accurate, too."
Once Laporte and Baccini have produced the biomass map and checked its accuracy, Houghton will use the information to model, or predict, the sources and sinks of carbon in Africa's forests. "The model is based on the processes of disturbance and recovery," Houghton said. "It tells us that if you cut down a particular area of rainforest, here's how much carbon was held in the rainforest, here's how much carbon would be held in a field of shifting cultivation, and here's how much carbon was released to the atmosphere when the forest was cleared." The model will be comprehensive enough to account for different land-use changes, including whether the forest is cut down for building materials, burned and farmed, or even replanted.
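Stripped to its bare arithmetic (with placeholder numbers, not values from Houghton's model), the bookkeeping for a single conversion looks like the sketch below: the carbon released is roughly the area cleared times the difference in carbon stored per hectare before and after, and multiplying by 44/12 converts tonnes of carbon to tonnes of CO2.

```c
#include <stdio.h>

/* Stripped-down bookkeeping of one land-use change. All numbers are
   invented placeholders; the real model tracks disturbance and recovery
   over time for many land-cover classes. */
int main(void) {
    double area_ha            = 1000.0;  /* hectares of forest cleared     */
    double forest_carbon_t_ha = 150.0;   /* carbon stored in mature forest */
    double crop_carbon_t_ha   = 5.0;     /* carbon stored in cropland      */

    double released_t = area_ha * (forest_carbon_t_ha - crop_carbon_t_ha);

    /* CO2 weighs 44 g/mol versus 12 g/mol for the carbon alone. */
    double released_co2_t = released_t * 44.0 / 12.0;

    printf("carbon released: %.0f t C (about %.0f t CO2)\n",
           released_t, released_co2_t);
    return 0;
}
```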
Using the completed model, Laporte will begin the analysis in earnest. "Starting with Central Africa, we'll be able to break down the contribution of CO2 by region and country." This information will help scientists understand the sources and sinks of carbon, as well as the role of land management in controlling carbon emissions from land-use change.
This map shows a detail of the Moderate Resolution Imaging Spectroradiometer (MODIS) data set that Laporte's team used to map biomass in Central Africa's Albertine Rift Zone. The dark green indicates higher biomass, light green indicates lower biomass, and lavender indicates savannas. The red rectangle highlights the Budongo Forest Reserve in southwest Uganda, where many of the surviving Great Apes are protected. (Courtesy Nadine Laporte, Alessandro Baccini).
Helping conserve the Great Apes
People are not the only ones affected by land-use change in Africa. Three of the four species of Great Apes—gorillas, chimpanzees, and bonobos—dwell in the forests of Central Africa and rely on the forests for their survival. They are under grave pressure because their habitat is diminishing as increasing numbers of people burn or cut down the forests. Laporte said, "In Uganda, only two small pockets of mountain gorilla habitat remain in vast areas converted for agriculture." According to House testimony from Marshall Jones, Deputy Director of the United States Fish and Wildlife Service, gorilla, chimpanzee, and bonobo populations have been reduced by half since 1983.

Central African Great Apes live in forests and occasionally in bordering woodland or savanna areas. Their main staples are forest plants and fruit. Gorillas are the largest of the Great Apes; males may be 180 kilograms (400 pounds) and almost 2 meters (6 feet) tall when standing upright. Maintaining this large bulk requires extensive vegetation in which to forage. Chimpanzees and bonobos, although much smaller than gorillas, also require substantial and largely undisturbed tracts of forest for their survival. However, their forest habitat is rapidly vanishing, which is isolating populations and reducing their ability to survive.
The biomass map of Africa will be of great benefit to conservationists who are working to save the Great Apes from extinction. The Integrated Forest Monitoring System for Central Africa (INFORMS) project, which Laporte created in 2000, established the use of remote sensing to monitor Great Apes habitat change. Laporte said, "The Africa biomass map project could be seen as a natural continuation of INFORMS, but more oriented towards better information on the stock and volume of biomass throughout Africa."
The application of the biomass map to Great Apes conservation is clear. "The Great Apes are found in high biomass forests. Forests that have been degraded have lower biomass and are less likely to be good habitat for these animals," Laporte said. "Using the biomass map of Africa, we can predict potential habitat or prime habitat for them."
The biomass map will also help conservationists pool their efforts to save the remaining habitat that is most suitable for the diminishing populations of Great Apes. "The biomass map can be used as a layer of information as people who are monitoring the apes decide which areas of forest to focus on for protection," she said.
Improving forest management
Conservation of the forest for the benefit of Great Apes is closely tied to forest management. Forest managers must balance the needs of other species with those of our own. This means monitoring deforestation, carefully planning reforestation, and providing incentives for wise use of the forests.

Millions of people rely on Central African forests to survive lives of extreme poverty. Each year, individual farmers burn significant areas of forest to plant food crops for their families. In subsequent years, subsistence farmers may allow land cleared previously to regrow into savannas, woodlands, and forest. However, overall, more forest is being destroyed than is being created.
The biomass map of Africa that Laporte and her team are working to produce will be valuable to forest managers in African countries, including Uganda. "Most of Uganda has been converted to agriculture, and they don't have much forest left," Laporte said. "Most of the logs that they use are imported from outside the country." In a country with few and dwindling forest resources, forest managers want accurate, current information that will help them determine how to manage the forest and where to focus planting efforts. Government officials could also use the information to determine where to provide social assistance or incentives for sustainable use of the forest.
The biomass map of Africa has already pinpointed some interesting information for forest managers. "We've been catching areas that used to be savannas with few trees and which are now like woodlands, with higher biomass," Laporte said. "We think that might be associated with regrowth." Comparing future versions of the map would help forest managers monitor the success of forest conservation and forest restoration projects.
The possibilities for applying the biomass maps to future forest management efforts are extensive. "One future direction for the work is that I would like to develop the link between biomass and poverty," Laporte said. "If we find low biomass in an area that we would predict should have high biomass, we can predict fuel wood scarcity. And the poorer you are, the more you depend on natural resources like forests for land, food, and fuel." The indication of a high-risk, high-poverty area could help identify places that need rapid, focused attention by forest managers, national governments, and international aid groups.
"Once the forest is gone," Laporte said, "starvation and increased poverty follow."
Mapping Africa into the future
Given the urgency for forest managers and conservationists, the more frequently Laporte's team can update the biomass map, the more relevant and helpful it will be.
"In June, I'll be in Uganda to do some validation work on the map," Laporte said. "We're hoping that maybe in a year or so, we'll be able to produce these biomass maps on a routine basis for the Uganda National Forest Authority." The current version of the biomass map took the team six months to produce, with calibration and polishing still left to do; they hope to finish the map sometime this year. Laporte hopes her team will eventually be able to provide maps annually to conservationists and forest managers.Once the team has used the polished map and deforestation rates to develop the carbon model, the focus will be on understanding specific areas of the carbon story. "We hope to get a better estimate of CO2, establish an annual rate of deforestation, and calculate how much CO2 is going into the atmosphere," Laporte said. "We'd even like to predict CO2 on an annual basis."
These details will be of interest as scientists compare carbon sources and sinks within and among regions and continents to better understand global warming. Plus, Baccini said, "If the technique works well in Africa, maybe we will expand it and have something more extended to cover other continents."
A better understanding of carbon's sources and sinks could also be useful as governments attempt to sort out policies to curb global carbon emissions. Houghton said, "In a world of carbon credits, you can imagine that countries will get credits or debits for sources and sinks of carbon that result directly from their land management practices." Being able to quantify a nation's carbon cycle would be an important part of that system.
Whether it's a government official, an international conservationist, or a forest manager, Laporte said, "I want to identify what the needs are—what people need so they can do a better job. And after we produce a finished product, then we will know that it has filled a real need." The MODIS biomass map of Africa is already on its way to addressing not just one need, but three: improving our understanding of the carbon cycle, adding to the knowledge-base of Great Apes conservationists, and providing a tool to help forest managers use their nations' resources wisely.
Sever, M. Tracking Gorilla Habitat Changes. Geotimes. Accessed June 14, 2006.
Frequently Asked Questions about the Science of Climate Change. Meteorological Service of Canada. Accessed June 14, 2006.
Testimony of Marshall P. Jones, Deputy Director, United State Fish and Wildlife Service. U.S. Fish and Wildlife Service. Accessed June 14, 2006.
For more information
NASA Land Processes Distributed Active Archive Center (LP DAAC)
About the remote sensing data used
Sensor: Moderate Resolution Imaging Spectroradiometer (MODIS)
Data sets: Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance
DAAC: NASA Land Processes Distributed Active Archive Center (LP DAAC)
Last Updated: May 14, 2019 at 9:10 AM EDT |