Measuring body composition using bioelectrical impedance is simple and easy. Electrodes pass a very weak electrical current through the body (you cannot feel it) to test the resistance to current flow. Electricity moves more easily through muscle and bone, which contain more water, than it does through fat. The device measures this difference in resistance to estimate how much fat you have on your body.
Types of Measurement Devices
Handheld (upper-body) - Enter your weight and height, then hold the device at arm's length while gripping both hand grips. These models give a reasonable body composition percentage. However, a handheld unit can only measure body fatness in the upper body because it doesn't have electrodes for the feet. Electricity flows up one arm, through the chest, and down the other arm. It misses the entire lower body, so it probably won't give an accurate measurement of fat loss around your thighs, hips, abdomen, and lower body.
Lower Body- This is basically a scale with electrodes for each foot. It works well, and it is very simple to use. With this type, electricity passes up one leg, through the abdomen, and down the other leg to the second electrode. This model could slightly overestimate body fat percent because most people have more fat around the hips, abdomen, and thighs.
Full Body Composition- The most accurate type of bioelectrical impedance testing device available. It has electrodes for the hands and feet making the electrical current pass through the entire body. This is how bioelectrical impedance testing is done in a medical lab setting. With this type you can accurately measure changes in body fat throughout your entire body as you progress through a fitness program.
Other Options for Measuring Body Fatness
Skinfolds - Requires a skinfold caliper; pinch the skin at several sites on the body and use the caliper to measure the thickness of each fold. Use the Jackson-Pollock equations to estimate percent body fat from the skinfold thicknesses. When used correctly, calipers give a reliable, clinically accepted measure.
Body Mass Index (BMI) - The Body Mass Index does not measure percent body fat; it is a ratio of weight to height: BMI = weight (kg) / height (m) squared. It is used to classify a person as underweight, normal, overweight, or obese. However, since it cannot measure body composition (percent fat), it is not a perfect measure.
Body-Girth Measurements - Use a tape measure to determine the circumference of your body at different points and plug those numbers into equations to estimate body fatness. This also does not measure body composition directly, so it is only a loose guide. The waist-to-hip ratio compares the circumference of your waist to the circumference of your hips. This number can be used to predict disease risk because internal fat (inside the body cavity) is more dangerous than fat that sits right under the skin.
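Both of these indices are simple ratios, so they are easy to compute yourself. Here is a minimal sketch in Python; the function names and sample numbers are illustrative only, not from any standard tool:

```python
# Two quick body-measurement ratios described above.
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def waist_to_hip_ratio(waist_cm, hip_cm):
    """Waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

# Example: a 70 kg person, 1.75 m tall, with an 80 cm waist and 100 cm hips.
print(round(bmi(70, 1.75), 1))                # 22.9
print(round(waist_to_hip_ratio(80, 100), 2))  # 0.8
```

Keep in mind that neither number is a direct measure of body composition; both are screening ratios only.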
One of the most popular bioelectrical impedance measuring devices, this handheld system is easy to use and can provide a decently accurate measure of your percent body fat. It has several settings you can choose from depending on whether you are an athlete or not.
This system measures fatness throughout your entire body. It is designed to pass electrical current through your hands and feet. This allows it to measure body composition throughout a large portion of your body.
This device has a high weight capacity, and it measures your percent body fat and weight in one compact device.
World’s thinnest material used to create world's smallest transistor
By Darren Quick
April 20, 2008
In recent decades, manufacturers have crammed more and more components onto integrated circuits, roughly keeping pace with Moore’s Law. But for this to continue the semiconductor industry must overcome the poor stability of materials if shaped in elements smaller than 10 nanometres in size. At this spatial scale, all semiconductors, including silicon, oxidise, decompose and uncontrollably migrate along surfaces like water droplets on a hot plate. Now researchers at the University of Manchester, reporting their peer-reviewed findings in the latest issue of Science, have shown that it is possible to carve out nanometre-scale transistors from a single graphene crystal. Unlike all other known materials, graphene remains highly stable and conductive even when it is cut into devices one nanometre wide.
Graphene is a one-atom-thick gauze of carbon atoms resembling chicken wire, discovered in 2004 by Professor Andre Geim and Dr Kostya Novoselov of the School of Physics and Astronomy at The University of Manchester. It is the first known one-atom-thick material and can be viewed as a plane of atoms pulled out of graphite. Graphene has rapidly become a hot topic in physics and materials science, with graphene transistors starting to show advantages and good performance at sizes below 10 nanometres - the miniaturization limit at which silicon technology is predicted to fail. Their findings show that graphene can be carved into tiny electronic circuits, with individual transistors not much larger than a molecule - and the smaller the transistors, the better they perform, say the Manchester researchers.
"Previously, researchers tried to use large molecules as individual transistors to create a new kind of electronic circuits. It is like a bit of chemistry added to computer engineering", says the University’s Dr Kostya Novoselov. "Now one can think of designer molecules acting as transistors connected into designer computer architecture on the basis of the same material (graphene), and use the same fabrication approach that is currently used by semiconductor industry".
"It is too early to promise graphene supercomputers," adds Geim. "In our work, we relied on chance when making such small transistors. Unfortunately, no existing technology allows cutting materials with true nanometre precision. But this is exactly the same challenge that all post-silicon electronics has to face. At least we now have a material that can meet such a challenge."
Graphene may not only extend the life of Moore’s Law: the researchers have also found that the world’s thinnest material absorbs a well-defined fraction of visible light. This promises insights into the "fine structure constant", which defines the interaction between fast-moving electrical charges and light (electromagnetic waves) and, like the speed of light, is one of the fundamental constants underlying the very nature of existence.
The yellow arrow in the picture identifies the position of the black hole transient inside the galaxy Centaurus A, which is 12 million light-years away from Earth. The location of the object is coincident with gigantic dust lanes that obscure visible and X-ray light from large regions of Centaurus A.
A relatively "ordinary" black hole has been discovered in a distant galaxy, in what is the first time that a low-mass black hole has been found so far beyond our own Milky Way, scientists say.
An international team of researchers detected a so-called "normal-size" black hole in the distant galaxy Centaurus A, which is located about 12 million light-years away from Earth. By observing the black hole's X-ray emissions as it gobbles material from its surrounding environment, the scientists determined that it is a low-mass black hole, one likely in the final stages of an outburst and locked in a binary system with another star.
The object is typical of similar black holes inside our Milky Way galaxy, but the researchers' observations suggest that this is the first time that a normal-size black hole has been detected so far beyond the vicinity of our home galaxy. Its discovery gives astronomers the chance to characterize the population of black holes in other galaxies, they said.
"So far we've struggled to find many ordinary black holes in other galaxies, even though we know they are there," Mark Burke, a graduate student at the University of Birmingham in the U.K., said in a statement. "To confirm (or refute) our understanding of the evolution of stars we need to search for these objects, despite the difficulty of detecting them at large distances." [Photos: Black Holes of the Universe]
Burke is presenting the discovery at the U.K.-Germany National Astronomy Meeting, which is being held this week in Manchester, England. He was part of an international team of astronomers led by Ralph Kraft of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass.
The researchers used NASA's orbiting Chandra X-ray Observatory to make six 100,000-second long exposures of the galaxy Centaurus A.
Their observations unearthed an object with 50,000 times the X-ray brightness of our sun, but a month later, it had dimmed by more than a factor of 10. Later, it had dimmed even more, by a factor of more than 100 to become undetectable, the researchers explained.
This type of behavior is typical of similar black holes in the Milky Way, but also indicates that the object is likely in the final stages of an outburst.
Black holes can't be seen, but astronomers can observe their surrounding activity to find them. The lowest-mass black holes are formed when very massive stars reach the end of their lives and eject most of their material into space in a violent supernova explosion. This blast leaves behind a compact core that collapses into a black hole.
Scientists estimate that there may be millions of these low-mass black holes distributed throughout every galaxy, but they are hard to detect because they do not emit light. Astronomers detect them by observing activity around them, such as when black holes drag in material, which is then heated and emits X-ray light.
Still, an overwhelming majority of black holes remain undetected. In recent years, researchers have made some progress in discovering ordinary black holes in binary systems. Scientists look for the X-ray emission produced when they siphon material from their companion stars.
So far, these detections have been relatively close in astronomical terms, either inside the Milky Way galaxy or in a cluster of nearby galaxies known as the Local Group. The low-mass black hole found by Burke and his colleagues opens up the opportunity for astronomers to better understand the black hole population of other galaxies.
The team now plans to examine more than 50 other bright X-ray sources within Centaurus A to either identify them as black holes or other exotic, luminous objects.
"If it turns out that black holes are either much rarer or much more common in other galaxies than in our own it would be a big challenge to some of the basic ideas that underpin astronomy," Burke said.
Mercury is often found in fish. Developing fetuses and young children are more vulnerable to the effects of mercury, which may cause developmental delays. Pregnant women are advised to be selective about the type and amount of fish they eat during pregnancy. Fish that contain higher levels of mercury include shark (flake), ray, swordfish, barramundi, gemfish, orange roughy, ling and southern bluefin tuna.
Mercury is a naturally occurring element that is found in air, water and food. Most people are exposed to mercury via food. Fish take up mercury from streams and oceans as they feed. This mercury is in the more toxic, methylmercury form. It binds to a person's tissue proteins (such as muscle). Food processing, preparation and cooking techniques don’t significantly reduce the amount of mercury in fish.
Pregnant women - or, rather, their unborn babies - are at the greatest risk. Babies developing in the uterus (womb) seem to be most vulnerable to the effects of mercury on their nervous systems. The mercury may slow their development in the early years. Research is ongoing, but women should be selective about the kinds and amounts of fish they eat during pregnancy. Infants and young children should also be limited in the amount of fish with high levels of mercury that they eat.
Methylmercury is the most hazardous
Mercury is common in the environment and has three forms: organic, inorganic and metallic. The organic form of mercury, particularly methylmercury, is the most dangerous.
Fish absorb methylmercury
Methylmercury in fish mainly comes from mercury in ocean sediment that is transformed into methylmercury by microorganisms. This organic form of mercury is absorbed by the tissues of fish through their gills as they swim and through their digestive tracts as they feed.
Some fish contain more mercury than others
Mercury levels differ from one species of fish to the next. This is due to factors such as the type of fish, size, location, habitat, diet and age. Fish that are predatory (eat other fish) are large and at the top of the food chain, and so tend to contain more mercury.
Fish that contain higher levels of mercury include:
- Shark (flake)
- Ray
- Swordfish
- Barramundi
- Gemfish
- Orange roughy
- Ling
- Southern bluefin tuna.
Fish with lower mercury levels
Examples of fish that contain lower levels of mercury include:
- Shellfish including prawns, lobsters and oysters
- Canned tuna.
Fish as part of the diet
Fish is an important part of a healthy diet. Some of the health benefits of fish include that it is:
- High in protein
- Low in saturated fat
- High in unsaturated fat
- High in omega-3 oils.
Mercury and the unborn baby
Unborn babies are at increased risk from mercury. The mercury in fish can lead to raised mercury levels in the mother. This mercury can be passed on through the placenta to her developing baby.
The fetus appears to be most sensitive to the effects of mercury during the third and fourth months of a pregnancy. The effects on the brain and nervous system may not be noticed until developmental milestones - such as walking and talking - are delayed. Memory, language and attention span may also be affected.
International researchers recommend reducing safe levels of mercury
Studies of the brain development of children whose mothers ate significant amounts of fish with high mercury levels during pregnancy have been carried out in New Zealand, the Faroe Islands and the Seychelles.
The Joint Food and Agriculture Organization (FAO) and World Health Organization (WHO) Expert Committee on Food Additives (JECFA) reviewed these studies in June 2003. These researchers recommended reducing the amount of fish known to contain mercury in the diet, particularly for pregnant women.
Australian research shows that mercury levels in some fish, particularly shark, could be even higher than in the areas studied for this research. In fact, it seems that mercury levels in some shark species caught in Victorian waters are particularly high.
Australian guidelines for safe levels of mercury in the diet
The Australian guidelines for safe levels of mercury in the diet were revised in 2004 by Food Standards Australia New Zealand (FSANZ). Advice on the consumption of fish was updated to reflect the Joint FAO and WHO Expert Committee (JECFA) research. Advice was extended to cover pregnant women, women intending to become pregnant within the next six months, young children and the general population.
Pregnant women should limit the amount of fish they eat
It is suggested that pregnant women eat 2–3 serves of fish every week for the good health of themselves and their developing baby. However, pregnant women or women intending to become pregnant within the next six months should be careful about which fish they eat. Some types of fish contain high levels of mercury, which can be harmful to the developing fetus.
Pregnant women should:
- Limit to one serve (150g) per fortnight – billfish (swordfish, broadbill and marlin) and shark (flake), with no other fish eaten in that fortnight.
- Limit to one serve (150g) per week – orange roughy (deep sea perch) or catfish, with no other fish eaten that week.
- Eat 2–3 serves per week – of any other fish or seafood (for example, salmon or tuna).
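For illustration only, the three limits above can be encoded as a small lookup table. The category names and helper function below are hypothetical, and the serve counts are simply the figures quoted in this list:

```python
# FSANZ-style serving limits for pregnant women, as quoted above
# (150 g serves; category names are illustrative, not official).
LIMITS = {
    "billfish_or_shark": ("fortnight", 1),    # no other fish that fortnight
    "orange_roughy_or_catfish": ("week", 1),  # no other fish that week
    "other_fish_or_seafood": ("week", 3),     # 2-3 serves per week
}

def within_guideline(category, serves):
    """Return True if `serves` in the category's period stays within the limit."""
    _period, max_serves = LIMITS[category]
    return serves <= max_serves

print(within_guideline("other_fish_or_seafood", 2))  # True
print(within_guideline("billfish_or_shark", 2))      # False
```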
Women should not be worried if they’ve had the odd meal of fish with high levels of mercury. It is only a potential problem when that type of fish is eaten regularly, which causes a build-up of mercury in the mother’s blood.
Mercury and breastfeeding
Methylmercury from fish eaten by women during pregnancy seems to only pose a health threat to the baby while it is in the womb. Once the baby is born, the levels of mercury in the mother’s milk are not high enough to be a risk to the infant.
Infants and children
Parents and carers of infants and young children (up to six years of age) are advised to:
- Limit their child to one serve (75g) per fortnight – billfish (swordfish, broadbill and marlin) and shark (flake), with no other fish eaten in that fortnight.
- Limit their child to one serve (75g) per week – orange roughy (deep sea perch) or catfish, with no other fish eaten that week.
- Encourage their child to eat 2–3 serves per week – of any other fish or seafood (for example, salmon or tuna).
Our body does clear out the mercury
It’s important to remember that the body can and does get rid of mercury over time. So people only go over the safe levels if they eat a lot of high mercury-containing fish regularly over many months.
Where to get help
- Your doctor
- Dietitians Association of Australia Tel. 1800 812 942
- For further information contact Food Standards Australia New Zealand on Tel. (02) 6271 2222
Things to remember
- Fish that contain high levels of mercury include shark, orange roughy, swordfish and ling.
- Mercury is a naturally occurring element that is found in air, water and food.
- The unborn baby is most sensitive to the effects of mercury, particularly during the third and fourth months of gestation.
- Pregnant women, women planning a pregnancy and young children (up to six years) should avoid consumption of fish that contain high levels of mercury.
This page has been produced in consultation with and approved by the Department of Health.
Last reviewed: January 2013
Content on this website is provided for education and information purposes only. Information about a therapy, service, product or treatment does not imply endorsement and is not intended to replace advice from your doctor or other registered health professional. Content has been prepared for Victorian residents and wider Australian audiences, and was accurate at the time of publication. Readers should note that, over time, currency and completeness of the information may change. All users are urged to always seek advice from a registered health care professional for diagnosis and answers to their medical questions.
For the latest updates and more information, visit www.betterhealth.vic.gov.au
Copyright © 1999/2015 State of Victoria. Reproduced from the Better Health Channel (www.betterhealth.vic.gov.au) at no cost with permission of the Victorian Minister for Health. Unauthorised reproduction and other uses comprised in the copyright are prohibited without permission.
Common Health Tests (cont.)
A very important role for screening is in detecting cancer at an early stage. Although screening does not prevent cancer, it may diagnose the condition at its most treatable stage, though this is not always the case.
- A good example of this is lung cancer. Lung cancer is the leading cause of cancer-related deaths in the United States. It may seem sensible that a screening chest X-ray should be used to diagnose lung cancer. In general, though, a simple screening chest X-ray does not diagnose the condition early enough to have any significant impact on survival.
Which cancers can be detected using common health tests? Several types of cancer can be screened for. These include cancers of the breast, cervix, testicles, colon and rectum, and skin. There does seem to be a definite impact on survival when cancer is detected early and treated appropriately.
- Some other cancers can be screened for, but it is not clear that identification results in an improvement in the long-term survival.
- The American Cancer Society offers a complete discussion of cancer screening and detection on its web site (http://www.cancer.org/).
On May 9 CO2 reached 400 ppm at the National Oceanic and Atmospheric Administration’s (NOAA) monitoring centre at Mauna Loa and at the Scripps Institution of Oceanography in San Diego, California. This is what’s been happening over the last 130 years in broad terms:
It seems many news organisations, for example the BBC, and some scientists are stressing that the last time concentrations were so high was 3 to 5 million years ago. In the linked article Michael Mann, director of the Earth System Science Center at Penn State gives a different view:
Mann said the last time scientists are confident that CO2 was sustained at the current levels was more than 10 million years ago, during the middle of the Miocene epoch.
Back then, global temperatures were hotter, ice was sparse and sea level was dozens of meters higher than it is today.
This graph gives an idea of how the temperatures played out over the last 65 million years:
I don’t think you can directly compare what the sea level would have been then with now because the shape of the ocean basins was possibly significantly different. Also, I understand the error margins in ascertaining what it would have been then are quite high.
Nevertheless it’s no comfort that back then there would have been no Greenland ice sheet (worth 6-7 m) and little West Antarctic ice sheet (5-7 m). The East Antarctic ice sheet would have been smaller (59 m in all) and most likely there were no land glaciers and ice caps elsewhere (0.5 m). Thermal expansion would have also seen sea levels higher. So 25 to 30 metres seems about right.
Actually Aradhna Tripati and colleagues had a look at this back in 2009. They put equivalent CO2 concentration levels back 15 million years, but their story on ice and sea levels is much as I’ve just described. They put temperatures 5 to 10F (roughly 3-5C) higher. They produced this graph comparing palaeontological CO2 levels with IPCC forecasts:
This graph from the Mauna Loa site shows how the concentrations have stepped up over time:
James Hansen’s Iowa testimony (p39) tells us that during the Cenozoic era CO2 levels changed at the rate of 100 ppm per million years, that’s 0.0001 ppm each year. Now we are doing it about 25,000 times faster.
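That comparison is easy to check with back-of-envelope arithmetic; the ~2.5 ppm per year modern rate below is my assumption for the recent Mauna Loa trend, not a figure from the testimony:

```python
# Cenozoic rate quoted by Hansen: 100 ppm per million years.
cenozoic_rate = 100 / 1_000_000   # ppm per year
modern_rate = 2.5                 # ppm per year (assumed recent Mauna Loa trend)

print(cenozoic_rate)                       # 0.0001
print(round(modern_rate / cenozoic_rate))  # 25000
```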
As Michael Mann said:
“There is no precedent in Earth’s history for such an abrupt increase in greenhouse gas concentrations.”
Finally there are other greenhouse gases. CO2e or CO2 equivalent covers what is sometimes known as the ‘Kyoto six’ under the Kyoto Protocol. The main ones other than CO2 are methane and nitrous oxide. This image from the World Resources Institute gives the mix as of 2005:
Unfortunately I can’t find current levels or historical graphs, but on that basis CO2e would now be 519. That can’t be good!
The only way forward now is back: to retrace our steps and seek to return atmospheric concentrations to around 350ppm
George is his usual pessimist/realist self.
He sees our chances of preventing climate breakdown as close to zero.
So here stands our political class at a waystation along the road of idiocy, apparently determined only to complete the journey.
Update: Andrew Glikson has an interesting article at The Conversation. He’s an earth and paleo-climate scientist and, as such, compares present CO2 levels with the Pliocene. With the temperature 2-4C higher and sea levels about 25m higher, it was a different place:
Life abounded during the Pliocene. But such conditions mean agriculture would hardly be possible. The tropical Pliocene had intense alternating downpours and heat waves. Regular river flow and temperate Mediterranean-type climates which allow extensive farming could hardly exist under those conditions.
I love that photo of earth from Voyager 1:
Definition: The word "scan" has several meanings in information security. One relates to a command that causes anti-virus software installed on a computer or network to examine the files on that computer. Another refers to a technique that crackers use to determine whether any "ports" are open to intrusion. A further variant describes what a tool like Nmap does when it examines an entire network infrastructure. Simply put, the ports on a computer are checked by another computer to learn whether a connection can be made. The practice is known as port scanning.
Its Relevance: Part of an organization’s vulnerability assessment must be to examine all possible intrusion points. This functionality is included in most anti-virus software. Both automated and manual sweeps of the system can be structured. An organization’s official policies must include and consistently apply assessments of its system's vulnerabilities.
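A minimal sketch of the port-scanning idea described above, using Python's standard socket library. The host and port list are arbitrary examples, and real assessments should only scan systems you are authorised to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port; return those that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open).
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Check a few well-known ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Tools like Nmap do essentially this, but with many more probe types (SYN scans, UDP scans, service fingerprinting) and far better performance.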
Middle East: Palestine
- Grades: 3–5, 6–8
The word "Palestine" comes from "Philistine," the name for one of its early peoples. The Roman province in this region was known as Syria Palestina. Palestine's boundaries have varied widely over its long history. Although it once extended over a wider area, it is generally thought of today as the geographical region extending from the Sinai Peninsula on the south to Lebanon and Syria on the north and from the Mediterranean Sea on the west to the Jordan River and the Dead Sea on the east.
The article that follows provides a brief historical overview of Palestine.
Palestine has been inhabited since prehistoric times. In about 2000 B.C., the Hebrews, a nomadic people then living in Mesopotamia (modern Iraq), began their migration to the land of Canaan, as Palestine was then known. The twelve Hebrew tribes were united under their first king, Saul, to form the kingdom of Israel, and they eventually controlled most of the region. In about 1000 B.C., Saul's successor, David, made the city of Jerusalem his capital. Israel reached the height of its power under King Solomon, son of David, but after his death in 922 B.C., it was divided into two rival kingdoms—Israel in the north and Judah in the south. (The term "Jew," which originally applied only to a Hebrew of Judah, eventually came to be used in referring to any Hebrew.)
Weakened by internal quarrels, the two kingdoms fell prey to stronger neighbors. In the 700's B.C., Israel was conquered by the Assyrians and its people were dispersed. Judah survived until the 500's B.C., when it fell to the Babylonians and many of its inhabitants were forced into exile. The Babylonians were succeeded by the Persians, under whom the Jews were allowed to return, and the Persians by the Greek and Macedonian armies of Alexander the Great. After Alexander's death in 323 B.C., his followers founded kingdoms in Egypt and Syria, which ruled Palestine in turn. Attempts by the Seleucid rulers of Syria to introduce Greek religious practices into the region in 167 B.C. provoked the Jews to revolt. After a long struggle led by Judah Maccabee and his brothers, they re-established an independent Jewish kingdom, which lasted until 63 B.C.
From 63 B.C. to about A.D. 630, Palestine was part of first the Roman and then the Byzantine empires. During the reign of Herod the Great, who ruled Judea (the Roman name for Judah) as a Roman protectorate, Jesus was born in the town of Bethlehem. It marked the humble beginning of what would become Christianity, which eventually spread throughout the Roman world.
Roman rule over Palestine led to repeated Jewish revolts. The first, from 66 to 73, resulted in the destruction of the Temple in Jerusalem, the center of Jewish worship, along with much of the city itself. A second revolt broke out in 115 and a third in 132. The last, led by Simon Bar Cocheba (or Bar Kokhba), while at first successful, was eventually put down with great harshness. An independent Jewish nation would not appear again in the region for more than 1800 years.
Shortly after 630, a powerful new force erupted out of the deserts of the Arabian Peninsula. The Arabs, united under the banner of Islam, the religion of the Muslims, swiftly conquered Palestine, along with most of the rest of the Middle East. Jews and Christians continued to live in Palestine, but Muslim Arabs became the dominant people of the region. Since Islam shared some of its traditions with the two earlier religions and Muslims believed that their prophet, Mohammed, had ascended to heaven from Jerusalem, Palestine became a holy land for them as well.
For most of the centuries that followed, Palestine remained under the rule of one or another Muslim dynasty. The exception was a period during the 1100's, when European Crusaders ruled Jerusalem and other small states in the region. In the 1500's, Palestine became part of the empire of the Ottoman Turks and remained so until the 1900's.
In the late 1800's and early 1900's, two nationalist movements—both involving Palestine—began to develop. The first was Zionism, which sought to re-establish a Jewish homeland in the region. From the 1880's on, considerable numbers of Jews from Europe settled in Palestine. A separate Arab nationalist movement, begun not long after, had as its aim independence from Ottoman rule. Arab nationalism was not focused specifically on Palestine, as Zionism was, but considered it part of the larger Arab community.
To this was added the role played by Britain, which sought the aid of both Arabs and Jews during World War I. In 1917, British forces occupied Palestine. That same year the British government issued the Balfour Declaration (named for Arthur J. Balfour, then the British foreign secretary). It pledged support for a Jewish homeland in Palestine while acknowledging the rights of its non-Jewish population. Arab leaders also claimed that Britain had promised to make Palestine part of an independent Arab state. These competing aspirations and claims set the stage for the clash between Jews and Arabs over the region that continues to the present day.
After World War I ended in 1918, the Ottoman Empire was broken apart, its core becoming the republic of Turkey. Palestine itself was placed under British administration in 1922 as a mandate of the League of Nations, the forerunner of the United Nations. The mandate also included land east of the Jordan River, where Britain established the state of Transjordan (now Jordan) in 1923.
Arab opposition to Zionist aims led to violent incidents in the 1920's. These grew worse during the 1930's, as increasing numbers of Jews, fleeing Nazi persecution in Europe, arrived in Palestine. Britain was increasingly hard-pressed to contain what had become an Arab rebellion. In 1939, just before the outbreak of World War II, the British government proposed the creation, within ten years, of an independent Palestine, composed of both Arabs and Jews but maintaining an Arab majority. Jewish immigration was to be limited and would end entirely within five years, unless approved by the Arabs. Both sides rejected the plan.
The murder of millions of European Jews by Nazi Germany during World War II intensified Zionist efforts to win a Palestinian homeland. After the war's end in 1945, the Jewish population swelled with the arrival of concentration camp survivors. Because of the immigration restrictions imposed by Britain, many of these refugees were smuggled into Palestine by Jewish underground groups, some of which used guerrilla warfare, sabotage, and terrorist tactics against the British forces.
In 1947 the British, unable to find a solution acceptable to both sides, turned the issue of Palestine over to the United Nations, which voted to partition the region into separate Jewish and Arab states. Jerusalem was to be an international city, administered by the United Nations. The Jews of Palestine and Zionists elsewhere generally accepted this decision. Palestinian Arabs and Arab nationalists almost universally opposed it.
As the British prepared to depart, the new Jewish state of Israel was proclaimed on May 14, 1948. It was invaded almost immediately by armies of neighboring Arab countries, beginning the first Arab-Israeli war. When the war ended in 1949, Israel had not only successfully defended itself but had won additional territory as well. During the fighting, Transjordan occupied the West Bank, the region west of the Jordan River (subsequently changing its name to Jordan), and Egypt took over the Gaza Strip. Both were areas that had been allotted to a Palestinian Arab state. The war left Jerusalem divided between Israel and Jordan. Large numbers of Palestinian Arabs fled from Israeli to Arab territory, particularly the West Bank and the Gaza Strip.
The question of a Palestinian Arab state has remained one of the main causes of hostility between Israel and the Arab countries. Three more Arab-Israeli wars followed—in 1956, 1967, and 1973. During the 1967 war, Israel gained control of the West Bank, all of Jerusalem, the Gaza Strip, and the Sinai peninsula of Egypt, as well as Syria's Golan Heights. Israel's occupation of the West Bank and the Gaza Strip brought large numbers of Palestinian Arabs under its control. Israel returned the Sinai following a peace treaty with Egypt concluded in 1979. The treaty also discussed, in general terms, the possibility of a Palestinian Arab state in the West Bank and Gaza, but there was no agreement on how the state would be set up and governed.
Palestinian Arab nationalism, meanwhile, found expression in militant guerrilla organizations. Most were included in an overall body, the Palestine Liberation Organization (PLO), headed by Yasir Arafat. The PLO was eventually accepted by Arab countries as the "sole legitimate representative of the Palestinian people" and was granted observer status by the United Nations. Although the diverse groups had different aims, most shared the goal of replacing Israel with a predominantly Arab state. Their activities included widespread terrorism, often carried out far from Palestine itself, and raids against Israeli settlements. Israeli opinion on the Palestinian question has been mixed. Some Israelis have been willing to trade territory for peace; others have opposed a Palestinian Arab state, fearing that it would mean the elimination of Israel.
In 1987, Arabs in the Gaza Strip launched an uprising against Israeli occupation that soon spread to the West Bank. In 1993, Israel and the PLO signed a historic agreement giving Arabs self-rule in the Gaza Strip and the West Bank city of Jericho. In 1994, Jordan signed a peace treaty with Israel. In 1996 the Arabs elected a self-rule Palestinian National Authority (PNA), headed by Arafat. By 2000, under a series of accords, the PNA controlled nearly 43 percent of the West Bank, an area containing about 60 percent of that region's Palestinian inhabitants. But the September 2000 deadline for a final accord was not met, chiefly because of disagreements over who would control Jerusalem. Palestinian and Israeli Arabs then launched a new uprising against Israel. As the death toll mounted, the focus shifted from achieving a lasting peace to preventing a war.
Coauthor, The Middle East: A Social Geography
This illustration shows a butterfly's wing across ten orders of magnitude, from the butterfly to the atoms of which it is made. Using the conventions of visual perspective, the image travels in one continuous “landscape” from the human scale at the top to the atomic scale in the foreground. Placing objects from the butterfly's wing in one frame clarifies connections between components, highlighting the system’s reliance on structures at very different scales. This version of the image is a poster with annotated text explaining the different objects in the image, but it is also available as a poster without annotations, a banner, or a graphic file, and it also appears on the "Everything is Made of Atoms" Poster with other parallel zooms into the human bloodstream and a computer chip.
- Objects are made of atoms: things are made of smaller and smaller parts.
- Nanometer-sized things are very small, and often behave differently than larger things do.
- Scientists and engineers have formed the interdisciplinary field of nanotechnology by investigating properties and manipulating matter at the nanoscale.
NISE Network products are developed through an iterative collaborative process that includes scientific review, peer review, and visitor evaluation in accordance with an inclusive audiences approach. Products are designed to be easily edited and adapted for different audiences under a Creative Commons Attribution Non-Commercial Share Alike license. To learn more, visit our Development Process page.
September is National Childhood Obesity Awareness Month
Learn about ways you can promote healthy growth in children and fight childhood obesity.
About 1 of every 5 (17%) children in the United States has obesity and certain groups of children are more affected than others. While there is no single or simple solution, National Childhood Obesity Awareness Month provides an opportunity for learning about ways to prevent and address this serious health concern.
Childhood obesity is a major public health problem.
- Children who have obesity are more likely to have obesity as adults. This can lead to lifelong physical and mental health problems, including diabetes and increased risk of certain cancers.
- Children who have obesity face more bullying and stigma.
- Childhood obesity is influenced by many factors. For some children and families, factors include too much time spent in sedentary activities such as television viewing; a lack of bedtime routine leading to too little sleep; a lack of community places to get adequate physical activity; easy access to inexpensive, high-calorie snacks and beverages; or a lack of access to affordable, healthier foods.
Riding bicycles is a great activity to help children maintain a healthy weight.
Being physically active improves children’s overall health.
There are ways parents can help prevent obesity and support healthy growth in children.
- To help ensure that children have a healthy weight, energy balance is important. To achieve this balance, parents can make sure children get adequate sleep, follow recommendations on daily screen time, take part in regular physical activity, and eat the right amount of calories.
- Parents can substitute higher nutrient, lower calorie foods such as fruit and vegetables in place of foods with higher-calorie ingredients, such as added sugars and solid fats.
- Parents can ensure access to water as a no-calorie alternative to sugar-sweetened beverages.
- Parents can serve children fruit and vegetables at meals and as snacks and model this behavior themselves.
Addressing obesity can start in the home, but also requires the support of communities.
- We can all take part in the effort to encourage more children to be physically active and eat a healthy diet.
- The federal government is currently helping low-income families get affordable, nutritious foods through programs, such as the Supplemental Nutrition Program for Women, Infants, and Children (WIC) and the Child and Adult Care Feeding Program (CACFP).
- State and local stakeholders including health departments, businesses, and community groups can help make it easier for families with children to find low-cost physical activity opportunities and buy healthy, affordable foods in their neighborhoods and community settings.
- Schools can help students be healthy by putting into action policies and practices that support healthy eating, regular physical activity, and by providing opportunities for students to learn about and practice these behaviors.
- With more than 60% of US children younger than age 6 participating in some form of child care on a weekly basis, parents can engage with child care providers to support healthy habits at home and in child care settings.
Working together, states, communities, schools, child care providers, and parents can help make healthier food, beverages, and physical activity the easy choice for children and adolescents to help prevent childhood obesity.
- CDC's Childhood Overweight and Obesity
- Vital Signs—Progress on Childhood Obesity
- CDC Nutrition for Everyone: Fruits and Vegetables
- CDC Physical Activity Guidelines for Americans
- BMI Calculator for Child and Teen
- Adolescent and School Health
- Water Access in Schools
- Let's Move! Child Care
- Let's Move! Salad Bars to Schools
- Page last reviewed: September 9, 2014
- Page last updated: September 9, 2014
- Content source:
- National Center for Chronic Disease Prevention and Health Promotion
- Page maintained by: Office of the Associate Director for Communication, Digital Media Branch, Division of Public Affairs
Children and Anthrax: A Fact Sheet for Clinicians Anthrax is an acute infectious disease caused by the bacterium Bacillus anthracis. Children, like adults, may be affected by three clinical forms: cutaneous, inhalational, or gastrointestinal. The symptoms and signs of anthrax infection in children older than 2 months of age are similar to those in adults. The clinical presentation of anthrax in young infants is not well defined. When children become ill and present for treatment, making a diagnosis may be more difficult than in adults because young children have difficulty reporting what has happened to them or telling a doctor exactly how they feel. Because respiratory illnesses are much more common in children than adults, the examining clinician should have an understanding of disease manifestations in children.
Explaining Anthrax to Children Because of extensive media coverage, parents across the country might soon find themselves faced with some difficult questions from their children about bioterrorism, anthrax, and other infections like smallpox that could be turned into biological weapons. Here is some information to help you deal with your child's concerns. As always, try to provide direct, honest answers to your child's questions that are appropriate for her level of understanding. Be careful not to overwhelm her with too much information all at once, and don't be afraid to admit it if you don't have all the answers.
- What is capacitance?
- Dielectric
- Permittivity
- Dielectric strength and maximum working voltage
- Calculating the charge on a capacitor
The amount of energy a capacitor can store depends on the value or CAPACITANCE of the capacitor. Capacitance (symbol C) is measured in the basic unit of the FARAD (symbol F). One Farad is the amount of capacitance that can store 1 Coulomb (6.24 x 10¹⁸ electrons) when it is charged to a voltage of 1 volt. The Farad is much too large a unit for use in electronics, however, so the following sub-units of capacitance are more useful.
|Sub unit||Abbreviation||Standard notation|
|micro Farads||µF||x 10⁻⁶|
|nano Farads||nF||x 10⁻⁹|
|pico Farads||pF||x 10⁻¹²|
Remember, however, that when working out problems involving capacitance using the formulae, the values used must be in the basic units of Farads, Volts etc. Therefore when entering a value of 0.47nF, for example, into a formula (or your calculator), it should be entered in Farads using the Engineering Notation version of Standard Form as: 0.47 x 10⁻⁹ (Download our Maths Tips booklet for more information).
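As a quick illustration of this conversion, here is a minimal Python sketch; the function and table names are our own (not from any particular library), and it simply applies the multipliers from the table above:

```python
# Multipliers for the common capacitance sub-units (see table above).
MULTIPLIERS = {
    "F": 1.0,       # Farads (basic unit)
    "uF": 1e-6,     # micro Farads
    "nF": 1e-9,     # nano Farads
    "pF": 1e-12,    # pico Farads
}

def to_farads(value, unit):
    """Convert a capacitor value to basic units (Farads) for use in formulae."""
    return value * MULTIPLIERS[unit]

# 0.47 nF must be entered into a formula as 0.47 x 10^-9 F:
print(to_farads(0.47, "nF"))   # about 4.7e-10 Farads
```

The same idea applies when reading a value off a component: convert to Farads first, do the calculation, then convert back to a convenient sub-unit for display.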
Capacitance depends on four things:
1. The area of the plates
2. The distance between the plates
3. The type of dielectric material
4. The temperature

Of these four, temperature has the least effect in most capacitors. The value of most capacitors is fairly stable over a "normal" range of temperatures.

Capacitor values may be fixed or variable. Most variable capacitors have a very small value (a few tens or hundreds of pF). The value is varied by either:
- Changing the area of the plates.
- Changing the thickness of the dielectric.
Capacitance (C) is DIRECTLY PROPORTIONAL TO THE AREA OF THE TWO PLATES that directly overlap, the greater the overlapping area, the greater the capacitance.
Capacitance is INVERSELY PROPORTIONAL TO THE DISTANCE BETWEEN THE PLATES. i.e. if the plates move apart, the capacitance reduces.
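These two proportionalities, together with the permittivity of the dielectric (discussed below), combine in the standard parallel-plate formula C = ε₀εᵣA/d. The short Python sketch below is illustrative only; the dimensions and function names are our own:

```python
EPSILON_0 = 8.854e-12  # permittivity of free space, in Farads per metre

def capacitance(area_m2, distance_m, relative_permittivity=1.0):
    """Parallel-plate capacitance: C = e0 * er * A / d (SI units)."""
    return EPSILON_0 * relative_permittivity * area_m2 / distance_m

# 1 cm^2 plates, 0.1 mm apart, with air (er = 1) and then mica (er = 6):
c_air = capacitance(1e-4, 1e-4)
c_mica = capacitance(1e-4, 1e-4, relative_permittivity=6.0)
print(round(c_mica / c_air, 6))   # 6.0 - six times the capacitance, as the text describes
```

Doubling the plate area doubles C, and doubling the plate separation halves it, matching the two proportionality statements above.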
The electrons on one plate of the capacitor affect the electrons on the other plate by causing the orbits of the electrons within the dielectric material (the insulating layer between the plates) to distort. The amount of distortion depends on the nature of the dielectric material and this is measured by the permittivity of the material.
Permittivity is quoted for any particular material as RELATIVE PERMITTIVITY, which is a measure of how efficient a dielectric material is. It is a number without units which indicates how much greater the permittivity of the material is than the permittivity of air (or a vacuum), which is given a permittivity of 1 (one). For example, if a dielectric material such as mica has a relative permittivity of 6, this means the capacitor will have a permittivity, and so a capacitance, six times that of one whose dimensions are the same, but whose dielectric is air.
Another important aspect of the dielectric is the DIELECTRIC STRENGTH. This indicates the ability of the dielectric to withstand the voltage placed across it when the capacitor is charged. Ideally the dielectric should be as thin as possible, giving the maximum capacitance for a given size of component. However, the thinner the dielectric layer, the more easily its insulating properties will break down. The dielectric strength therefore governs the maximum working voltage of a capacitor.
Maximum Working Voltage (VDCwkg max)
It is very important when using capacitors that the maximum working voltage indicated by the manufacturer is not exceeded; otherwise there is a great danger of a sudden insulation breakdown within the capacitor. As a maximum voltage probably existed across the capacitor at the moment of breakdown, large currents will flow, with a real risk of fire or explosion in some circuits.
Charge on a Capacitor.
The charge (Q) on a capacitor depends on a combination of the above factors, which can be given together as the Capacitance (C) and the voltage applied (V). For a component of a given capacitance, the relationship between voltage and charge is constant: increasing the applied voltage results in a proportionally increased charge. This relationship can be expressed in the formula:
Q = CV
C = Q/V
V = Q/C
Where V is the voltage applied, in Volts.
C is the capacitance in Farads.
Q is the quantity of charge in Coulombs.
So any of these quantities can be found provided the other two are known. The formula can easily be re-arranged using a simple triangle similar to the one used for Ohm's Law when carrying out resistor calculations.
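The three rearrangements can be checked numerically. This Python sketch uses our own function names and re-uses the 0.47 nF example from earlier in the text:

```python
def charge(c_farads, v_volts):
    return c_farads * v_volts        # Q = CV

def capacitance_from(q_coulombs, v_volts):
    return q_coulombs / v_volts      # C = Q/V

def voltage(q_coulombs, c_farads):
    return q_coulombs / c_farads     # V = Q/C

# A 0.47 nF capacitor charged to 12 V, working in basic units throughout:
q = charge(0.47e-9, 12.0)
print(round(q * 1e9, 4), "nC")            # 5.64 nanocoulombs
print(round(voltage(q, 0.47e-9), 6))      # 12.0 - recovers the applied voltage
```

Note that, as the text stresses, all values go into the formulae in basic units (Farads, Volts, Coulombs); only the printed result is scaled for readability.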
Origin of the Christmas Tree
A SCANDINAVIAN myth of great antiquity speaks of a “service tree”
…sprung from the blood-drenched soil where two lovers had been killed by violence. At certain nights in the Christmas season mysterious lights were seen flaming in its branches, that no wind could extinguish.
One tale describes Martin Luther as attempting to explain to his wife and children the beauty of a snow-covered forest under the glittering star sprinkled sky. Suddenly, an idea suggested itself. He went into the garden, cut off a little fir-tree, dragged it into the nursery, put some candles on its branches and lighted them.
“It has been explained,” says another authority,
“as being derived from the ancient Egyptian practice of decking houses at the time of the winter solstice with branches of the date palm –the symbol of life triumphant over death, and therefore of perennial life in the renewal of each bounteous year.” The Egyptians regarded the date palm as the emblem not only of immortality, but also of the starlit firmament.
Some of its traditions may have been strongly influenced by the fact that about this time the Jews celebrated their Feast of Hanukkah or Lights, known also as the Feast of Dedication, of which lighted candles are a feature. In Germany, the name for Christmas Eve is Weihnacht, the Night of Dedication, while in Greece at about this season the celebration is called the Feast of Lights.
As a regular institution, however, it can be traced back only to the sixteenth century. During the Middle Ages it suddenly appears in Strassburg; it maintained itself along the Rhine for two hundred years, when suddenly at the beginning of the nineteenth century the fashion spread all over Germany, and by fifty years later had conquered Christendom.
By W. S. Walsh, “Curiosities of Popular Customs” (condensed)
–by Martin Luther
GOOD news from heaven the angels bring,
Glad tidings to the earth they sing:
To us this day a child is given,
To crown us with the joy of heaven.
This is the Christ, our God and Lord,
Who in all need shall aid afford:
He will Himself our Savior be.
From sin and sorrow set us free.
The Book of Christmas
To us that blessedness He brings,
Which from the Father’s bounty springs:
That in the heavenly realm we may
With Him enjoy eternal day.
Were earth a thousand times as fair,
Beset with gold and jewels rare,
She yet were far too poor to be
A narrow cradle. Lord, for Thee.
Ah, dearest Jesus, Holy Child!
Make Thee a bed, soft, undefiled,
Within my heart, that it may be
A quiet chamber kept for Thee.
Praise God upon His heavenly throne,
Who gave to us His only Son:
For this His hosts, on joyful wing,
A blessed New Year of mercy sing.
To a casual stargazer, most stars look the same. Some may be brighter, and others dimmer, but for the most part, they are all variations on the same theme. All stars are so far away that we see them only as pinpoints of light, often twinkling because of turbulence in Earth’s atmosphere.
If you closely compare some stars, however, you may notice that they are different colors. Some hot young stars, like most of the stars in the constellation Orion, are bluish in color. Other old cool stars, like Betelgeuse in the upper-left corner of Orion, are noticeably orange or red. The sun is somewhere in between, with its yellow-white hue. This Thanksgiving, look to the sky to find some amazing red stars shining brightly.
Color differences in stars reflect the temperature of the surfaces of these cosmic objects. Most people think of red as being hot, but if you think of a piece of iron heating in a forge, it turns slowly from dark red to white to blue. So, while it might seem counterintuitive at first, red stars are actually relatively cooler, and blue stars are incredibly hot. [Best Night Sky Events of November 2014 (Images)]
Winter evenings are a great time to study the colors of stars. The constellation Orion is a wonderful example. This is an active star-forming region in our part of the Milky Way, so most of the stars in Orion are young: hot and blue.
The major exception is Betelgeuse, a very old star classified as a red supergiant. It is in a late stage of evolution, and most of its hydrogen gas has been converted to helium. The star has grown to be huge — 1,000 times the diameter of the sun — but it has only about 10 times the sun's mass. As a result, Betelgeuse has very low density and a very tenuous atmosphere. Large spots on its surface cause it to vary in brightness.
In fact, Betelgeuse is one of the most likely candidates to collapse in on itself and become a supernova in the "near future." But don't be alarmed: "Near future" in the astronomical time scale could be 10,000 years.
Farther north in the sky is a more typical red giant star, Aldebaran in Taurus. Aldebaran looks much like the sun will look in about 5 billion years, when it has converted most of its hydrogen to helium. Aldebaran is only slightly more massive than the sun, but it has swollen to 44 times its diameter. Aldebaran doesn't have enough mass to become a supernova. Instead, it will shed its outer layers to eject a planetary nebula, like the Ring Nebula in Lyra.
The third red star is a very different creature. R Leporis is also a red giant, but its internal fusion processes have produced a lot of carbon. This "sooty" atmosphere causes stars like this to glow deep red.
Like most red stars, R Leporis is variable in brightness, ranging from magnitude 5.5 to 11 over a period of about 427 days. Now is a good time to observe it because it is close to maximum brightness at magnitude 6.5, so it's easily visible in binoculars.
You can use a line from Mintaka, the right-most star in Orion's Belt, through Rigel, as a guide to find R Leporis, or use Arneb and Mu Leporis in the constellation Lepus. You will know you've found it by its deep red color.
This star was discovered in 1845 by J. R. Hind, who described it as looking "like a drop of blood on a black field."
Having found one "carbon star," you may want to track down others. The Astronomical League has a program that challenges observers to locate a list of 100 carbon stars.
Editor's Note: If you have an amazing image of red stars in the night sky or any other skywatching photo you'd like to share for a possible story or image gallery, please contact managing editor Tariq Malik at [email protected].
This article was provided to Space.com by Simulation Curriculum, the leader in space science curriculum solutions and the makers of Starry Night and SkySafari.
Nationalities in the USSR
Three 90 minute block periods and DBQ as an independent assignment.
- Sufficient copies of the introductory essay for all class members
- Primary Source Worksheet for each student
- Index cards
- Sufficient copies of all of the primary sources for all of the class
- Debate Guidelines for each student
By the end of the lesson, students will be able to:
- understand the concepts of nationalities and nationalism within an overarching concept of empire.
- understand the characteristics of the Soviet Union as an empire.
- understand the concept of totalitarianism as employed by the government of the Soviet Union which allowed for control of nationalist divisions in multiethnic Soviet republics until unleashed by Gorbachev’s introduction of glasnost and perestroika.
- demonstrate understanding of the historic reasons for separatism related to the subjugation of nationalities within their own borders by the Soviet Union.
- demonstrate understanding that nationalities lived as second-class citizens under Russian domination within the Soviet Union despite the fact that they comprised a majority within their individual republics.
- demonstrate understanding of the role of religion in the nationalities move for separation from the Soviet Union.
- demonstrate understanding that the nationalities sometimes worked against their own economic self-interest in pursuing dissolution of the Soviet Union.
- demonstrate understanding that the Soviet Union responded differently to the nationalist uprisings of 1989 and 1991 than it had to the 1968 uprising in Czechoslovakia and other earlier uprisings.
- analyze primary source documents to understand the role of nationalities in pressuring the Soviet Union to set individual republics free.
- analyze primary source documents to see the Communist party responses to nationalist demands for separatism.
- utilize research and debate techniques to come to a fuller understanding of the role of nationalities in breaking with the Soviet Union.
1.(Day 1) Opening Activity: Discuss the following questions:
- In a multi-ethnic, multi-religious region, is an empire with vast, centralized power the best form of government possible? Why or why not?
- What allowed for long-term success and then for ultimate failure of some of the historical multi-ethnic, multi-religious empires such as the Hapsburg’s Austria-Hungary?
- In what ways did the Soviet Union exhibit the same strengths and weaknesses as earlier historical empires?
- Is the world moving in the direction of globalization and Supranations (EU, NAFTA, UN) or is it breaking up into small nations through “tribalism” (dissolution of Yugoslavia, USSR, African nations)? Which direction is preferable for the future of the world and why? Cite examples.
2. Small Group Activity: This lesson will provide research-based debate to illuminate the role of nationalities in the breakup of the Soviet Union. After the introduction, each group will spend the remainder of the first 90 minute block reading the introduction, Nationalities in the USSR, found in this module. All students will work in groups of two to compare two primary sources: Post-Soviet population table (2006) and Commonwealth of Independent States map (1994). They will answer the following questions based on information in the introductory essay and from the maps:
- Which republics broke away from the Soviet Union and why?
- Which major ethnic groups stayed within the new Russia and why?
- Following the dissolution of the Soviet empire in 1991, which republics were the most diverse? Which were the least diverse? Which have the best chances of economic success and why?
Next, students will be assigned to one of the following six groups and given index cards:
Analysis of primary documents: Students will choose one of the following themes so that all themes are covered within their group. They will write the theme at the top of their index cards. On the cards, they will record the following relevant information for one theme:
- ETHNICITY: Ethnic makeup within their assigned republic. Include percentages of top groups.
- RELIGION: Major religious groups within the assigned republic and the extent to which religion was practiced before and during the Soviet period.
- HISTORY: Date and general circumstances of the region’s incorporation into the Russian or Soviet empire.
- ECONOMY: Essential economic facts such as primary natural resource or industrial product.
3. (Day 2) Report: At the beginning of the second 90 minute block, students will report to the class by region and theme. Individual students will briefly (no more than three minutes each) report out their findings to the class. Students will record findings on the group report worksheet.
4. Jigsaw: Regardless of previous group placement, students will then jigsaw out into groups based on their assignment to a “Soviet Union” group or a “Nationalities” group.
Each student in the Soviet group will be given Primary Sources: Soviet population table by nationality (1970), Ukrainian Central Committee on Ethnic Issues, Gorbachev’s TV Address on Interethnic Relations, Turkmen Party’s Niazov Discusses Ethnic Issues, Uzbek Minister on Restoring Order in Tashkent, and the Commonwealth of Independent States map (1994).
The Nationalities group will be given Primary Sources: Soviet population table by nationality (1970), Communist Party’s Role in Estonia’s Fate Revealed, Latvian Group Wants Full Political Independence, Lithuanian Communist Party Declares Independence, and the Commonwealth of Independent States map (1994).
5. (Day 3) Analysis: Within the groups assigned previously, students will be given 45 minutes to analyze each of the documents. They may divide the documents between members of the group. They will be looking for arguments within the documents pertaining to why the Soviet Union may have been the best form of government for a multi-ethnic and religious region or whether regions should be given freedom to find their own best solutions. Each group will complete a primary source analysis sheet for each document they have been given.
6. Debate: Teacher-conducted debate (45 minutes)
Students will be given a debate guidelines sheet and will be given three minutes to make their point, two minutes for counter-point. They may incorporate the key points in the primary sources AND the research information.
Resolved: In 1991, the future of the Soviet republics and their multiple nationalities and ethnicities would have been better served by remaining under the reformed empire of the USSR.
Document Based Questions:
To what extent did divergent nationalities play a role in the break-up of the Soviet Union? Use three different republics to illustrate your answer.
To what extent did divergent nationalities play a role in the break-up of the Soviet Union? Use the examples of Latvia, Lithuania and Estonia to analyze the extent to which nationalist and anti-Soviet sentiment supported division and undermined the republic’s economic and political self-interest.
Advanced students may be given all of the documents and select the ones which support their argument for pro-soviet or pro-nationality/free republics. Their research should reflect greater depth of information and analysis.
Enrichment activities include a look at the Eastern European breaks with the Soviet Union and comparisons to events in 1991, when the Soviet republics broke away from the Soviet Union. Students can research and evaluate events in Chechnya and explain the reasons that separation was not allowed for Chechnya whereas it had occurred for these republics in 1991.
Less advanced students will be assigned an essay which explains why many of the republic states joined eastern European nations in breaking away from the Soviet Union. They may also be asked to formulate their ideas of a “perfect government” for a multi-national, multi-ethnic nation. Finally, they may summarize their ideas following the lesson that best answer the questions in the opening activity.
The frigid seabottom off Antarctica holds a surprising riot of life: colorful carpets of sponges, starfish, sea cucumbers and many other soft, bottom-dwelling animals, shown on images from robotic submarines. Now, it appears that many such communities could fast disappear due to the warming climate. Scientists sailing on an icebreaker last year have just published a study showing that gigantic king crabs have spread into warming Antarctic bottom waters, where they are devouring everything in sight and altering the structure of the seafloor itself. The study, in the current issue of Proceedings of the Royal Society B, was led by biologist Craig Smith of the University of Hawaii and coauthored by oceanographer Bruce Huber of Columbia University’s Lamont-Doherty Earth Observatory, who served as co-chief scientist on the cruise.
Like related northern king crabs popular in seafood restaurants, the southern species Neolithodes yaldwyni can measure three feet across and has powerful claws for crushing prey. It was previously known to live in Antarctica’s Ross Sea, south of New Zealand. But during the recent cruise, the crabs were found in the Palmer Deep, a distant basin in Antarctica’s shallow continental shelf, south of South America. The researchers think the crabs arrived there only in the last 30 or 40 years, after being held back for millions of years by the cold, shallow water of the Antarctic continental shelf. (Due to the way ocean waters circulate, warmer masses come in from the north and sink, while shallower waters around most of Antarctica remain mostly colder.) To live, the crabs need water warmer than about 34.5 degrees F; and oceanographic measurements show that the Palmer Deep has been heating by about 0.018 degrees a year for at least the last 30 years--enough, apparently, to give larvae floating in over the shelf a foothold.
The team estimates that some 1.5 million crabs are now living in just one small area of the Palmer Deep below 3,100 feet. Here, they have virtually wiped out invertebrates including sea lilies, urchins, brittle stars, sea cucumbers and other creatures. Their spindly claws also overturn previously undisturbed sediments, which may fundamentally alter the ecosystem by changing microbial communities and the availability of organic matter. Shallower waters along the nearby Antarctic Peninsula still hold abundant invertebrates–but the peninsula and its coastal waters comprise one of the fastest-warming places in the world. If heating continues at its current rate, the scientists expect the crabs to reach these areas within just 10 or 20 years. “It looks like a pretty negative consequence of climate warming in the Antarctic,” Smith told the online news service LiveScience.
This is only one of many reports to suggest that climate is already driving shifts in ecosystems. Other studies in recent years have indicated that migratory birds in Europe are traveling north earlier; plant and tree species in North America are slowly marching poleward; and alpine flowers and high-elevation animals such as pikas are being driven to ever-higher altitudes. The Antarctic crab invasion “is likely to serve as an important model for the potential invasive impacts of crushing predators,” writes Smith.
Nazi social policies were strongly influenced by the eugenics movement. Eugenics was a social theory popular with many scientists, philosophers, academics and writers in the early 20th century. Their fundamental belief was that human populations could be improved through manipulation of their genetic make-up. In other words, a society could achieve positive outcomes – like increased productivity or reductions in crime – if it removed unhealthy or ‘undesirable’ genetic elements. Many governments had experimented with eugenics-driven policies long before the Nazis came to power. More than 64,000 Americans with mental illnesses were forcibly sterilised between the 1890s and 1924. Other countries – such as Japan, Canada, Australia, Sweden, France and Switzerland – also dabbled in eugenics-based policies in the 1920s and 1930s.
Hitler, other Nazis and some German academics were also avid believers in eugenic pseudo-science. They thought of German society as a sick organism, its bloodstream contaminated by degenerate and undesirable elements. Those ‘contaminating’ Germany were the racially impure, the physically disabled, the mentally infirm, the criminally minded and the sexually aberrant. The Nazis believed the state should intervene to improve the health of its society – first to identify its contaminating elements, then to restrict their growth, then to eliminate them. This required difficult and unpalatable policies – but the Nazis justified them with eugenics theories and references to social Darwinism (the ‘survival of the fittest’).
The first Nazi eugenics policy, the Law for the Prevention of Hereditarily-Diseased Offspring, was passed in July 1933, six months after Hitler became chancellor. It required German doctors to register all genetically-related illnesses, in all patients other than women over 45. Examples of reportable cases were mental retardation, schizophrenia, manic-depression, blindness and deafness, or other severe physical deformities. Even chronic alcoholism could be considered a genetic disorder, at the doctor’s discretion. The law also set up ‘hereditary health courts’, composed of two physicians and a lawyer. These courts examined individual cases and ruled whether patients should be “rendered incapable of procreation” (surgically sterilised).
When the law came into effect on the first day of 1934, the ‘hereditary health courts’ were swamped with cases. In their first three years the ‘health courts’ ruled on almost 225,000 patients, ordering compulsory sterilisation for around 90 per cent. Sterilisation orders were handed down so rapidly that state hospitals did not have the operating theatres or staff to keep up. The vast majority of sterilised patients were suffering from mental illness or deformity. Of the patients sterilised in 1934, 53 per cent were intellectually disabled or ‘feeble-minded’, 25 per cent schizophrenics and 14 per cent epileptics. In total, the Nazi ‘health courts’ approved the forced sterilisation of more than 300,000 people between 1934 and 1945.
In October 1935, a month after the passing of the Nuremberg Laws, the Nazis introduced the Law for the Protection of the Genetic Health of the German People. This reform was chiefly concerned with preventing marriages which might produce ‘genetically unhealthy’ children. Couples wishing to marry had to first obtain a certificate from the public health office, declaring the proposed marriage would not produce genetically impure offspring. Germans with genetic disorders or disabilities were only given permission to marry if they volunteered for sterilisation. The law also allowed the Nazi bureaucracy to collect a considerable amount of information about the racial and genetic make-up of its citizens. Its long-term plan was to compile a racial and genetic blueprint of the entire nation (a project never completed because of World War II).
‘Life unworthy of living’
The final, most drastic phase of the Nazi eugenics program was euthanasia. Killing the unhealthy to protect public health had been proposed as early as 1920, by two German writers: psychiatrist Alfred Hoche and philosopher Karl Binding. The mentally disabled, Hoche and Binding argued, possessed only lebensunwertes Leben (‘life unworthy of living’); legalised euthanasia would end the “burden for society and their families”. While many Nazis supported introducing euthanasia, Hitler was wary, because he knew approving the medical killing of the disabled would generate considerable public opposition. In 1936 Hitler told his inner circle that euthanasia was a policy that would have to wait until wartime, when it could be introduced with less fuss.
By 1939 Hitler was confident enough to authorise a trial euthanasia program. This may have been triggered by an emotional letter, written to the fuhrer by a Herr Knauer the previous year. Knauer’s baby son had been born blind, intellectually disabled, missing one arm and one leg. Knauer begged Hitler to allow doctors to carry out a mercy killing of his deformed son; after a few weeks’ thought the Nazi leader approved this request. In mid-1939 Hitler ordered a hand-picked group of doctors to prepare a euthanasia schedule for similarly deformed children.
The secret genocide
On September 1st 1939 – the day German tanks rumbled into Poland – Hitler signed an informal memo allowing specially-appointed doctors to deal with “incurable” patients by “granting [a] mercy death after a discerning diagnosis”. This memo unleashed Aktion T4: a program to clear hospitals and free up resources by euthanising the mentally disabled. Aktion T4 was preceded by a vigorous propaganda campaign, intended to prepare the public and lessen sympathy for its victims. Posters depicted cripples and lunatics as drains on the state; they took up valuable resources needed for front-line soldiers and hungry children. Each disabled person, these posters claimed, cost the state 60,000 Reichsmarks, a burden carried by the German taxpayer.
Aktion T4 began with the killing of disabled children, who were dispatched by starvation or cocktails of lethal drugs. The euthanising of adult patients began in hospitals in occupied Poland, then spread into Germany proper. In places where Catholic doctors and nurses refused to carry out the killings, special T4 squads were sent in to take over. The Nazis initially attempted to keep Aktion T4 a secret, listing phoney causes of death on official paperwork – but most Germans were aware of what was occurring. Aktion T4 continued until August 1941, when Hitler suspended it, largely because of a chorus of public complaints. The program had taken the lives of between 80,000 and 100,000 patients. The killing of the infirm continued to be carried out in German hospitals on an ad hoc basis.
1. Eugenics is a movement that believes societies can be strengthened by genetic management and refinement.
2. The Nazis were strong adherents of eugenics, though they neither invented it nor were the first to implement it.
3. In July 1933 they authorised a program of compulsory sterilisation for those with ‘hereditary illnesses’.
4. There were also close restrictions on marriage, with government certification for ‘genetic viability’.
5. The Nazi euthanasia program, Aktion T4, ran for two years and saw as many as 100,000 patients murdered.
This page was written by Jennifer Llewellyn, Jim Southey and Steve Thompson. To reference this page, use the following citation:
J. Llewellyn et al, “Nazi eugenics”, Alpha History, accessed [today’s date], http://alphahistory.com/nazigermany/nazi-eugenics/.
Gaius Julius Caesar (13 July 100 BC – 15 March 44 BC) was a Roman general and statesman. He played a critical role in the transformation of the Roman Republic into the Roman Empire.
In 55 BCE, Julius Caesar was accompanied by Orlando (then known as Vito), who had enrolled in the Roman army, during his attempted invasion of Britain. The invasion was met with strong resistance from the island’s barbaric inhabitants, who successfully repelled Caesar’s incursion.
Caesar subsequently returned to Rome and became dictator. In response, a group of senators, led by Marcus Junius Brutus, assassinated the dictator on the Ides of March (15 March) 44 BC, hoping to restore the constitutional government of the Republic. Orlando is said to have been present at Caesar's assassination.
However, the assassination resulted in a series of civil wars, which ultimately led to the establishment of the permanent Roman Empire by Caesar's adopted heir Octavius (later known as Augustus).
William Whewell (1794–1866) was one of the most important and influential figures in nineteenth-century Britain. Whewell, a polymath, wrote extensively on numerous subjects, including mechanics, mineralogy, geology, astronomy, political economy, theology, educational reform, international law, and architecture, as well as the works that remain the most well-known today in philosophy of science, history of science, and moral philosophy. He was one of the founding members and a president of the British Association for the Advancement of Science, a fellow of the Royal Society, president of the Geological Society, and longtime Master of Trinity College, Cambridge. In his own time his influence was acknowledged by the major scientists of the day, such as John Herschel, Charles Darwin, Charles Lyell and Michael Faraday, who frequently turned to Whewell for philosophical and scientific advice, and, interestingly, for terminological assistance. Whewell invented the terms “anode,” “cathode,” and “ion” for Faraday. In response to a challenge by the poet S.T. Coleridge in 1833, Whewell invented the English word “scientist;” before this time the only terms in use were “natural philosopher” and “man of science”. Whewell was greatly influenced by his association with three of his fellow students at Cambridge: Charles Babbage, John Herschel, and Richard Jones. Over the winter of 1812 and spring of 1813, the four met for what they called "Philosophical Breakfasts" at which they discussed induction and scientific method, among other topics (see Snyder, 2011).
Whewell is most known today for his massive works on the history and philosophy of science. His philosophy of science was attacked by John Stuart Mill in his System of Logic, causing an interesting and fruitful debate between them over the nature of inductive reasoning in science, moral philosophy, and political economy (for a detailed discussion of this debate, see Snyder 2006). It is in the context of the debate over philosophy of science that Whewell's philosophy was rediscovered in the 20th century by critics of logical positivism. In this entry, we shall focus on the most important philosophical aspects of Whewell's works: his philosophy of science, including his views of induction, confirmation, and necessary truth; his view of the relation between scientific practice, history of science, and philosophy of science; and his moral philosophy. We shall spend the most time on his view of induction, as this is the most interesting and important part of his philosophy as well as the most misinterpreted.
- 1. Biography
- 2. Philosophy of Science: Induction
- 3. Philosophy of Science: Confirmation
- 4. Philosophy of Science: Necessary Truth
- 5. The Relation Between Scientific Practice, History of Science, and Philosophy of Science
- 6. Moral Philosophy
- Academic Tools
- Other Internet Resources
- Related Entries
Whewell was born in 1794, the eldest child of a master-carpenter in Lancaster. The headmaster of his local grammar school, a parish priest, recognized Whewell's intellectual abilities and persuaded his father to allow him to attend the Heversham Grammar School in Westmorland, some twelve miles to the north, where he would be able to qualify for a closed exhibition to Trinity College, Cambridge. In the 19th century and earlier, these “closed exhibitions” or scholarships were set aside for the children of working class parents, to allow for some social mobility. Whewell studied at Heversham Grammar for two years, and received private coaching in mathematics. Although he did win the exhibition it did not provide full resources for a boy of his family's means to attend Cambridge; so money had to be raised in a public subscription to supplement the scholarship money.
He thus came up to Trinity in 1812 as a “sub-sizar” (scholarship student). In 1814 he won the Chancellor's prize for his epic poem “Boadicea,” in this way following in the footsteps of his mother, who had published poems in the local papers. Yet he did not neglect the mathematical side of his training; in 1816 he proved his mathematical prowess by placing as both second Wrangler and second Smith's Prize man. The following year he won a college fellowship. He was elected to the Royal Society in 1820, and ordained a priest (as required for Trinity Fellows) in 1825. He took up the Chair in Mineralogy in 1828, and resigned it in 1832. In 1838 Whewell became Professor of Moral Philosophy. Almost immediately after his marriage to Cordelia Marshall on 12 October 1841, he was named Master of Trinity College upon the recommendation of the Prime Minister Robert Peel. He was Vice-Chancellor of the University in 1842 and again in 1855. In 1848 he played a large role in establishing the Natural and Moral Sciences Triposes at the University. His first wife died in 1855, and he later remarried; his second wife, Lady Affleck, the sister of his friend Robert Ellis, died in 1865. Whewell left no descendants when he died, after being thrown from his horse, on 6 March 1866. (More details about Whewell's life and times can be found in Snyder 2011.)
According to Whewell, all knowledge has both an ideal, or subjective, dimension and an objective dimension. He called this the “fundamental antithesis” of knowledge. Whewell explained that “in every act of knowledge … there are two opposite elements, which we may call Ideas and Perceptions” (1860a, 307). He criticized Kant and the German Idealists for their exclusive focus on the ideal or subjective element, and Locke and the “Sensationalist School” for their exclusive focus on the empirical, objective element. Like Francis Bacon, Whewell claimed to be seeking a “middle way” between pure rationalism and ultra-empiricism. Whewell believed that gaining knowledge requires attention to both ideal and empirical elements, to ideas as well as sensations. These ideas, which he called “Fundamental Ideas,” are “supplied by the mind itself”—they are not (as Mill and Herschel protested) merely received from our observations of the world. Whewell explained that the Fundamental Ideas are “not a consequence of experience, but a result of the particular constitution and activity of the mind, which is independent of all experience in its origin, though constantly combined with experience in its exercise” (1858a, I, 91). Consequently, the mind is an active participant in our attempts to gain knowledge of the world, not merely a passive recipient of sense data. Ideas such as Space, Time, Cause, and Resemblance provide a structure or form for the multitude of sensations we experience. The Ideas provide a structure by expressing the general relations that exist between our sensations (1847, I, 25). Thus, the Idea of Space allows us to apprehend objects as having form, magnitude, and position. Whewell held, then, that observation is “idea-laden;” all observation, he noted, involves “unconscious inference” using the Fundamental Ideas (see 1858a, I, 46).
Each science has a Particular Fundamental idea which is needed to organize the facts with which that science is concerned; thus, Space is the Fundamental Idea of geometry, Cause the Fundamental Idea of mechanics, and Substance the Fundamental Idea of chemistry. Moreover, Whewell explained that each Fundamental Idea has certain “conceptions” included within it; these conceptions are “special modifications” of the Idea applied to particular types of circumstances (1858b, 187). For example, the conception of force is a modification of the Idea of Cause, applied to the particular case of motion (see 1858a, I, 184–5 and 236).
Thus far, this discussion of the Fundamental Ideas may suggest that they are similar to Kant's forms of intuition, and indeed there are some similarities. Because of this, some commentators argue that Whewell's epistemology is a type of Kantianism (see, e.g., Butts 1973, and Buchdahl 1991). However, this interpretation ignores several crucial differences between the two views. Whewell did not follow Kant in drawing a distinction between the forms of intuition, such as Space and Time, and the categories, or forms of thought, among which Kant included the concepts of Cause and Substance. Moreover, Whewell included as Fundamental Ideas many ideas which function not as conditions of experience but as conditions for having knowledge within their respective sciences: although it is certainly possible to have experience of the world without having a distinct idea of, say, Chemical Affinity, we could not have any knowledge of certain chemical processes without it. Unlike Kant, Whewell did not attempt to give an exhaustive list of these Fundamental Ideas; indeed, he believed that there are others which will emerge in the course of the development of science. Moreover, and perhaps most importantly for his philosophy of science, Whewell rejected Kant's claim that we can only have knowledge of our “categorized experience.” The Fundamental Ideas, on Whewell's view, accurately represent objective features of the world, independent of the processes of the mind, and we can use these Ideas in order to have knowledge of these objective features. Indeed, Whewell criticized Kant for viewing external reality as a “dim and unknown region” (see 1860a, 312). Further, Whewell's justification for the presence of these concepts in our minds takes a very different form than Kant's transcendental argument. For Kant, the categories are justified because they make experience possible.
For Whewell, though the categories do make experience (of certain kinds) possible, the Ideas are justified by their origin in the mind of a divine creator (see especially his discussion of this in his 1860a). And finally, the type of necessity which Whewell claimed is derived from the Ideas is very different from Kant's notion of the synthetic a priori (We return to these last two points in the section on Necessary Truth below).
We turn now to a discussion of the theory of induction Whewell developed with his antithetical epistemology. From his earliest thoughts about scientific method, Whewell was interested in developing an inductive theory. At their philosophical breakfasts at Cambridge, Whewell, Babbage, Herschel and Jones discussed how science had stagnated since the heady days of the Scientific Revolution in the 17th century. It was time for a new revolution, which they pledged to bring about. The cornerstone of this new revolution was to be the promotion of a Baconian-type of induction, and all four men began their careers endorsing an inductive scientific method against the deductive method being advanced by David Ricardo and his followers (see Snyder 2011). (Although the four agreed about the importance of an inductive scientific method, Whewell's version was one that Herschel and Jones would later take issue with, primarily because of his antithetical epistemology.)
Whewell's first explicit, lengthy discussion of induction is found in his Philosophy of the Inductive Sciences, founded upon their History, which was originally published in 1840 (a second, enlarged edition appeared in 1847, and the third edition appeared as three separate works published between 1858 and 1860). He called his induction “Discoverers' Induction” and explained that it is used to discover both phenomenal and causal laws. Whewell considered himself to be a follower of Bacon, and claimed to be “renovating” Bacon's inductive method; thus one volume of the third edition of the Philosophy is entitled Novum Organon Renovatum. Whewell followed Bacon in rejecting the standard, overly-narrow notion of induction that holds induction to be merely simple enumeration of instances. Rather, Whewell explained that, in induction, “there is a New Element added to the combination [of instances] by the very act of thought by which they were combined” (1847, II, 48). This “act of thought” is a process Whewell called “colligation.” Colligation, according to Whewell, is the mental operation of bringing together a number of empirical facts by “superinducing” upon them a conception which unites the facts and renders them capable of being expressed by a general law. The conception thus provides the “true bond of Unity by which the phenomena are held together” (1847, II, 46), by providing a property shared by the known members of a class (in the case of causal laws, the colligating property is that of sharing the same cause).
Thus the known points of the Martian orbit were colligated by Kepler using the conception of an elliptical curve. Often new discoveries are made, Whewell pointed out, not when new facts are discovered but when the appropriate conception is applied to the facts. In the case of Kepler's discovery, the observed points of the orbit were known to Tycho Brahe, but only when Kepler applied the ellipse conception was the true path of the orbit discovered. Kepler was the first one to apply this conception to an orbital path in part because he had, in his mind, a very clear notion of the conception of an ellipse. This is important because the fundamental ideas and conceptions are provided by our minds, but they cannot be used in their innate form. Whewell explained that “the Ideas, the germs of them at least, were in the human mind before [experience]; but by the progress of scientific thought they are unfolded into clearness and distinctness” (1860a, 373). Whewell referred to this “unfolding” of ideas and conceptions as the “explication of conceptions.” Explication is a necessary precondition to discovery, and it consists in a partly empirical, partly rational process. Scientists first try to clarify and make explicit a conception in their minds, then attempt to apply it to the facts they have precisely examined, to determine whether the conception can colligate the facts into a law. If not, the scientist uses this experience to attempt a further refinement of the conception. Whewell claimed that a large part of the history of science is the “history of scientific ideas,” that is, the history of their explication and subsequent use as colligating concepts. Thus, in the case of Kepler's use of the ellipse conception, Whewell noted that “to supply this conception, required a special preparation, and a special activity in the mind of the discoverer. … To discover such a connection, the mind must be conversant with certain relations of space, and with certain kinds of figures” (1849, 28–9).
Once conceptions have been explicated, it is possible to choose the appropriate conception with which to colligate phenomena. But how is the appropriate conception chosen? According to Whewell, it is not a matter of guesswork. Nor, importantly, is it merely a matter of observation. Whewell explained that “there is a special process in the mind, in addition to the mere observation of facts, which is necessary” (1849, 40). This “special process in the mind” is a process of inference. “We infer more than we see,” Whewell claimed (1858a, I, 46). Typically, finding the appropriate conception with which to colligate a class of phenomena requires a series of inferences, thus Whewell noted that discoverers' induction is a process involving a “train of researches” (1857/1873, I, 297). He allows any type of inference in the colligation, including enumerative, eliminative and analogical. Thus Kepler in his Astronomia Nova (1609) can be seen as using various forms of inference to reach the ellipse conception (see Snyder 1997a). When Augustus DeMorgan complained, in his 1847 logic text, about certain writers using the term “induction” as including “the use of the whole box of [logical] tools,” he was undoubtedly referring to his teacher and friend Whewell (see Snyder 2008).
After the known members of a class are colligated with the use of a conception, the second step of Whewell's discoverers' induction occurs: namely, the generalization of the shared property over the complete class, including its unknown members. Often, as Whewell admitted, this is a trivially simple procedure. Once Kepler supplied the conception of an ellipse to the observed members of the class of Mars' positions, he generalized it to all members of the class, including those which were unknown (unobserved), to reach the conclusion that “all the points of Mars' orbit lie on an ellipse with the sun at one focus.” He then performed a further generalization to reach his first law of planetary motion: “the orbits of all the planets lie on ellipses with the sun at one focus.”
We mentioned earlier that Whewell thought of himself as renovating Bacon's inductive philosophy. His inductivism does share numerous features with Bacon's method of interpreting nature: for instance the claims that induction must involve more than merely simple enumeration of instances, that science must proceed by successive steps of generalization, and that inductive science can reach unobservables (for Bacon, the “forms,” for Whewell, unobservable entities such as light waves or properties such as elliptical orbits or gravitational forces). (For more on the relation between Whewell and Bacon see Snyder 2006). Yet, surprisingly, the received view of Whewell's methodology in the 20th century has tended to describe Whewell as an anti-inductivist in the Popperian mold (see, for example, Butts 1987, Buchdahl 1991, Laudan 1980, Niiniluoto 1977, and Ruse 1975). That is, it is claimed that Whewell endorses a “conjectures and refutations” view of scientific discovery. However, it is clear from the above discussion that his view of discoverers' induction does not resemble the view asserting that hypotheses can be and are typically arrived at by mere guesswork. Moreover, Whewell explicitly rejects the hypothetico-deductive claim that hypotheses discovered by non-rational guesswork can be confirmed by consequentialist testing. For example, in his review of his friend Herschel's Preliminary Discourse on the Study of Natural Philosophy, Whewell argued, against Herschel, that verification is not possible when a hypothesis has been formed non-inductively (1831, 400–1). Nearly thirty years later, in the last edition of the Philosophy, Whewell referred to the belief that “the discovery of laws and causes of phenomena is a loose hap-hazard sort of guessing,” and claimed that this type of view “appears to me to be a misapprehension of the whole nature of science” (1860a, 274).
In other mature works he noted that discoveries are made “not by any capricious conjecture of arbitrary selection” (1858a, I, 29) and explained that new hypotheses are properly “collected from the facts” (1849, 17). In fact, Whewell was criticized by David Brewster for not agreeing that discoveries, including Newton's discovery of the universal gravitation law, were typically made by accident.
Why has Whewell been misinterpreted by so many modern commentators? One reason has to do with the error of reading certain terms used by Whewell in the 19th century as if they held the same meaning they have in the 20th and 21st. Thus, since Whewell used the terms “conjectures” and “guesses,” we are told that he shares Popper's methodology. Whewell made mention, for instance, of the “happy guesses” made by scientists (1858b, 64) and claimed that “advances in knowledge” often follow “the previous exercise of some boldness and license in guessing” (1847, II, 55). But Whewell often used these terms in a way which connotes a conclusion which is simply not conclusively confirmed. The Oxford English Dictionary tells us that prior to the 20th century the term “conjecture” was used to connote not a hypothesis reached by non-rational means, but rather one which is “unverified,” or which is “a conclusion as to what is likely or probable” (as opposed to the results of demonstration). The term was used this way by Bacon, Kepler, Newton, and Dugald Stewart, writers whose work was well-known to Whewell. In other places where Whewell used the term “conjecture” he suggests that what appears to be the result of guesswork is actually what we might call an “educated guess,” i.e., a conclusion drawn by (weak) inference. Whewell described Kepler's discovery, which seems so “capricious and fanciful” as actually being “regulated” by his “clear scientific ideas” (1857/1873, I, 291–2). Finally Whewell's use of the terminology of guessing sometimes occurs in the context of a distinction he draws between the generation of a number of possible conceptions, and the selection of one to superinduce upon the facts. Before the appropriate conception is found, the scientist must be able to call up in his mind a number of possible ones (see 1858b, 79). 
Whewell noted that this calling up of many possibilities “is, in some measure, a process of conjecture.” However, selecting the appropriate conception with which to colligate the data is not conjectural (1858b, 78). Thus Whewell claimed that the selection of the conception is often “preluded by guesses” (1858b, xix); he does not, that is, claim that the selection consists in guesswork. When inference is not used to select the appropriate conception, the resulting theory is not an “induction,” but rather a “hasty and imperfect hypothesis.” He drew such a distinction between Copernicus' heliocentric theory, which he called an induction, and the heliocentric system proposed by Aristarchus in the third century b.c., to which he referred as a hasty and imperfect hypothesis (1857/1873, I, 258).
Thus Whewell's philosophy of science cannot be described as the hypothetico-deductive view. It is an inductive method; yet it clearly differs from the more narrow inductivism of Mill. Whewell's view of induction has the advantage over Mill's of allowing the inference to unobservable properties and entities. (For more detailed arguments against reading Whewell as a hypothetico-deductivist, see Snyder 2006 and 2008).
On Whewell's view, once a theory is invented by discoverers' induction, it must pass a variety of tests before it can be considered confirmed as an empirical truth. These tests are prediction, consilience, and coherence (see 1858b, 83–96). These are characterized by Whewell as, first, that “our hypotheses ought to fortel [sic] phenomena which have not yet been observed” (1858b, 86); second, that they should “explain and determine cases of a kind different from those which were contemplated in the formation” of those hypotheses (1858b, 88); and third that hypotheses must “become more coherent” over time (1858b, 91).
We start by discussing the criterion of prediction. Our hypotheses ought to foretell phenomena, “at least all phenomena of the same kind,” Whewell explained, because “our assent to the hypothesis implies that it is held to be true of all particular instances. That these cases belong to past or to future times, that they have or have not already occurred, makes no difference in the applicability of the rule to them. Because the rule prevails, it includes all cases” (1858b, 86). Whewell's point here is simply that since our hypotheses are in universal form, a true hypothesis will cover all particular instances of the rule, including past, present, and future cases. But he also makes the stronger claim that successful predictions of unknown facts provide greater confirmatory value than explanations of already-known facts. Thus he held that “new evidence” is more valuable than “old evidence.” He believed that “to predict unknown facts found afterwards to be true is … a confirmation of a theory which in impressiveness and value goes beyond any explanation of known facts” (1857/1873, II, 557). Whewell claimed that the agreement of the prediction with what occurs (i.e., the fact that the prediction turns out to be correct), is “nothing strange, if the theory be true, but quite unaccountable, if it be not” (1860a, 273–4). For example, if Newtonian theory were not true, he argued, the fact that from the theory we could correctly predict the existence, location and mass of a new planet, Neptune (as did happen in 1846), would be bewildering, and indeed miraculous.
An even more valuable confirmation criterion, according to Whewell, is that of “consilience.” Whewell explained that “the evidence in favour of our induction is of a much higher and more forcible character when it enables us to explain and determine [i.e., predict] cases of a kind different from those which were contemplated in the formation of our hypothesis. The instances in which this has occurred, indeed, impress us with a conviction that the truth of our hypothesis is certain” (1858b, 87–8). Whewell called this type of evidence a “jumping together” or “consilience” of inductions. An induction, which results from the colligation of one class of facts, is found also to colligate successfully facts belonging to another class. Whewell's notion of consilience is thus related to his view of natural classes of objects or events.
To understand this confirmation criterion, it may be helpful to schematize the “jumping together” that occurred in the case of Newton's law of universal gravitation, Whewell's exemplary case of consilience. On Whewell's view, Newton used the form of inference Whewell characterized as “discoverers' induction” in order to reach his universal gravitation law, the inverse-square law of attraction. Part of this process is portrayed in book III of the Principia, where Newton listed a number of “propositions.” These propositions are empirical laws that are inferred from certain “phenomena” (which are described in the preceding section of book III). The first such proposition or law is that “the forces by which the circumjovial planets are continually drawn off from rectilinear motions, and retained in their proper orbits, tend to Jupiter's centre; and are inversely as the squares of the distances of the places of those planets from that centre.” The result of another, separate induction from the phenomena of “planetary motion” is that “the forces by which the primary planets are continually drawn off from rectilinear motions, and retained in their proper orbits, tend to the sun; and are inversely as the squares of the distances of the places of those planets from the sun's centre.” Newton saw that these laws, as well as other results of a number of different inductions, coincided in postulating the existence of an inverse-square attractive force as the cause of various classes of phenomena. According to Whewell, Newton saw that these inductions “leap to the same point;” i.e., to the same law. 
Newton was then able to bring together inductively (or “colligate”) these laws, and facts of other kinds of events (e.g., the class of events known as “falling bodies”), into a new, more general law, namely the universal gravitation law: “All bodies attract each other with a force of gravity which is inverse as the squares of the distances.” By seeing that an inverse-square attractive force provided a cause for different classes of events—for satellite motion, planetary motion, and falling bodies—Newton was able to perform a more general induction, to his universal law.
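The law that Whewell's example turns on can be written in modern symbolic notation (a convention that postdates both Newton and Whewell, so the formula below is an illustrative gloss rather than a quotation from either author):

```latex
F = G\,\frac{m_1 m_2}{r^2}
```

where $F$ is the mutual attractive force between two bodies of masses $m_1$ and $m_2$, $r$ is the distance between them, and $G$ is the gravitational constant (a constant Newton himself did not isolate). Each of the subordinate inductions—satellite motion, planetary motion, falling bodies—postulates a force varying as $1/r^2$; the consilience consists in their all being colligated by this single law.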
What Newton found was that these different kinds of phenomena—including circumjovial orbits, planetary orbits, as well as falling bodies—share an essential property, namely the same cause. What Newton did, in effect, was to subsume these individual “event kinds” into a more general natural kind comprised of sub-kinds sharing a kind essence, namely being caused by an inverse-square attractive force. Consilience of event kinds therefore results in causal unification. More specifically, it results in unification of natural kind categories based on a shared cause. Phenomena that constitute different event kinds, such as “planetary motion,” “tidal activity,” and “falling bodies,” were found by Newton to be members of a unified, more general kind, “phenomena caused to occur by an inverse-square attractive force of gravity” (or, “gravitational phenomena”). In such cases, according to Whewell, we learn that we have found a “vera causa,” or a “true cause,” i.e., a cause that really exists in nature, and whose effects are members of the same natural kind (see 1860a, p. 191). Moreover, by finding a cause shared by phenomena in different sub-kinds, we are able to colligate all the facts about these kinds into a more general causal law. Whewell claimed that “when the theory, by the concurrences of two indications … has included a new range of phenomena, we have, in fact, a new induction of a more general kind, to which the inductions formerly obtained are subordinate, as particular cases to a general proposition” (1858b, 96). He noted that consilience is the means by which we effect the successive generalization that constitutes the advancement of science (1847, II, 74). (For more on consilience, and its relation to realism, see Snyder 2005 and 2006.)
Whewell discussed a further, related test of a theory's truth: namely, “coherence.” In the case of true theories, Whewell claimed, “the system becomes more coherent as it is further extended. The elements which we require for explaining a new class of facts are already contained in our system….In false theories, the contrary is the case” (1858b, 91). Coherence occurs when we are able to extend our hypothesis to colligate a new class of phenomena without ad hoc modification of the hypothesis. When Newton extended his theory regarding an inverse-square attractive force, which colligated facts of planetary motion and lunar motion, to the class of “tidal activity,” he did not need to add any new suppositions to the theory in order to colligate correctly the facts about particular tides. On the other hand, Whewell explained, when phlogiston theory, which colligated facts about the class of phenomena “chemical combination,” was extended to colligate the class of phenomena “weight of bodies,” it was unable to do so without an ad hoc and implausible modification (namely, the assumption that phlogiston has “negative weight”) (see 1858b, 92–3). Thus coherence can be seen as a type of consilience that happens over time; indeed, Whewell remarked that these two criteria—consilience and coherence—“are, in fact, hardly different” (1858b, 95).
A particularly intriguing aspect of Whewell's philosophy of science is his claim that empirical science can reach necessary truths. Explaining this apparently contradictory claim was considered by Whewell to be the “ultimate problem” of philosophy (see Morrison 1997). Whewell explained it by reference to his antithetical epistemology. Necessary truths are truths which can be known a priori; they can be known in this way because they are necessary consequences of ideas which are a priori. They are necessary consequences in the sense of being analytic consequences. Whewell explicitly rejected Kant's claim that necessary truths are synthetic. Using the example “7 + 8 = 15,” Whewell claimed that “we refer to our conceptions of seven, of eight, and of addition, and as soon as we possess the conceptions distinctly, we see that the sum must be 15.” That is, merely by knowing the meanings of “seven,” and “eight,” and “addition,” we see that it follows necessarily that “7 + 8 = 15” (1848, 471).
Once the Ideas and conceptions are explicated, so that we understand their meanings, the necessary truths which follow from them are seen as being necessarily true. Thus, once the Idea of Space is explicated, it is seen to be necessarily true that “two straight lines cannot enclose a space.” Whewell suggested that the first law of motion is also a necessary truth, which was knowable a priori once the Idea of Cause and the associated conception of force were explicated. This is why empirical science is needed to see necessary truths: because, as we saw above, empirical science is needed in order to explicate the Ideas. Thus Whewell also claimed that, in the course of science, truths which at first required experiment to be known are seen to be capable of being known independently of experiment. That is, once the relevant Idea is clarified, the necessary connection between the Idea and an empirical truth becomes apparent. Whewell explained that “though the discovery of the First Law of Motion was made, historically speaking, by means of experiment, we have now attained a point of view in which we see that it might have been certainly known to be true independently of experience” (1847, I, 221). Science, then, consists in the “idealization of facts,” the transferring of truths from the empirical to the ideal side of the fundamental antithesis. He described this process as the “progressive intuition of necessary truths.”
Although they follow analytically from the meanings of ideas our minds supply, necessary truths are nevertheless informative statements about the physical world outside us; they have empirical content. Whewell's justification for this claim is a theological one. Whewell noted that God created the universe in accordance with certain “Divine Ideas.” That is, all objects and events in the world were created by God to conform to certain of his ideas. For example, God made the world such that it corresponds to the idea of Cause partially expressed by the axiom “every event has a cause.” Hence in the universe every event conforms to this idea, not only by having a cause but by being such that it could not occur without a cause. On Whewell's view, we are able to have knowledge of the world because the Fundamental Ideas which are used to organize our sciences resemble the ideas used by God in his creation of the physical world. The fact that this is so is no coincidence: God has created our minds such that they contain these same ideas. That is, God has given us our ideas (or, rather, the “germs” of the ideas) so that “they can and must agree with the world” (1860a, 359). God intends that we can have knowledge of the physical world, and this is possible only through the use of ideas which resemble those that were used in creating the world. Hence with our ideas—once they are properly “unfolded” and explicated—we can colligate correctly the facts of the world and form true theories. And when these ideas are distinct, we can know a priori the axioms which express their meaning.
An interesting consequence of this interpretation of Whewell's view of necessity is that every law of nature is a necessary truth, in virtue of following analytically from some idea used by God in creating the world. Whewell drew no distinction between truths which can be idealized and those which cannot; thus, potentially, any empirical truth can be seen to be a necessary truth, once the ideas and conceptions are explicated sufficiently. For example, Whewell suggests that experiential truths such as “salt is soluble” may be necessary truths, even if we do not recognize this necessity (i.e., even if it is not yet knowable a priori) (1860b, 483). Whewell's view thus destroys the line traditionally drawn between laws of nature and the axiomatic propositions of the pure sciences of mathematics; mathematical truth is granted no special status.
In this way Whewell suggested a view of scientific understanding which is, perhaps not surprisingly, grounded in his conception of natural theology. Since our ideas are “shadows” of the Divine Ideas, to see a law as a necessary consequence of our ideas is to see it as a consequence of the Divine Ideas exemplified in the world. Understanding involves seeing a law as being not an arbitrary “accident on the cosmic scale,” but as a necessary consequence of the ideas God used in creating the universe. Hence the more we idealize the facts, the more difficult it will be to deny God's existence. We will come to see more and more truths as the intelligible result of intentional design. This view is related to the claim Whewell had earlier made in his Bridgewater Treatise (1833), that the more we study the laws of nature the more convinced we will be in the existence of a Divine Law-giver. (For more on Whewell's notion of necessity, see Snyder 1994 and 2006, chapter one.)
An issue of interest to philosophers of science today is the relation between knowledge of the actual practice and history of science and writing a philosophy of science. Whewell is interesting to examine in relation to this issue because he claimed to be inferring his philosophy of science from his study of the history and practice of science. His large-scale History of the Inductive Sciences (first edition published 1837) was a survey of science from ancient to modern times. He insisted upon completing this work before writing his Philosophy of the Inductive Sciences, founded upon their history. Moreover, Whewell sent proof-sheets of the History to his many scientist-friends to ensure the accuracy of his accounts. Besides knowing about the history of science, Whewell had first-hand knowledge of scientific practice: he was actively involved in science in several important ways. In 1825 he traveled to Berlin and Vienna to study mineralogy and crystallography with Mohs and other acknowledged masters of the field. He published numerous papers in the field, as well as a monograph, and is still credited with making important contributions to giving a mathematical foundation to crystallography. He also made contributions to the science of tidal research, pushing for a large-scale world-wide project of tidal observations; he won a Royal Society gold medal for this accomplishment. (For more on Whewell's contributions to science, see Snyder 2011, Ducheyne 2010a, Ruse 1991, and Becher 1986). Whewell acted as a terminological consultant for Faraday and other scientists, who wrote to him asking for new words. Whewell only provided terminology when he believed he was fully knowledgeable about the science involved. In his section on the “Language of Science” in the Philosophy, Whewell makes this position clear (see 1858b, p. 293). 
Another interesting aspect of his intercourse with scientists becomes clear in reading his correspondence with them: namely, that Whewell constantly pushed Faraday, Forbes, Lubbock and others to perform certain experiments, make specific observations, and to try to connect their findings in ways of interest to Whewell. In all these ways, Whewell indicated that he had a deep understanding of the activity of science.
So how is this important for his work on the philosophy of science? Some commentators have claimed that Whewell developed an a priori philosophy of science and then shaped his History to conform to his own view (see Stoll 1929 and Strong 1955). It is true that he started out, from his undergraduate days, with the project of reforming the inductive philosophy of Bacon; indeed this early inductivism led him to the view that learning about scientific method must be inductive (i.e., that it requires the study of the history of science). Yet it is clear that he believed his study of the history of science and his own work in science were needed in order to flesh out the details of his inductive position. Thus, as in his epistemology, both a priori and empirical elements combined in the development of his scientific methodology. Ultimately, Whewell criticized Mill's view of induction developed in the System of Logic not because Mill had not inferred it from a study of the history of science, but rather on the grounds that Mill had not been able to find a large number of appropriate examples illustrating the use of his “Methods of Experimental Inquiry.” As Whewell noted, Bacon too had been unable to show that his inductive method had been exemplified throughout the history of science. Thus it appears that what was important to Whewell was not whether a philosophy of science had been, in fact, inferred from a study of the history of science, but rather, whether a philosophy of science was inferable from it. That is, regardless of how a philosopher came to invent her theory, she must be able to show it to be exemplified in the actual scientific practice used throughout history. Whewell believed that he was able to do this for his discoverers' induction.
Whewell's moral philosophy was criticized by Mill as being “intuitionist” (see Mill 1852). Whewell's morality is intuitionist in the sense of claiming that humans possess a faculty (“conscience”) which enables them to discern directly what is morally right or wrong. His view differs from that of earlier philosophers such as Shaftesbury and Hutcheson, who claimed that this faculty is akin to our sense organs and thus spoke of conscience as a “moral sense.” Whewell's position is more similar to that of intuitionists such as Cudworth and Clarke, who claimed that our moral faculty is reason. Whewell maintained that there is no separate moral faculty, but rather that conscience is just “reason exercised on moral subjects.” For this reason, Whewell referred to moral rules as “principles of reason” and described the discovery of these rules as an activity of reason (see 1864, 23–4). These moral rules “are primary principles, and are established in our minds simply by a contemplation of our moral nature and condition; or, what expresses the same thing, by intuition” (1846, 11). Yet, what he meant by “intuition” was not a non-rational mental process, as Mill suggested. On Whewell's view, the contemplation of the moral principles is conceived as a rational process. Whewell noted that “Certain moral principles being, as we have said, thus seen to be true by intuition, under due conditions of reflection and thought, are unfolded into their application by further reflection and thought” (1864, 12–13). Morality requires rules because reason is our distinctive property, and “Reason directs us to Rules” (1864, 45). Whewell's morality, then, does not have one problem associated with the moral sense intuitionists. For the moral sense intuitionist, the process of decision-making is non-rational; just as we feel the rain on our skin by a non-rational process, we just feel what the right action is.
This is often considered the major difficulty with the intuitionist view: if the decision is merely a matter of intuition, it seems that there can be no way to settle disputes over how we ought to act. However, Whewell never suggested that decision-making in morality is a non-rational process. On the contrary, he believed that reason leads to common decisions about the right way to act (although our desires/affections may get in the way): he explained “So far as men decide conformably to Reason, they decide alike” (see 1864, 43). Thus the decision on how we ought to act should be made by reason, and so disputes can be settled rationally on Whewell's view.
Mill also criticized Whewell's claim that moral rules are necessary truths which are self-evident. Mill took this to mean that there can be no progress in morality—what is self-evident must always remain so—and thus to the further conclusion that the intuitionist considers the current rules of society to be necessary truths. Such a view would tend to support the status quo, as Mill rightly complained. (Thus he accused Whewell of justifying evil practices such as slavery, forced marriages, and cruelty to animals.) But Mill was wrong to attribute such a view to Whewell. Whewell did claim that moral rules are necessary truths, and invested them with the epistemological status of self-evident “axioms” (see 1864, 58). However, as noted above, Whewell's view of necessary truth is a progressive one. This is as much so in morality as in science. The realm of morality, like the realm of physical science, is structured by certain Fundamental Ideas: Benevolence, Justice, Truth, Purity, and Order (see 1852, xxiii). These moral ideas are conditions of our moral experience; they enable us to perceive actions as being in accordance with the demands of morality. Like the ideas of the physical sciences, the ideas of morality must be explicated before the moral rules can be derived from them (see 1860a, 388). There is a progressive intuition of necessary truth in morality as well as in science. Hence it does not follow that because the moral truths are axiomatic and self-evident that we currently know them (see 1846, 38–9). Indeed, Whewell claimed that “to test self-evidence by the casual opinion of individual men, is a self-contradiction” (1846, 35). Nevertheless, Whewell did believe that we can look to the dictates of positive law of the most morally advanced societies as a starting point in our explication of the moral ideas. But he was not therefore suggesting that these laws are the standard of morality. 
Just as we examine the phenomena of the physical world in order to explicate our scientific conceptions, we can examine the facts of positive law and the history of moral philosophy in order to explicate our moral conceptions. Only when these conceptions are explicated can we see what axioms or necessary truths of morality truly follow from them. Mill was therefore wrong to interpret Whewell's moral philosophy as a justification of the status quo or as constituting a “vicious circle.” Rather, Whewell's view shares some features of Rawls's later use of the notion of “reflective equilibrium.” (For more on Whewell's moral philosophy, and his debate with Mill over morality, see Snyder 2006, chapter four.)
Whewell's letters and papers, mostly unpublished, are found in the Whewell Collection, Trinity College Library, Cambridge. A selection of letters was published by I. Todhunter in William Whewell, An account of his Writings, Vol. II (London, 1876) and by J. Stair-Douglas in The Life, and Selections from the Correspondence of William Whewell (London, 1882).
During his lifetime Whewell published approximately 150 books, articles, scientific papers, society reports, reviews, and translations. In the list which follows, we mention only his most important philosophical works relevant to the discussion above. More complete bibliographies can be found in Snyder (2006), Yeo (1993) and Fisch and Schaffer (1991).
- (1831) “Review of J. Herschel's Preliminary Discourse on the Study of Natural Philosophy (1830),” Quarterly Review, 90: 374–407.
- (1833) Astronomy and General Physics Considered With Reference to Natural Theology (Bridgewater Treatise), London: William Pickering.
- (1840) The Philosophy of the Inductive Sciences, Founded Upon Their History, in two volumes, London: John W. Parker.
- (1844) “On the Fundamental Antithesis of Philosophy,” Transactions of the Cambridge Philosophical Society, 7(2): 170–81.
- (1845) The Elements of Morality, including Polity, in two volumes, London: John W. Parker.
- (1846) Lectures on Systematic Morality, London: John W. Parker.
- (1847) The Philosophy of the Inductive Sciences, Founded Upon Their History, 2nd edition, in two volumes, London: John W. Parker.
- (1848) “Second Memoir on the Fundamental Antithesis of Philosophy,” Transactions of the Cambridge Philosophical Society, 8(5): 614–20.
- (1849) Of Induction, With Especial Reference to Mr. J. Stuart Mill's System of Logic, London: John W. Parker.
- (1850) “Mathematical Exposition of Some Doctrines of Political Economy: Second Memoir,” Transactions of the Cambridge Philosophical Society, 9: 128–49.
- (1852) Lectures on the History of Moral Philosophy, London: John W. Parker.
- (1853) Of the Plurality of Worlds. An Essay, London: John W. Parker.
- (1857) “Spedding's Complete Edition of the Works of Bacon,” Edinburgh Review, 106: 287–322.
- (1857) History of the Inductive Sciences, from the Earliest to the Present Time, 3rd edition, in two volumes, London: John W. Parker.
- (1858a) The History of Scientific Ideas, in two volumes, London: John W. Parker.
- (1858b) Novum Organon Renovatum, London: John W. Parker.
- (1860a) On the Philosophy of Discovery: Chapters Historical and Critical, London: John W. Parker.
- (1860b) “Remarks on a Review of the Philosophy of the Inductive Sciences,” letter to John Herschel, 11 April 1844; published as essay F in 1860a.
- (1861) (ed. and trans.) The Platonic Dialogues for English Readers, London: Macmillan.
- (1862) Six Lectures on Political Economy, Cambridge: The University Press.
- (1864) The Elements of Morality, Including Polity, 4th edition, with Supplement, Cambridge: The University Press.
- (1866) “Comte and Positivism,” Macmillan's Magazine, 13: 353–62.
- Becher, H. (1981) “William Whewell and Cambridge Mathematics,” Historical Studies in the Physical Sciences, 11: 1–48.
- ––– (1986), “Voluntary Science in Nineteenth-Century Cambridge University to the 1850s,” British Journal for the History of Science, 19: 57–87.
- ––– (1991), “Whewell's Odyssey: From Mathematics to Moral Philosophy,” in M. Fisch and S. Schaffer (eds.) 1991, pp. 1–29.
- Brewster, D. (1842), “Whewell's Philosophy of the Inductive Sciences,” Edinburgh Review, 74: 139–61.
- Brooke, J.H. (1977), “Natural Theology and the Plurality of Worlds: Observations on the Brewster-Whewell Debate,” Annals of Science, 34: 221–86.
- Buchdahl, G. (1991), “Deductivist versus Inductivist Approaches in the Philosophy of Science as Illustrated by Some Controversies Between Whewell and Mill,” in Fisch and Schaffer (eds.) 1991, pp. 311–44.
- Butts, R. (1973), “Whewell's Logic of Induction,” in R.N. Giere and R.S. Westfall (eds.), Foundations of Scientific Method, Bloomington: Indiana University Press, pp. 53–85.
- ––– (1987), “Pragmatism in Theories of Induction in the Victorian Era: Herschel, Whewell, Mach and Mill,” in H. Stachowiak (ed.), Pragmatik: Handbuch Pragmatischen Denkens, Hamburg: F. Meiner, pp. 40–58.
- Cannon, W. F. (1964), “William Whewell: Contributions to Science and Learning,” Notes and Records of the Royal Society, 19: 176–91.
- Donagan, A. (1992), “Sidgwick and Whewellian Intuitionism: Some Enigmas,” in B. Schultz (ed.) 1992, pp. 123–42.
- Ducheyne, S. (2010a), “Whewell's Tidal Researches: Scientific Practice and Philosophical Methodology,” Studies in History and Philosophy of Science (Part A), 41: 26–40.
- ––– (2010b), “Fundamental Questions and Some New Answers on Philosophical, Contextual, and Scientific Whewell,” Perspectives on Science, 18: 242–72.
- Fisch, M. (1985), “Necessary and Contingent Truth in William Whewell's Antithetical Theory of Knowledge,” Studies in History and Philosophy of Science, 16: 275–314.
- ––– (1985), “Whewell's Consilience of Inductions: An Evaluation,” Philosophy of Science, 52: 239–55.
- ––– (1991), William Whewell, Philosopher of Science, Oxford: Oxford University Press.
- Fisch, M. and S. Schaffer (eds.) (1991), William Whewell: A Composite Portrait, Oxford: Oxford University Press.
- Harper, W. (1989), “Consilience and Natural Kind Reasoning,” in J.R. Brown and J. Mittelstrass (eds.), An Intimate Relation, Dordrecht: D. Reidel, pp. 115–52.
- Herschel, J. (1841), “Whewell on Inductive Sciences,” Quarterly Review, 68: 177–238.
- Hesse, M.B. (1968), “Consilience of Inductions,” in Imre Lakatos (ed.), The Problem of Inductive Logic, Amsterdam: North Holland Publication Co., pp. 232–47.
- ––– (1971), “Whewell's Consilience of Inductions and Predictions [Reply to Laudan],” Monist, 55: 520–24.
- Hutton, R.H. (1850), “Mill and Whewell on the Logic of Induction,” The Prospective Review, 6: 77–111.
- Laudan, L. (1971), “William Whewell on the Consilience of Inductions,” Monist, 55: 368–91.
- ––– (1980), “Why was the Logic of Discovery Abandoned?” in T. Nickles (ed.), Scientific Discovery, Logic, and Rationality, Dordrecht: D. Reidel, pp. 173–183.
- Losee, J. (1983), “Whewell and Mill on the Relation between Science and Philosophy of Science,” Studies in History and Philosophy of Science, 14: 113–26.
- Lugg, A. (1989), “History, Discovery and Induction: Whewell on Kepler on the Orbit of Mars,” in J.R Brown and J. Mittelstrass (eds.), An Intimate Relation, Dordrecht: D. Reidel, pp. 283–98.
- Mill, J.S. (1852), “Dr. Whewell on Moral Philosophy,” Westminster Review, 58: 349–85.
- Morrison, M. (1990), “Unification, Realism and Inference,” British Journal for the Philosophy of Science, 41: 305–332.
- ––– (1997), “Whewell on the Ultimate Problem of Philosophy,” Studies in History and Philosophy of Science, 28: 417–437.
- Niiniluoto, I. (1977), “Notes on Popper as a Follower of Whewell and Peirce,” Ajatus, 37: 272–327.
- Peirce, C.S. (1865), “Lecture on the Theories of Whewell, Mill and Comte,” in M. Fisch (ed.), Writings of Charles S. Peirce: Chronological Edition, Bloomington, IN: Indiana University Press, pp. 205–23.
- ––– (1869), “Whewell,” in Max H. Fisch (ed.), Writings of Charles S. Peirce: A Chronological Edition (Volume 2), Bloomington, IN: Indiana University Press, pp. 337–45.
- Ruse, M. (1975), “Darwin's Debt to Philosophy: An Examination of the Influence of the Philosophical Ideas of John F.W. Herschel and William Whewell on the Development of Charles Darwin's Theory of Evolution,” Studies in History and Philosophy of Science, 6: 159–81.
- ––– (1976), “The Scientific Methodology of William Whewell,” Centaurus, 20: 227–57.
- ––– (1991), “William Whewell: Omniscientist,” in M. Fisch and S. Schaffer (eds.) 1991, pp. 87–116.
- Schultz, B. (ed.) (1992), Essays on Henry Sidgwick, Cambridge: Cambridge University Press.
- Singer, M. (1992), “Sidgwick and 19th century Ethical Thought,” in B. Schultz (ed.), Essays on Henry Sidgwick, Cambridge: Cambridge University Press, pp. 65–91.
- Snyder, L.J. (1994), “It's All Necessarily So: William Whewell on Scientific Truth,” Studies in History and Philosophy of Science, 25: 785–807.
- ––– (1997a), “Discoverers' Induction,” Philosophy of Science, 64: 580–604.
- ––– (1997b), “The Mill-Whewell Debate: Much Ado About Induction,” Perspectives on Science, 5: 159–198.
- ––– (1999), “Renovating the Novum Organum: Bacon, Whewell and Induction,” Studies in History and Philosophy of Science, 30: 531–557.
- ––– (2005), “Confirmation for a Modest Realism,” Philosophy of Science, 72: 839–49.
- ––– (2006), Reforming Philosophy: A Victorian Debate on Science and Society, Chicago: University of Chicago Press.
- ––– (2008), “The Whole Box of Tools: William Whewell and the Logic of Induction,” in John Woods and Dov Gabbay (eds.), The Handbook of the History of Logic (Volume VIII), Dordrecht: Kluwer, pp. 165–230.
- ––– (2011), The Philosophical Breakfast Club: Four Remarkable Men who Transformed Science and Changed the World, New York: Broadway Books.
- Strong, E.W. (1955), “William Whewell and John Stuart Mill: Their Controversy over Scientific Knowledge,” Journal of the History of Ideas, 16: 209–31.
- Wilson, D.B. (1974), “Herschel and Whewell's Versions of Newtonianism,” Journal of the History of Ideas, 35: 79–97.
- Yeo, R. (1993), Defining Science: William Whewell, Natural Knowledge, and Public Debate in Early Victorian Britain, Cambridge: Cambridge University Press.
How to cite this entry. Preview the PDF version of this entry at the Friends of the SEP Society. Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers, with links to its database. |
Serial Programming/Typical RS232 Hardware Configuration
This page outlines a typical serial RS-232 hardware configuration as it can be found in computers and devices of all kinds. It should serve as a guideline and introduction to the direct programming of some serial hardware.
The provided information is generic in nature, since the actual serial hardware found in a particular device or computer varies. These days the hardware is often integrated into a single chip, frequently together with functions unrelated to serial communications, such as a parallel port. Nevertheless, this module provides a good overview of which logical or physical components do what, and where the programming of serial hardware actually happens.
The following figure provides an overview of the involved components, and how they are in principle connected:
 RS232       +-----------+   +-----------+   +-----------+   +-----------+
 Interface   | Line      |   |           |   | Interface |   |           |
 ------------+ Driver /  +---+   UART    +---+   Logic   +---+    CPU    |
             | Receiver  |   |           |   |           |   |           |
             +-----------+   +-----+-----+   +-----+-----+   +-----------+
                                   |               |
                                   |               |
                             +-----+-----+         |
                             | Baud Rate |         |
                             | Generator +---------+
                             +-----------+
We will discuss the components and their purpose in turn.
Line Driver / Receiver
RS-232 communication uses voltages of up to ±15V (some early specs even use ±25V). It also uses inverted logic (high/true/1 is a negative voltage, low/false/0 is a positive voltage). These voltages are far too high for modern (and even older) computer logic. Therefore, RS-232 interfaces typically contain special hardware, a so-called line driver and a line receiver.
The line driver is responsible for converting typical computer logic voltages to the high voltages used on the RS-232 line, and for inverting the logic. This is used when the computer hardware transmits serial data. The voltage output is typically continuously short-circuit safe, as this is required by the RS-232 standard.
The line receiver is responsible for the inverse operation of the line driver. It converts incoming RS-232 signals to voltages safe for computer logic, and it of course also inverts the logic.
Actually, there is usually more than one line driver and receiver in use, because each incoming RS-232 signal in an RS-232 cable needs its own receiver, and each outgoing RS-232 signal needs its own driver. However, typically multiple drivers / receivers are combined in one chip. The chip as such also protects the computer logic from spikes which can occur on the serial wire. Some line driver / receiver chips even generate the necessary RS-232 voltages on-chip, while others need external power sources for these voltages.
Line drivers / receivers are typically not programmable. They are hard-wired into the logic and as such of no great concern for the programmer.
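The conversion a driver / receiver pair performs can be sketched in a few lines. The ±12 V drive level and the ±3 V receiver thresholds below are typical values from the RS-232 standard; this is an illustration, not a model of any particular chip:

```python
def line_driver(bit, v_drive=12.0):
    """Logic level -> RS-232 line voltage (note the inversion)."""
    return -v_drive if bit == 1 else +v_drive

def line_receiver(volts):
    """RS-232 line voltage -> logic level.

    Receivers treat anything above +3 V as 0 and anything below -3 V as 1;
    the band in between is the standard's undefined transition region.
    """
    if volts > 3.0:
        return 0
    if volts < -3.0:
        return 1
    return None  # undefined transition region

print(line_receiver(line_driver(1)), line_receiver(line_driver(0)))  # 1 0
```

The `None` result models the standard's undefined region between -3 V and +3 V, which a real receiver is not required to interpret consistently.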
- See the module MAX232 Driver/Receiver Family for an example of a popular driver / receiver in amateur electronics.
- See the module RS-232 Wiring and Connections.
UART
The UART (universal asynchronous receiver transmitter) is the heart of the serial hardware. It is a chip, or part of a chip, whose purpose is to convert between parallel data and serial data. RS-232 UARTs also typically add the necessary start/stop and parity bits when transmitting, and decode this information when receiving.
A UART typically operates entirely on computer logic voltage. Its serial data input/output voltage is the computer logic voltage, not the serial line voltage. It leaves the actual line interface to a particular line driver / receiver. This line driver / receiver does not necessarily need to be an RS-232 line driver / receiver, but could e.g. also be an RS-422 differential driver / receiver. This, and the fact that baud rate, parity, number of stop bits, and number of data bits are programmable, is the reason why UARTs are called universal. The distinction between UART and line driver / receiver blurs if they are both placed in the same chip. Such chips are typically also sold under the label 'UART'.
UARTs are called asynchronous, because they don't use a special clock signal to synchronize with the remote side. Instead, they use the start/stop bits to identify the data bits in the serial stream.
Thanks to the UART the rest of the hardware, as well as the software application, can deal with normal bytes to hold the communication data. It is the job of the UART to chop a byte into a series of serial bits when sending, and to assemble a series of bits into a byte when receiving. UARTs typically contain eight-bit-wide receiver and transmission buffers, of which not all bits are used if, e.g., a 7-bit transmission is configured. Received serial data is provided in parallel in the receiver buffer; data to be sent is written in parallel to the transmission buffer. Depending on the UART, the buffers might have a depth of just one byte, or of a few bytes (in the range of 15 or 16 bytes). The shallower the buffers are, the more promptly the CPU needs to service the UART. E.g., if the receiver buffer has a depth of just one byte, and the data is not fetched fast enough, the next received data can overwrite the previously received data in the buffer, and the previously received data is lost.
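The chopping of a byte into a serial bit stream can be sketched as follows. This is a generic illustration, not tied to any particular UART; RS-232 UARTs send the least significant bit first:

```python
def frame_byte(byte, data_bits=8, parity=None, stop_bits=1):
    """Bit sequence a UART shifts out for one byte (least significant bit first).

    The line idles at 1 (mark); the start bit is a 0 (space).
    """
    data = [(byte >> i) & 1 for i in range(data_bits)]  # LSB first
    bits = [0] + data                                   # start bit + data bits
    if parity == "even":
        bits.append(sum(data) % 2)       # make the count of 1s even
    elif parity == "odd":
        bits.append(1 - sum(data) % 2)   # make the count of 1s odd
    bits += [1] * stop_bits              # stop bit(s): line returns to mark
    return bits

print(frame_byte(0x41))                 # 8N1 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(frame_byte(0x41, parity="even"))  # 8E1 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
```

Receiving is the inverse operation: detect the falling edge of the start bit, sample the data bits at the agreed bit length, check parity if configured, and reassemble the byte.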
Because the timing on the serial interface is important, UARTs are typically connected to a baud rate generator, either an internal one in the UART chip, or an external one.
A USART (universal synchronous/asynchronous receiver transmitter) is a version of a UART that can also communicate synchronously. Many modern ICs have this feature. However, as RS-232 works asynchronously, the synchronous features of a USART aren't used for serial communication. Depending on the particular type of USART, the synchronous features are either turned off at start-up or need to be turned off explicitly. Once they are off, a USART behaves like a UART as far as RS-232 communication is concerned.
Baud Rate Generator
Principle of Operation
The baud rate generator is an oscillator. It provides a frequency signal which is used to control the timing on the serial interface. Since different line speeds need different timing, the baud rate generation needs to be flexible.
There are two general ways to achieve flexible baud rate generation. Either the baud rate generator itself is programmable and can produce the necessary different frequencies, or the UART has a programmable divider or multiplier, which converts the frequency from the baud rate generator into the required frequencies. Observers might have noticed that there are fixed relations between typical baud rates (300 bps, 600 bps, 1200 bps, 2400 bps, 9600 bps (4 x 2400 bps), etc.). This simplifies the usage of frequency dividers or multipliers to generate the necessary timing.
Depending on the actual UART, the baud rate generator either needs to be some external component, or it is directly integrated into the UART chip. From the outside, the programmatic change of the baud rate generation is the means to control the speed of the serial connection. Often when programming the baud rate one doesn't provide the desired baud rate in 'clear text', but needs to provide some divider or factor. Providing the right divider or factor requires knowledge of the basic frequency of the used baud rate generator.
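For the widespread 8250/16550 family of PC UARTs, which sample each bit at 16 times the baud rate, the divider calculation looks roughly like this (a sketch; the datasheet of the actual UART is authoritative):

```python
def uart_divisor(crystal_hz, baud, oversample=16):
    """Divisor latch value for an 8250/16550-style UART.

    These UARTs sample each bit `oversample` (usually 16) times, so:
        baud = crystal_hz / (oversample * divisor)
    Returns the nearest integer divisor and the baud rate it really yields.
    """
    divisor = round(crystal_hz / (oversample * baud))
    actual = crystal_hz / (oversample * divisor)
    return divisor, actual

# The classic PC baud rate crystal, 1.8432 MHz:
print(uart_divisor(1_843_200, 9600))    # (12, 9600.0)
print(uart_divisor(1_843_200, 115200))  # (1, 115200.0)
```

Because the divisor must be an integer, only baud rates that divide the oversampled clock exactly are reproduced precisely; all others incur a rounding error.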
Oscillator & Magic Quartz Crystal Values
RS-232 communication is asynchronous. There is no explicit means, such as a shared clock signal, by which a transmitter and receiver synchronize. Instead they synchronize on the start bit and further assume a certain bit length. The bit length is a direct function of the baud rate. E.g. 9600 baud on RS-232 means 9600 bits are transmitted in one second. Each bit therefore has a length of 1/9600 seconds. To ensure the transmitter and receiver assume the same bit length, they need to work with the same baud rate. And the baud rates on both sides of the communication need to be precise up to a certain level, otherwise the data transmission fails. Typical errors when the baud rates on the two sides aren't aligned well enough are framing errors and wrongly received or missed bits.
The baud rate generator, as it generates the clock frequency for sending as well as receiving data, is responsible for providing the required baud rate precision. As a rule of thumb, the total difference in the baud rates should not exceed +/- 4% in the worst case. If one distributes this "error budget" equally to both sides, the baud rate on each side needs to be kept within +/- 2% of the nominal value. In fact, one should avoid this worst case and keep the baud rate at least within +/- 1% on each side.
It is common to use a crystal oscillator to build a baud rate generator with an initial precision, temperature drift and long term drift suitable for RS-232. That is, the baud rate generator is based on an oscillator that in turn uses a quartz crystal as a reference. Simpler oscillators, like RC oscillators are usually not good enough.
Typically, all the crystal oscillator electronics is built into the baud rate generator IC. If a UART with an integrated baud rate generator is used, that UART typically contains the electronics. The only thing that typically needs to be added is the quartz crystal itself.
To achieve the common baud rates, the frequency of the quartz crystal needs to be one of certain "magic" values. These values at first glance seem rather odd. However, they are typically multiples of 300 (often the lowest baud rate to support) or 25 (for very low baud rates of 25, 50, 75, 150) and a power of two. Powers of two are used because it is easy to divide a frequency by a power of two, thus making a configurable or programmable baud rate generator possible.
For example, dividing the rather odd crystal frequency 4.915200 MHz by 256 (a power of two), gives the nice value 19200 Hz. Crystals with frequencies like 4.915200 MHz are called baud rate crystals or magic crystals. The following table lists common baud rate crystal frequencies.
- 1.8432 MHz, 3.6864 MHz, 4.9152 MHz, 5.5296 MHz, 6.1440 MHz, 7.3728 MHz, 9.8304 MHz
- 11.0592 MHz, 12.2880 MHz, 12.9024 MHz, 14.7456 MHz, 16.5888 MHz, 18.4320 MHz, 19.6608 MHz
- 20.2752 MHz, 22.1184 MHz, 23.9616 MHz, 25.8048 MHz, 27.6480 MHz, 29.4912 MHz
- 31.3344 MHz, 33.1776 MHz, 35.0208 MHz, 36.8640 MHz
Which crystal frequency to use depends on the baud rate generator / UART in use. The datasheet of the baud rate generator / UART should be consulted for details.
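The "magic" of such a crystal can be verified by dividing it by successive powers of two (an illustrative sketch only; real baud rate generators may combine a fixed prescaler with the programmable divider):

```python
def baud_rates_from_crystal(crystal_hz, max_shift=14):
    """Baud rates reachable from a crystal by exact power-of-two division."""
    return [crystal_hz >> n for n in range(max_shift + 1)
            if (crystal_hz >> n) << n == crystal_hz]  # keep exact divisions only

rates = baud_rates_from_crystal(4_915_200)
print(4_915_200 // 256)  # 19200 -- the division from the example above
print(rates[-5:])        # [4800, 2400, 1200, 600, 300]
```

Every standard rate from 300 up to 19200 baud falls out of the 4.9152 MHz crystal with zero error, which is precisely what makes it a baud rate crystal.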
Interface Logic
The UART and the baud rate generator are just chips, or can be components of a larger chip. To simplify the discussion we assume they are one chip. This chip typically has a bunch of pins which somehow need to be written to or read by the CPU. As such, there needs to be some way for the CPU to address the chip to talk to it. There also needs to be some way for the chip to make itself heard in certain situations.
It is the job of the interface logic to provide this connection. The logic varies considerably from one system architecture to another. One common technique is to map the UART's transmit and receive buffers into the CPU's memory address space. That way they look like ordinary memory to the CPU. Another way is to connect these buffers to some CPU-specific I/O port. Along with the transmit and receive buffers, the CPU must also have access to the control lines and to the baud rate, parity, and character size programming interface of the UART.
Since a UART can't buffer an endless amount of incoming data (some can only buffer one byte in the receiver buffer), the CPU (or some other hardware) needs to read from the UART 'fast enough'. The CPU can either continuously poll the UART to check if new data is available, or the UART can notify when new data is ready. This notification is typically done via a CPU interrupt. It is the job of the interface logic to trigger such an interrupt on behalf of the UART. Both mechanisms (polling- or interrupt-driven) have advantages and disadvantages.
In the other direction, when sending data, there is a similar problem. The CPU needs to deliver the data 'fast enough' to prevent a buffer underrun. This is not fatal, but can degrade the performance and efficiency of the serial communication. The CPU can also deliver the data 'too fast' to the UART causing an overrun of the transmitter buffer in the UART. This can again be controlled by polling (the CPU checks if the UART is ready to send) or via an interrupt (the UART tells the CPU it needs more data). Again, the interface logic has to provide the means to accomplish either or both of these operating modes.
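A polled transmit loop can be sketched against a toy UART object. The status bit shown (bit 5 of the Line Status Register) is taken from the 8250 family as an assumption; a real implementation would read and write hardware registers instead of a Python object:

```python
LSR_THR_EMPTY = 0x20  # bit 5 of the 8250-family Line Status Register

class FakeUART:
    """Toy stand-in for a memory-mapped UART (illustration only)."""
    def __init__(self):
        self.lsr = LSR_THR_EMPTY  # transmitter buffer starts out empty
        self.sent = []
    def read_lsr(self):
        return self.lsr
    def write_thr(self, byte):
        self.sent.append(byte)    # real hardware would shift the byte out

def polled_send(uart, data):
    for byte in data:
        # Busy-wait until the transmitter holding register is free.
        while not (uart.read_lsr() & LSR_THR_EMPTY):
            pass
        uart.write_thr(byte)

u = FakeUART()
polled_send(u, b"OK")
print(u.sent)  # [79, 75]
```

The interrupt-driven alternative replaces the busy-wait with a handler that the interface logic invokes whenever the UART raises its interrupt line.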
The interface logic determines how and where a programmer finds the UART he/she wants to program, and in which operating mode(s) (polling, interrupt driven) he/she can transmit and receive data. The interface logic in, for example, modern PCs is programmable to some extent. However, this is typically done by the BIOS (setting I/O addresses and interrupt numbers). In older PCs and many other computers the interface logic is not programmable. From an application programmer's point of view there is very seldom a need to program the interface logic. Typically it is treated as hard-wired logic.
To program a particular piece of hardware one needs to know:
- How to reach the UART and baud rate generator (how the interface logic works). Typically this requires knowledge of some I/O or memory addresses.
- How the communication with the UART and baud rate generator works (does the interface logic provide polling or interrupts, or both?).
- How to program the UART. Having the UART's data sheet available is very helpful for this.
- How to program the baud rate generator, which just might be part of the UART chip.
Alternatively, one uses the serial programming API of an operating system, which is supposed to know all this, and 'do the right thing'.
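On a POSIX system, that API is termios. The sketch below configures a port for 9600 baud, 8 data bits, no parity, one stop bit (8N1). It uses a pseudo-terminal as a stand-in for a real serial device, which is an assumption made for illustration; on real hardware one would open a device such as /dev/ttyS0 instead. Unix-only:

```python
import pty
import termios

# A pseudo-terminal stands in for a real serial port (illustration only);
# on real hardware: fd = os.open("/dev/ttyS0", os.O_RDWR | os.O_NOCTTY)
master, slave = pty.openpty()

# tcgetattr returns [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
attrs = termios.tcgetattr(slave)

attrs[2] &= ~termios.PARENB                           # no parity bit
attrs[2] &= ~termios.CSTOPB                           # one stop bit
attrs[2] = (attrs[2] & ~termios.CSIZE) | termios.CS8  # 8 data bits
attrs[4] = attrs[5] = termios.B9600                   # 9600 baud, in and out

termios.tcsetattr(slave, termios.TCSANOW, attrs)
print("port configured for 9600 8N1")
```

The operating system driver then takes care of programming the divisor latch, parity, and stop bit settings of whatever UART actually sits behind the device node.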
- The page Programming the 8250 UART provides details for programming a particular UART in a particular hardware combination. |
The Tropospheric Seasonally Varying Mean Climate over the Western Hemisphere
The seasonally varying mean circulation features are intimately linked to horizontal gradients in temperature, which arise from differential heating as a function of both latitude and the underlying surface characteristics. The tropics are heated strongly throughout the year, while middle and high latitude regions experience considerable variation in heating from summer to winter. As a result, the mean meridional temperature gradient and, consequently, the strength of the tropospheric zonal wind (westerlies) vary considerably from summer to winter (compare Figs. 1 and 2). The strongest meridional temperature gradients and strongest westerlies are observed in the middle latitudes of the winter hemisphere. The Southern Hemisphere winter jet stream (maximum in the westerlies in July, Fig. 2) is closer to the equator than the corresponding Northern Hemisphere jet stream (January, Fig. 1). The greatest anticyclonic shear for either hemisphere is found on the equatorward flank of the SH winter jet stream, due to the greater intensity and more equatorward position of the SH jet stream and to the presence of equatorial easterlies throughout the troposphere at that time of the year. Strong sinking motion is observed on the equatorward flank of the winter jet streams, with the strongest direct (Hadley) circulation occurring during the southern winter (Fig. 2).
Zonal asymmetries in the atmospheric circulation in the western hemisphere arise primarily from the difference in thermal capacity between land and water. Continental areas are often warmer during the summer and cooler during the winter than neighboring oceanic regions. This results in a seasonally varying zonal temperature gradient in the vicinity of the east and west coasts of continents, and a seasonal variation in the intensity of the mean meridional wind (Figs. 3 and 4). The troposphere over the southwestern United States (hereafter US) is heated strongly during the boreal summer season (June-September), which contributes to a pattern of zonal temperature gradients (Fig. 3, top) that favors southerly (northerly) upper-tropospheric winds over the West Coast (eastern US) (Fig. 3, bottom). This pattern is reversed during the winter (November-February). Thus, the mean upper-tropospheric circulation over the Southwest US is monsoon-like, being anticyclonic during the summer and cyclonic during the winter. This reversal in the circulation between summer and winter and its close association with the annual cycle in heating over the continent have been discussed previously (e.g., Kousky and Srivatsangam 1983).
Similarly, the southern Amazon Basin and Bolivian altiplano are heated strongly during the austral warm season (October-March) resulting in an enhanced tropospheric zonal temperature gradient and enhanced upper-tropospheric meridional flow in the vicinity of both coasts of South America (Fig. 4). The strengths of the zonal temperature gradient and meridional flow near the west and east coasts of South America are measures of the intensity of the Bolivian anticyclone, which dominates the upper-tropospheric flow over the region during the austral summer. Unlike the pattern over the US, the flow over South America does not show a reversal in circulation from summer to winter. Instead, the winter flow in the upper troposphere is nearly zonal (near zero meridional component, Fig. 4, bottom), which is probably due to the fact that most of the South American land mass is located in the tropics and subtropics, and not in the middle and high latitudes as is the case for North America.
In both hemispheres the summer monsoon-like upper-tropospheric continental anticyclones are accompanied by oceanic troughs (e.g., Figs. 33 and 35). These troughs, located on the equatorward flanks of the subtropical surface highs, are cold core, and feature sinking vertical motion and a lack of deep convective clouds. |
On 28 October 2012 the Haida Gwaii earthquake (magnitude 7.8) struck off the coast of British Columbia. Tsunami warnings triggered coastal evacuations in Canada, the northern United States, and Hawaii, but no deaths or significant structural damage occurred. Those tsunami warnings were generated by traditional methods that rely on a fault model of the tsunami source. Now Gusman et al. suggest that source data may not be necessary for real-time tsunami forecasts.
The researchers compiled Haida Gwaii tsunami data from dozens of ocean bottom seismometers, equipped with pressure gauges, that had been installed offshore of Oregon and California. Spaced 10 to 50 kilometers apart, these gauges directly sense a passing tsunami wave, providing much denser data than are normally collected for tsunamis. The authors used these data to compare two different forecasting methods.
In the first method, the scientists used pressure gauge data and a fault model to estimate the locations of fault slip that caused the tsunami. This allowed them to calculate sea surface displacement and simulate tsunami size and timing. The second method ignored the earthquake source. Instead, pressure gauge data continuously fed and refined a wave field model that predicted the tsunami’s movement.
When the researchers compared the simulations with observations of the Haida Gwaii tsunami, they found that both methods accurately forecast tsunami timing and amplitude, with a warning time of 30 minutes or more. The success of the second method suggests that with a dense enough sensor array, real-time tsunami predictions do not require a fault model. (Geophysical Research Letters, doi:10.1002/2016GL068368, 2016)
—Sarah Stanley, Freelance Writer
Citation: Stanley, S. (2016), Streamlining rapid tsunami forecasting, Eos, 97, doi:10.1029/2016EO052675. Published on 23 May 2016.
Shintō Timeline (text)
Ancient Origins of Shintō
Because Shintō is an indigenous cultural tradition emerging from ancient Japan, its origins cannot be clearly specified. However, there is archaeological evidence of belief in kami tracing back to the Yayoi period, 300 BCE - 300 CE. Though kami worship and shrine Shintō became systematized around the late 6th century CE, they were based on traditional forms that long preceded systematization. These ancient Shintō forms, rituals, and beliefs were not formalized or centralized, but were highly local, diverse, and community-based.
4 BCE The Ise Grand Shrine
The Ise Grand Shrine is one of the oldest, most important, and holiest sites in Shintō and Japanese history, and was first established in 4 BCE. It is located in Ise, Mie Prefecture of Japan, and is dedicated to the sun goddess Amaterasu.
3 BCE The Tsubaki Grand Shrine
The Tsubaki Grand Shrine, a Shintō shrine in Suzuka, Mie Prefecture, Japan was founded in 3 BCE. It is one of the oldest shrines in Japan, and the principal shrine of the deity Sarutahiko-no-Ōkami.
Ca. 456-459 CE Kami Rituals in Clan Communities
Clans with specializations that relate to kami ritual first appear. Many members of clan communities practiced sacred dancing, divination, and spirit possession.
6th century CE Buddhism and Shintō Meet
In the latter part of the Kofun period (300-538 CE), Buddhism entered Japan from Korea and began to interact with Shintō. Buddhism greatly influenced Shintō, and they syncretized in significant ways -- for example, kami were integrated into Buddhist cosmology. The introduction of Buddhism to Japan in the 6th century also led to the development of the term "Shintō" as a way to refer to the particular practices associated with the kami.
645-710 CE The Hakuhō Period
During the Hakuhō period, Shintō was established as the imperial faith of Japan, ending the previous predominance of clan Shintō. As the notion emerged that the Emperor and court had religious obligations, state leaders began to carry out court rites ensuring that kami would protect Japan. The Ise shrine became the main imperial shrine.
Because of the increasing influence of mainland Asian thought in the late 7th century CE, Japanese religion and laws began to be codified. One example of this is the Taiho Code (701 CE), or Ritsuryō, which codified Shintō rituals and required registration of all priests, monks, nuns, and temples.
8th Century CE Kojiki and Nihon Shoki
The Kojiki (712 CE) and Nihon Shoki (720 CE), the oldest classical Japanese texts and among the earliest written sources on kami worship, were written in the early Nara period (710-794 CE). These texts are compilations of Japanese mythology that comprise the basis of Shintō. They were intended to integrate Taoist, Confucian, and Buddhist themes into Shintō and to bolster the Imperial house. Throughout this same period, Shintō and Buddhism became deeply intertwined in Japanese culture and society.
759 CE The Manyoshu
The Manyoshu or Collection of 10,000 Leaves was written in 759. It is a classic of Japanese poetry and an important source in the Shintō tradition.
845 CE - 903 CE The Life of Tenjin
Sugawara no Michizane, also known as Kan Shōjō or Kanke, was a scholar, court official, and poet who lived from 845-903 CE. Today, he is known as Tenjin and revered as a deity of learning in Shintō.
1168 CE Rebuilding of the Itsukushima Shrine
The Itsukushima Shrine is one of the most iconic Shintō sites in Japan, and is said to have been first established in 593 CE under Empress Suiko. In 1168 CE, Taira no Kiyomori, a military leader, rebuilt the shrine in a new architectural style that is its present form.
1275 CE Imperial Court Elevates Kami
The imperial court elevates all kami by one rank in honor of their success in warding off Mongol invasion.
17th-18th centuries CE Shintō in the Early Modern Period
Throughout the early modern period, Japanese civil religion was characterized by heavy Confucian influences, but popular religion remained a blend of Shintō and Buddhist practices and beliefs. Shintō began to develop a stronger intellectual tradition during this time.
1838 CE Founding of Tenrikyo
Tenrikyo, or “Religion of Heavenly Truth,” is a Japanese new religion with a strong Shintō orientation. It was founded in 1838 by female religious leader Nakayama Miki (1798-1887) as a result of a series of revelations she received.
1859 CE Founding of Konkokyo
Bunji Kawate (1814-1883), also known as Konko Daijin, founded the Japanese new religion Konkokyo in 1859 CE. Konkokyo centers on belief in and worship of a single all-sustaining spirit that flows through all things. Later after Konko Daijin’s death, Konkokyo was classified as one of the thirteen sects of Shintō.
1868 CE State Shintō
State Shintō emerged in 1868 when the Meiji monarchy established Shintō as the foundation of the modern state of Japan. This involved largely unsuccessful attempts to disentangle “real” Shintō from other influences, especially Buddhism, and to undermine the prominence of Buddhism in Japan. Shrines were brought under administrative control of the Department of Divinities (Jingikan).
A few years later, the Meiji government designated 13 new religious movements as forms of “Sect Shintō.” The initial Meiji practice of sponsoring shrines declined, but the state continued to use Shintō mythologies in its nationalism and in legitimizing the Emperor’s power. State Shintō was ultimately abandoned after World War II.
1946 CE Shintō after World War II
In the wake of the second World War, State Shintō was abandoned by the Japanese government. However, Shintō remained an integral part of Japanese life and society. In 1946, many Japanese shrines organized themselves into the Association of Shintō Shrines as a way to foster coordination and cooperation. By the late 1990s, approximately 80% of shrines in Japan belonged to the Association.
1969 CE Shintō Groups in International Interfaith Organizations
As early as the 1930s, there was frequent contact between Shintō priests at the Tsubaki Grand Shrine and Unitarians, both Japanese and American. This relationship later led to Japanese Shintō groups joining the International Association for Religious Freedom and attending the 1969 IARF Congress in Boston, MA.
1986 CE The Tsubaki Grand Shrine of America
The Tsubaki Grand Shrine of America was built in 1986 in Stockton, CA, and in 2001 moved to its current location in Granite Falls, Washington state. It was the first Shintō shrine built in the mainland United States after World War II. Tsubaki Grand Shrine of America is a branch of Tsubaki Ōkami Yashiro, one of the oldest shrines in Japan. The current Guji, or Head Priest, is Rev. Koichi Barrish, who is the first American priest in Shintō history.
Today, aspects of Shintō have been integrated with various traditions and new religious movements in Japan. Shintō is currently the most popular religion in Japan, with around 80,000 public Shintō shrines existing in Japan today. Shintō is also practiced in many different parts of the world, alongside other traditions and religious practices. In the United States, though there are a relatively small number of practitioners, there are several Shintō shrines.
Selected Publications & Links
Though there are significant Chinese, Japanese, Vietnamese, and Korean immigrant communities in Greater Boston, East Asian traditions such as Confucianism, Daoism, and Shintō are difficult to survey as there are very few religious centers. These traditions are deeply imbedded in the unique history, geography, and culture of their native countries and are often practiced in forms that are not limited to institutional or communal settings. |
For the first time, scientists have pinpointed neurons in the human brain that respond to singing, but not to other types of music. The neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to regular speech or instrumental music.
The new study, published in Current Biology, builds on previous research in which the same team, using functional magnetic resonance imaging (fMRI), a technique that measures blood flow in the brain as a proxy for neural activity, found a population of neurons in the brain that responds to music in general.
“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” says senior author Nancy Kanwisher, professor of cognitive neuroscience at MIT. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”
Now, taking it a step further, the team has collected recordings of electrical activity taken at the surface of the brain, which is much more precise but has one problem: it must be carried out intracranially.
The higher-resolution data was obtained using electrocorticography (ECoG), where the electrical activity of the brain is recorded by electrodes placed inside the skull. ECoG cannot typically be performed on humans because it is such an invasive procedure, though it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures.
Some patients, already monitored for several days to determine the origin of their seizures before operating, also agree to have their brain activity measured while performing certain tasks for scientific research. In this case, data from 15 participants listening to a collection of 165 sounds – the same sounds used in the earlier fMRI study – were collected over the course of several years.
“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music,” explains Sam Norman-Haignere, a former MIT researcher and now assistant professor of neuroscience at the University of Rochester Medical Center in the U.S.
“At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart.”
Using a new statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data recorded by each electrode.
“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” says Norman-Haignere. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”
In the second part of the study, the researchers developed a new mathematical method for combining the data from ECoG and fMRI. Because fMRI can cover a much larger area of the brain, this allowed the researchers to determine precisely where the neural populations that respond to singing are located.
This spot is at the top of the temporal lobe, near language and music regions of the brain, suggesting these neurons might be responding to song features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing.
In the future, the team hopes to learn more about which aspects of singing cause the response of these neurons.
Originally published by Cosmos as Brain pleasers: the neurons that respond to singing
Imma Perfetto is a science journalist at Cosmos. She has a Bachelor of Science with Honours in Science Communication from the University of Adelaide.
One of the goals of Open Astronomy Schools is to assemble a large amount of high quality educational activities and resources to use in Astronomy education. Everyone is invited to submit a resource or activity! Below is a small list of selected resources that can help you to propose your training session.
- Stellarium – This free open-source planetarium lets you explore the cosmic light of a realistic sky from the comfort of your own computer, just like the one you see with the naked eye, binoculars or a telescope.
- Celestia – This free space simulation lets the user explore our universe in three dimensions, travel throughout the solar system, and even venture beyond our galaxy. Experience up close the objects whose cosmic light we can only glimpse from here on Earth.
- Mitaka – Mitaka is software for visualizing the known Universe with up-to-date observational data and theoretical models, developed by the Four-Dimensional Digital Universe (4D2U) project of the National Astronomical Observatory of Japan (NAOJ). Mitaka users can seamlessly navigate through space, from the Earth to the edges of the known Universe.
- WorldWide Telescope – This is a rich visualization environment that functions as a virtual telescope, bringing together cosmic light imagery from the best ground- and space-based telescopes to enable seamless, guided explorations of the universe.
- Salsa J – This free, student-friendly software allows students to display, analyse, and explore the cosmic light from real astronomical images and other data in the same way that professional astronomers do, making the same kind of discoveries that lead to true excitement about science.
- Extrasolar Planets Lab – This virtual lab introduces the search for planets outside of our solar system using the Doppler and transit methods. It includes simulations of the observed radial velocities of single-planet systems and introduces the concepts of noise and detection, enabling students to explore the light that reaches us from extrasolar planets.
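The Doppler idea the lab simulates can be sketched in a few lines of code: a planet on a circular orbit makes its star's line-of-sight velocity trace a sinusoid, and measurement noise determines whether that wobble is detectable. The sketch below is illustrative only; the semi-amplitude, period, and noise level are arbitrary assumptions, not values taken from the lab itself.

```python
import numpy as np

# Toy radial-velocity (Doppler) simulation. All numbers here are
# illustrative assumptions, not values from the Extrasolar Planets Lab.
def radial_velocity(t, K=55.0, period=4.2, phase=0.0):
    """Line-of-sight velocity (m/s) of a star hosting one planet on a
    circular orbit: a sinusoid with semi-amplitude K and a fixed period."""
    return K * np.sin(2.0 * np.pi * t / period + phase)

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 30.0, 40))     # 40 observation epochs (days)
rv_true = radial_velocity(t)                # noiseless planet signal
rv_observed = rv_true + rng.normal(0.0, 3.0, t.size)  # 3 m/s noise

# Detection, loosely: the sinusoid's amplitude must stand out of the noise.
print("largest measured velocity (m/s):", rv_true.max())
print("scatter of residuals (m/s):", np.std(rv_observed - rv_true))
```

In the lab, students vary the planet's mass and orbital distance, which changes the semi-amplitude `K`, and then judge whether the signal can be recovered from noisy data — exactly the trade-off this sketch illustrates.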
- Astronomy Textbook – Astronomy is a free, introductory, open-source textbook, designed to meet the scope and sequence requirements of one- or two-semester introductory astronomy courses. The book is accompanied by an Open Education Resource Hub with free ancillary materials.
- ESA’s Fleet Across the Spectrum – With this poster you’ll be able to explain and discover with your students ESA’s main astrophysics missions and their observational coverage across the electromagnetic spectrum.
Commonly Asked Questions about Addiction and Treatment
What are the symptoms of alcoholism?
Alcoholism, also known as Alcohol Use Disorder (AUD), is a chronic condition characterized by an inability to control alcohol consumption despite adverse consequences. The symptoms of alcoholism can vary among individuals but typically include a combination of physical, psychological, and behavioral signs. Some common symptoms include:
- Increased tolerance: A need for increasing amounts of alcohol to achieve the same desired effect, or experiencing diminished effects with continued use of the same amount.
- Withdrawal symptoms: Experiencing physical and psychological symptoms when not drinking, such as tremors, sweating, nausea, anxiety, irritability, or insomnia.
- Loss of control: An inability to limit alcohol consumption, often drinking more or for a longer period than intended.
- Neglect of responsibilities: Failing to fulfill work, school, or family obligations due to alcohol use.
- Social isolation: Withdrawing from social activities or hobbies once enjoyed, in favor of drinking.
- Continued use despite consequences: Continuing to consume alcohol despite negative consequences, such as relationship problems, health issues, or legal troubles.
- Cravings: Experiencing strong urges or cravings to drink alcohol.
- Unsuccessful attempts to quit: Repeated attempts to cut down or quit drinking, without success.
- Risky behavior: Engaging in risky behaviors while under the influence of alcohol, such as driving, operating machinery, or engaging in unprotected sex.
- Time spent on alcohol: Spending a significant amount of time obtaining, consuming, or recovering from the effects of alcohol.
- Physical dependence: Developing a physiological reliance on alcohol, leading to withdrawal symptoms when alcohol consumption is reduced or stopped.
- Neglect of self-care: Neglecting personal hygiene, nutrition, or overall well-being as a result of alcohol use.
How can you protect children in a substance-abusing family?
Protecting children in a substance-abusing family can be a significant challenge. Here are several steps that can be taken to ensure the safety and well-being of children in such circumstances:
- Recognize the Problem: The first step in protecting children is acknowledging the issue. Denying the existence of substance abuse can lead to further harm.
- Prioritize Child's Safety: If the substance abuse is causing dangerous situations, the child's safety must come first. This might mean making difficult decisions, such as temporary separation from the substance-abusing family member.
- Seek Professional Help: Reach out to professionals who can guide you through this situation. Social workers, psychologists, and addiction specialists can provide valuable assistance and resources.
- Encourage and Support Treatment: If the person with the addiction is willing, encourage them to seek professional help. Therapy, rehab, and support groups can all be beneficial.
- Educate the Child: Age-appropriate education about drug and alcohol abuse can be helpful. This can help them understand it's not their fault and that the substance abuse is a disease.
- Provide a Stable Environment: Create an environment that provides as much stability and routine as possible. This can help the child feel more secure amidst the chaos that substance abuse can bring.
- Offer Emotional Support: Make sure the child knows they can express their feelings and fears to you. Validating their feelings and offering comfort is crucial.
- Seek Support for the Child: Counseling or support groups specifically for children of substance abusers can provide them with tools to cope.
- Report Neglect or Abuse: If the substance abuse leads to neglect or abuse, it must be reported to local child protective services. This can be a painful step, but it's necessary to ensure the child's safety.
- Encourage Healthy Coping Mechanisms: Teach the child healthy ways to handle their emotions, such as through art, music, journaling, sports, or talking about their feelings.
Can a drug addict change?
Yes, a person struggling with drug addiction can certainly change. It's important to understand that addiction is a chronic, but treatable, disease. Like other chronic diseases, it's not about a "cure" but about managing the condition effectively.
Overcoming addiction typically involves a combination of self-awareness, willingness to change, support, and professional treatment. A key part of the process is the individual's motivation to improve their life and overcome their dependency on substances.
However, recovery from addiction often involves setbacks and challenges. The process can be difficult and time-consuming, requiring substantial personal commitment and support from others. Professional treatment can take several forms, including detoxification, medication-assisted therapy, counseling, and support groups.
Many people who were once addicted to drugs have gone on to live productive, healthy, and fulfilling lives. The journey to recovery is often a lifelong process of maintaining sobriety and managing triggers and cravings.
While change is indeed possible for someone struggling with addiction, it is typically a complex process requiring substantial effort, support, and treatment.
Reactions caused by food allergies are a major concern for food preparation, catering, retail and hospitality businesses. Unlabelled food that contains food allergens can be fatal when consumed, as the death of teenager Natasha Ednan-Laperouse in 2016 illustrates.
Natasha died of anaphylaxis after eating sesame in a takeaway baguette purchased at Heathrow Airport. Her death resulted in ‘Natasha’s Law’, which ensures food businesses must include full ingredients on pre-packaged food, including food allergens.
Allergen labelling isn’t just reserved for pre-packaged food. The Food Information Regulation 2014 requires that food businesses provide accurate information about allergenic ingredients in all food they produce, process, provide or sell.
The regulations require that any of the 14 food allergens must be included on nutritional labels or in information such as menus. The information should detail the allergens used, and these should be clearly highlighted when listed alongside other ingredients, such as by using bold type or underlining allergens. Labelling should include substances produced or derived from allergens or used in processing the food.
Every year thousands of people suffer illness as a result of food allergens. Our food allergy awareness and online food hygiene course provides fundamental training for food handlers working in catering and other environments where food is prepared, cooked and handled.
What is a food allergen?
An allergen is a substance that causes an allergic reaction in humans or animals.
Allergens take various forms, from cosmetics and medications to biological allergens such as moulds, mites and pollen. Food is the largest category of allergens and covers a range of foods, from shellfish to nuts.
Allergic reactions can vary from extremely mild to fatal, with reactions including hives, rashes, respiratory symptoms, anaphylaxis, vomiting, cramps, swelling and headache. Small amounts of sesame consumed by someone with a sesame allergy, for example, can result in skin, gastrointestinal and respiratory reactions that may trigger a fatal anaphylactic response.
Why label the 14 food allergens?
Labelling the 14 food allergens ensures that customers, employees and others can clearly understand which foods contain allergens. All food handlers – including food preparation employees, managers, supervisors and service employees such as waiters – need to understand that food they prepare, sell or serve is safe for customers with allergies or food intolerances.
The UK has some of the highest rates of allergic conditions in the world. According to the Food Standards Agency, around 2% of adults and up to 8% of children have a food allergy. An estimated 44% of British adults suffer from at least one allergy, according to Allergy UK.
The Food Standards Agency has further guidance on how to label food correctly and display allergen information.
14 food allergens to label
Food businesses must label the 14 major food allergens clearly on all food by law. The Food Standards Agency recommends allergy training for all employees and those involved with the processing, manufacture and supply of food.
This includes pre-packaged foods such as sandwiches, food products such as crisps, cereals and ready meals, and restaurant or takeaway food. Allergen information should be available to all employees involved in food preparation or those who come into contact with food. Regulations apply to all places that supply food, such as canteens, takeaways, restaurants, events or food supplied as part of bed and breakfast accommodation.
Here are the 14 major food allergens you should ensure are clearly labelled on any food.
Celery
Celery stalks, leaves, seeds and the root – known as celeriac – are all potential allergens. Celery is used as an ingredient for a wide range of dishes and can be found in salads, soups, meat products, ready meals, stock cubes and celery salt.
Cereals containing gluten
Gluten is a major source of food intolerance and can result in a serious allergic reaction. Gluten-containing cereals include barley, oats, rye and wheat, including Spelt and Kamut. Cereals are a staple ingredient in many dishes and food products containing flour, including cakes, pasta, bread, batter, baking powders, pastry and foods dusted with flour.
Crustaceans
Shellfish such as crab, lobster, prawns and scampi are types of crustaceans that can result in an allergic reaction. As well as shellfish dishes, crustaceans are often used as sauces and pastes in various foods, such as curries or salads.
Eggs
Eggs are typically used in cakes, meat products, mayonnaise, mousses, pasta, quiche, sauces and pastries. Some foods are also brushed or glazed with egg, such as baking and pastry products.
Fish
Pieces of fish or any dishes made from fish pose an allergy risk. Fish can also be found in relishes, salad dressings, Worcestershire sauce, stock cubes and are sometimes used to top pizzas.
Lupin
The seeds of lupin flowers are crushed to make flour and can be used in bread, pastries and pasta.
Milk
Milk is a common food allergen. It is an ingredient in butter, cheese, cream, yoghurt, powdered soups and sauces. It can also be found in foods that have been brushed or glazed with milk.
Molluscs
This includes mussels, snails, squid and whelks. These can frequently be sold as mollusc dishes, such as steamed mussels in a sauce, or as an ingredient in fish stews and sauces such as oyster sauce.
Mustard
This includes liquid mustard, mustard powder and mustard seeds. Mustard can also be found in bread products, curries, marinades, meat products, salad dressings, sauces and soups.
Nuts
This includes all nuts that grow on trees, such as cashews, hazelnuts, almonds, pistachios, pecans, walnuts and macadamia. Nuts can be found in bread products, crackers, desserts, marzipan, biscuits, sauces, oils, nut powders (often used in Asian curries), stir-fried dishes and ice cream.
Peanuts
Peanuts are actually legumes and are part of the bean family. As they grow underground, they are also known as groundnuts. They are often used as an ingredient in biscuits, cakes, curries, desserts, sauces (such as satay sauce), and groundnut oil and peanut flour.
Sesame seeds
These seeds can be found in bread (most commonly sprinkled on hamburger buns, for example) as well as breadsticks, houmous, sesame oil and tahini. They are sometimes toasted and used in salads.
Soya
This can be found in bean curd, edamame beans, miso paste, textured soya protein, soya flour or tofu. It’s often found in desserts, ice cream, meat products, sauces and vegetarian/vegan products.
Sulphur dioxide (sometimes known as sulphites)
Sulphur dioxide and sulphites are frequently used as preservatives for dried fruit products such as raisins, apricots and prunes. They can also be found in soft drinks, meat products, vegetables, and wine and beer. If you have asthma, you have a greater risk of developing a reaction to sulphur dioxide.
Ensure your employees are aware of the risks presented by the 14 food allergens with our food allergy awareness and online food hygiene course, including how to recognise food safety hazards and the importance of food temperature control.
Examples of ways to differentiate instruction
TeachersFirst's Thinking Teachers who write our resource reviews often have suggestions that have worked in their classrooms. Open the reviews to the "more" view to see ideas for using specific resources as tools to differentiate for a variety of learners. Alternatively, use the keyword search tool at the left of this page to search for a curriculum topic and the term "differentiate." For example, search fractions differentiate (with "all the words" selected for the search).
Grades K to 12
In the Classroom: Download and use SwifDoo PDF for many of your classroom needs. Edit documents to differentiate instruction based on student interests and abilities. If your original document isn't in PDF format, use a conversion tool such as CleverPDF, reviewed here, to convert your file to PDF and begin using SwifDoo. Use the annotation feature as a collaborative tool for you and your students. For example, add feedback to a student document as an annotation and allow them to respond on the same document. Share the same feature with students working on collaborative projects as a tool for sharing ideas within a single document. Add a password to sensitive documents shared with parents, such as behavior reports or feedback on academic progress. Use the Merge tool to combine multiple files to create remote learning packets, share missed classroom assignments, or create a class handbook with pertinent information.
Grades K to 12
In the Classroom: Add this extensive search library to your current toolbox of resources for classroom and professional use. Search for ideas when planning upcoming units and lessons and provide differentiated instruction to meet your students' learning needs. Use a learning management system such as Eduflow, reviewed here, or Classkick, reviewed here, to easily create and share personalized instruction that includes resources found on this site along with your current lessons and materials.
Grades 6 to 12
In the Classroom: Bookmark and save this website as a resource for finding supplemental materials for your classroom and professional development information for personal use. For example, if you teach algebra, use the search feature to find introductory and prealgebra textbooks to reinforce concepts to students differently than your current teaching materials. For students who need enrichment material, take advantage of algebra 2 books to provide differentiated instruction to meet their learning needs. This resource would be ideal to use in a remote learning situation. Consider curating this site along with other open education resources (OER) using Wakelet, reviewed here. Create a collection in Wakelet of your OER resources and share it with colleagues as a professional resource.
Grades K to 12
In the Classroom: Use SchoolStack to differentiate learning for different student needs and abilities by quickly modifying lesson activities to share with individual students or groups. Replace your current homework activities using SchoolStack to provide students with various options for completing learning activities. Offer activities that meet student interests and learning styles within each stack to encourage student interest and participation. When teaching blended learning or remote learning activities, use SchoolStack to share information with students and gather data and feedback from their participation in the lessons.
Grades K to 12
In the Classroom: Take advantage of the free resources found on this site to introduce HyperDocs into your classroom instruction or enhance your current use of HyperDocs. If you are new to using HyperDocs, watch this archived recording of OK2Ask: Believe the Hype! Using HyperDocs for Innovative Instruction, reviewed here, to learn about creating and using HyperDocs. Share this resource with your peers when collaborating on lessons and instructional activities. Use HyperDocs to differentiate instruction for the variety of student needs in your classroom or as a flipped learning activity.
Grades 5 to 12
tag(s): art history (80), body systems (41), business (50), chinese (43), drawing (59), environment (221), financial literacy (93), french (71), geology (63), japanese (46), latin (20), music theory (45), narrative (13), novels (27), nutrition (132), oceans (135), OER (41), photography (129), plagiarism (30), poetry (186), psychology (65), robotics (24), romeo & juliet (8), short stories (18), sociology (23), space (204), spanish (100), STEM (225), writers workshop (33)
In the Classroom: Bookmark and save this site as a supplemental resource for your current lessons, as a resource for students to learn about subjects not covered in their current courses, and to differentiate learning for students. For example, provide remediation to high school students by sharing the 9th or 10th-grade literature and composition courses as a review activity or enhance your British Literature unit by assigning a module that focuses specifically on 17th, 18th, or 19th-century British literature. Consider assigning different activities to groups of students to present to their peers. Ask them to use an infographic creator such as the Canva Infographic Creator, reviewed here, as a tool for sharing important information. As a final learning extension, create a digital class book using Ourboox, reviewed here, to share understanding of the content learned. Include text, images, maps, and more in the student-created books.
This site includes advertising.
In the Classroom: Flashcard Factory is an excellent tool for both in-person and remote learning. Use this feature to create vocabulary lists for spelling, science terms, social studies events, etc. Differentiate learning by creating lists for different student abilities or interests. Because students are the creators, they are engaged and more motivated in the learning process. Extend learning by asking students to write short stories or create writing journals using the vocabulary words used in the flashcards. For example, search for vocabulary at Read Write Think, reviewed here, to find the lesson plan for My World of Lists: Building Vocabulary Lists. This lesson culminates with students creating a "My World of Words Journal."
Grades 5 to 12
tag(s): charts and graphs (165), coordinates (13), data (133), decimals (89), division (101), equations (121), exponents (36), factoring (25), factors (28), fractions (164), functions (52), geometric shapes (133), inequalities (22), multiplication (123), negative numbers (12), number sense (70), place value (36), probability (94), quiz (62), quizzes (82), sequences (11), sequencing (17), Teacher Utilities (127)
In the Classroom: Use Edia to create, share, and differentiate math practice questions with students. Create practice sheets for students on the fly as you assess understanding of concepts or review previously learned material. If parents ask for additional support for their students, create individual practice on the necessary skills. Some of Edia's explanations may take a different approach than methods taught in your classroom; use this to your advantage and ask students to share different methods for approaching problems. Ask students to share their approaches using a screen recording tool such as Free Screen Recorder Online, reviewed here, or with Flipgrid, reviewed here, as a video response that includes use of the included whiteboard tool.
Grades K to 12
In the Classroom: Bookmark #GoOpenVA to use as your first stop in lesson planning. Take advantage of the search filters to narrow down the content and grade-level information to suit your needs. This website is also an excellent resource for finding materials to differentiate instruction. Use higher-level activities to challenge gifted students and search for content for remediation. As you gather resources into a collection, or lesson plans, be sure to think about ways to incorporate technology in meaningful ways to enhance and extend learning.
Grades 2 to 12
In the Classroom: Bookmark and save this brain teaser site to use throughout the school year. Share a problem of the week with your students to complete as homework or during a work center. Provide teasers of different levels of difficulty to differentiate and challenge your students. Enhance student learning by asking them to explain their success in solving challenges and sharing their process to find the correct solution. Use Flipgrid, reviewed here, to share your weekly teasers, then have students create and share a video response. Ask students to use the tools on Flipgrid such as the whiteboard, stickers, and text to explain their responses in detail. Extend learning further by creating a class book using Imagine Forest, reviewed here. Use Imagine Forest to make and share a digital book of brain teasers. Use the interactive elements to add links to audio suggestions for tackling problems or link to video solutions on the final pages of your book.
Grades K to 9
In the Classroom: Enrollment in Mensa isn't required to take advantage of the many resources found on this site for all students. Use the reading lists as a starting point for stocking your class library or a student reading list for the current school year. Encourage students to complete the reading list and return to Mensa for a free t-shirt. Incorporate the lesson plans into your existing curriculum, then differentiate learning as you adapt to student needs. For example, use the Book Review Writing lesson to help students understand the difference between reviews and reports. This lesson also includes specific information on what to include in book reports. Begin by teaching this lesson in small groups, then use Google Jamboard, reviewed here, to create a frame for each of the main topics. Enhance student learning by asking students to add sticky notes with their observations and thoughts. Have your group work together to share their book review using a simple-to-use blogging tool such as Telegraph, reviewed here. Extend learning further by creating a class podcast sharing book reviews created through the lesson process found on Mensa for Kids. Buzzsprout, reviewed here, is a free tool for creating and publishing podcasts that is appropriate for students of all ages. Use Buzzsprout to record and share book reviews throughout the school year.
Grades K to 8
In the Classroom: Save this site for use as an entire curriculum, or use the materials to supplement your current resources. Use the materials to differentiate learning activities for your students. Provide students additional support using content found at lower grade levels or challenge gifted students with materials from a higher grade level. Use Duck Soup, reviewed here, as an alternative to printed assignments and convert any page into an e-sheet gradable activity.
Grades K to 12
In the Classroom: Use Whiteboard.chat to collaborate with students to share and organize information instantly. This tool even allows educators to auto-correct all boards with a single click! Use the PDF document feature to differentiate instruction with groups of students or individuals. Use the breakout feature to conduct small group meetings or provide personalized instruction to individual students. Allow students to create collaborative drawings as responses to literature. They can map out the plot or themes, add labels, create character studies, and more. Have a group of students create a drawing so that another group can use it as a writing prompt. Use Whiteboard.chat as a brainstorming or sketching space as groups (or the class) share ideas for a major project or for solving a real-world problem. Use this site in a computer lab (or on laptops) to draw the setting in a story as it is read aloud. As an assessment idea, have students draw out a simple cartoon with stick figures to explain a more complex process, such as how democracy works. If you are lucky enough to teach in a BYOD setting, have a blended classroom, or are distance teaching, use Whiteboard.chat to demonstrate and illustrate any concept while students use the chat and drawing tools to interact in real-time. If you are studying weather, have students diagram the layers of the atmosphere and what happens during a thunderstorm, for example. Introduce this tool to students who are working on group projects. Alternatively, have students use this to work as partners or as a small team within a breakout area to complete complex math problems or equations.
Grades 4 to 12
In the Classroom: Share this site with students and provide time for them to explore on their own. Ask them to share their findings and observations using sticky notes posted to a collaborative Google Jamboard, reviewed here. Enhance student learning using Newsela, reviewed here, to assign texts and articles related to glaciers and climate change. Use Newsela's teaching tools to assign writing prompts and quizzes within any shared articles. Differentiate instruction with Newsela by choosing texts that match the different reading and comprehension levels of your students. Extend learning by asking individuals or groups of students to use Juxtapose, reviewed here, to create a before and after image to demonstrate changes of ice formations over time. Be sure to follow the tips and tricks found on Juxtapose as your students build their interactive images.
Grades K to 12
In the Classroom: TeacherMade is perfect for use in several teaching and learning situations, including blended learning, remote teaching, and differentiated instruction. Upload work assignments and create copies to differentiate activities and scoring options. Use this site to create interactive assignments for students to complete at home or during computer center activities. TeacherMade provides many options for helping and enhancing learning for individual students, for homework, or as a temporary option for providing instruction to home-bound or remote learning students. Have students upload completed assignments of their choosing to an online portfolio creation tool like kudosWall, reviewed here. Use kudosWall to help students build their work resumes, including reflections on their creative process and personal growth.
Grades 1 to 12
In the Classroom: Discover and use Blooket's many engaging games as a resource for practicing and reviewing information within any area of content. Use the score results to provide feedback for guiding further lessons. Some games are more fast-paced than others; use this to your advantage by sharing different versions for different groups of students. Use Blooket to differentiate instruction by adjusting the difficulty of question sets based on student abilities. Introduce new content using Blooket as a pre-assessment before starting any new unit. Use Blooket as an ice-breaker or get-to-know-you activity at the start of the school year or at the beginning of a new semester to build comradery within your classroom.
Grades 9 to 12
In the Classroom: Take advantage of this free textbook to use for your American History curriculum or supplement your current teaching materials. Pick and choose text, source materials, or assessment information to enhance your curriculum. This text is a perfect addition for schools lacking up-to-date content or for use with distance learning. Use a curation tool such as Padlet, reviewed here, to organize and share materials with students. Use the shelf option to create categories and organize them by videos, articles, primary source documents, etc., to make information easily accessible by your students. Encourage students to share their understanding of the content by creating videos, flyers, graphic images, and more using the tools found at Canva Edu, reviewed here. Use the text-to-speech option to differentiate learning for students with disabilities and English Language Learners.
Grades 6 to 12
In the Classroom: Take advantage of the free lesson plan that accompanies the videos on this playlist as part of your American History and WWII lessons. Consider sharing a video at the start of a lesson to engage students in learning about the personal toll of discriminatory policies during the war. Use a discussion tool such as Answer Garden, reviewed here, to gather student responses and create word clouds to encourage classroom discussion. Add videos from the playlist to other activities within a teacher utility such as TES Teach Blendspace, reviewed here. Use Blendspace to add additional reading activities, quizzes, and more content to deliver lessons for distance learning or as a tool for self-paced learning. Easily differentiate learning by copying your original Blendspace lesson, then modifying activities based upon student needs. Extend learning by having students share their understanding of internment camps through presentations using Sway, reviewed here, that include student writing responses, images, videos, and more. Another option is to offer students the choice of building an interactive timeline using History in Motion, reviewed here, which offers users the option to include maps, add events, include source materials, and more.
Grades K to 6
In the Classroom: Use Boddle to differentiate math instruction for your students. Include a link to Boddle on classroom computers for use during math centers, and provide a link on your classroom website for your blended classroom or for students to play at home. Share with parents to create individual accounts for use at home. Include Boddle with other online math programs using a bookmarking tool such as Symbaloo, reviewed here, for easy access by students.
Grades 3 to 12
tag(s): agriculture (44), climate (78), climate change (80), design (84), forests (27), oceans (135), recycling (46), remote learning (53), solar energy (33), STEM (225), Teacher Utilities (127), water (94)
Imagine a beautifully organized kitchen, with vibrant fruits and vegetables neatly arranged in a bowl, and freshly baked bread cooling on the counter. The air is filled with the aroma of delicious meals being prepared. But behind this picture-perfect scene lies a hidden danger – the risk of cross-contamination. It only takes a single mistake to turn this culinary paradise into a breeding ground for harmful bacteria.
That’s why it’s crucial to understand the importance of proper food storage techniques. One of the most effective ways to prevent contamination is by storing raw food away from other food. By doing so, we can minimize the risk of harmful bacteria spreading and ensure the safety of our meals.
In this article, I will delve into the reasons why this simple yet powerful practice is essential for maintaining food safety. Let’s explore the world of cross-contamination and discover the preventive measures we can take to keep our kitchens safe and our meals delicious.
- Storing raw food away from other food prevents cross-contamination.
- Proper storage techniques include using sealed containers or plastic bags for raw meat, poultry, and seafood.
- Keeping raw foods on the bottom shelf of the refrigerator minimizes the risk of leaks onto other foods.
- Using separate cutting boards, utensils, and plates for raw and cooked foods helps prevent the spread of harmful bacteria.
The Importance of Food Safety Practices
You should always remember to store raw food away from other food in order to prevent any kind of contamination. This is one of the most important food safety regulations and best practices.
Keeping raw food separate from other food items is crucial to avoid the transfer of harmful bacteria and pathogens. Cross-contamination can occur when raw food comes into contact with ready-to-eat food or surfaces used for food preparation. By storing raw food separately, you minimize the risk of spreading bacteria and protect the safety of your meals.
Food safety regulations dictate that raw meat, poultry, and seafood should be stored in sealed containers or wrapped securely to prevent any juice or drippings from contaminating other food items. It is recommended to store raw food on the lowest shelves of the refrigerator to prevent any potential leaks from dripping onto other foods. Additionally, storing raw food separately reduces the chances of cross-contamination during transportation and handling.
Understanding cross-contamination is essential to maintaining food safety. Cross-contamination occurs when bacteria from one food item spreads to another, either directly or indirectly. By storing raw food away from other food, you minimize the risk of cross-contamination and ensure the safety of your meals.
Understanding Cross-Contamination

Understanding cross-contamination is crucial for maintaining food safety. Cross-contamination occurs when harmful bacteria or other microorganisms are transferred from one surface or food to another, leading to the potential spread of foodborne illnesses.
The main causes of cross-contamination are improper handling of raw food, inadequate cleaning and sanitizing of kitchen surfaces, and using the same utensils or cutting boards for different types of food.
The potential risks and consequences of cross-contamination can be serious, as it can result in foodborne illnesses that can lead to hospitalization or even death, especially for vulnerable populations such as young children, pregnant women, and the elderly.
Definition and Causes
To grasp the concept and causes of food contamination, imagine a pantry where raw food is kept at arm’s length from other food items, acting as a shield against the potential invasion of harmful bacteria. Here are three important aspects to consider:
Proper storage techniques: Storing raw meat, poultry, fresh produce, raw eggs, and kitchen tools and equipment separately helps prevent cross-contamination.
Labeling and organization: Properly labeling food storage containers and using the FIFO (first in, first out) method when organizing the refrigerator and pantry can minimize the risk of contamination.
Safe handling and leftovers: Educating others about safe handling practices and ensuring leftovers are stored promptly and at the correct temperature contribute to food safety.
Understanding these preventive measures is crucial in mitigating potential risks and consequences associated with foodborne illnesses.
Transitioning into the subsequent section, it’s important to explore the potential risks and consequences in more detail.
Potential Risks and Consequences
Be aware of the potential risks and consequences that can arise from food contamination. When raw food is not stored properly and away from other food, it can lead to potential health hazards.
Cross-contamination can occur, where harmful bacteria from raw food can transfer to other items, such as fruits and vegetables, leading to foodborne illnesses. These illnesses can range from mild stomach discomfort to more severe symptoms, such as vomiting and diarrhea.
To prevent such risks, it’s important to follow preventive strategies, such as storing raw food separately and using separate cutting boards and utensils for raw and cooked foods. By practicing proper storage techniques, you can minimize the chances of contamination and ensure the safety of your meals.
Proper Storage Techniques
Keep your raw food stored separately from other items like a fortress protecting your precious groceries, warding off the lurking dangers of cross-contamination. To ensure the safety of your food, it’s crucial to follow proper storage techniques.
First and foremost, maintaining the proper temperature is essential. Raw food should be stored at temperatures below 40°F (4°C) to prevent the growth of harmful bacteria.
Additionally, proper packaging is key. Store raw food in leak-proof containers or wrap them tightly in plastic wrap to avoid any potential leakage that could contaminate other items.
Furthermore, it’s important to take into account the specific needs of different types of raw food. For example, raw meat should be stored on the bottom shelf of the refrigerator to prevent any juices from dripping onto other items. Similarly, raw seafood should be stored in a separate container to avoid any potential cross-contamination with other food items.
By following these proper storage techniques, you can greatly reduce the risk of cross-contamination and ensure the safety of your food.
In the next section, we’ll discuss preventive measures in the kitchen to further enhance food safety.
Preventive Measures in the Kitchen
Get creative in the kitchen and implement some simple steps to make sure your meals are safe and delicious. One important aspect of food safety is proper cleaning and sanitizing procedures. By following these practices, you can prevent contamination and keep your kitchen a safe environment for preparing meals.
To start, always remember to wash your hands thoroughly before handling any food. This will help remove any bacteria or germs that may be present on your hands.
Additionally, regularly clean and sanitize your kitchen surfaces, utensils, and cutting boards. This will eliminate any potential sources of contamination.
When it comes to storing raw food, it is crucial to keep it separate from other food items. This prevents cross-contamination, where harmful bacteria from raw food can spread to other foods and cause illness. Store raw meat, poultry, and seafood in sealed containers or plastic bags to prevent any juices from leaking and contaminating other foods.
By implementing these preventive measures, you can ensure that your meals are safe and free from any harmful contaminants.
In the subsequent section, we will explore common sources of cross-contamination and how to avoid them. Remember, food safety starts with proper cleaning and sanitizing procedures.
Common Sources of Cross-Contamination
When it comes to preventing cross-contamination in the kitchen, there are a few common sources that need to be addressed.
First, raw meat and poultry can easily contaminate other foods if they’re not handled properly.
Second, fresh produce, such as fruits and vegetables, can also be a source of cross-contamination if they’re not washed thoroughly.
Lastly, kitchen tools and equipment, such as cutting boards and knives, can harbor bacteria if they’re not cleaned and sanitized properly.
It’s important to be vigilant in these areas to ensure food safety.
Raw Meat and Poultry
Storing raw meat and poultry separately from other food items is a crucial measure to avoid cross-contamination, ensuring that harmful bacteria doesn’t spread and contaminate other ingredients. Did you know that a single drop of raw chicken juice can contain millions of bacteria?
To ensure safe handling, it’s essential to follow these guidelines:
- Store raw meat and poultry in sealed containers or leak-proof bags to prevent any drips or spills.
- Keep raw meat and poultry on the bottom shelf of the refrigerator to prevent any potential leaks from contaminating other foods.
- Use separate cutting boards, utensils, and plates for raw meat and poultry to avoid cross-contamination during preparation.
Proper cooking is another important step in preventing foodborne illnesses. It’s crucial to cook raw meat and poultry to the recommended internal temperatures to kill any harmful bacteria.
Now let’s move on to the next section about fresh produce and raw eggs.
Fresh Produce and Raw Eggs
Fresh produce and raw eggs should always be handled and stored separately from meat and poultry to minimize the risk of cross-contamination.
When it comes to fresh produce handling, it’s important to wash fruits and vegetables thoroughly under running water to remove any dirt or bacteria. Additionally, cutting boards and utensils used for handling raw produce should be cleaned and sanitized before using them for other foods.
As for egg safety, it’s crucial to store eggs in the refrigerator at or below 40°F to prevent the growth of harmful bacteria like Salmonella. It’s also recommended to cook eggs thoroughly to kill any potential pathogens.
Proper handling and storage of fresh produce and raw eggs play a vital role in ensuring food safety.
Moving on to kitchen tools and equipment, it’s essential to keep them clean and in good condition to prevent contamination.
Kitchen Tools and Equipment
Make sure you keep your kitchen tools and equipment clean and in good condition to ensure a safe and hygienic cooking environment. Proper cleaning of kitchen tools is crucial for kitchen sanitation.
Regularly wash and sanitize cutting boards, knives, utensils, and other equipment to prevent cross-contamination between different foods. Use hot soapy water and a scrub brush to remove any visible dirt or residue from the tools. Then, sanitize them by soaking in a solution of one tablespoon of bleach mixed with one gallon of water. Rinse thoroughly and air dry before using again.
Remember to also check for any damaged or worn-out tools that may harbor bacteria or pose a safety hazard. By practicing proper cleaning and maintenance of kitchen tools, you can reduce the risk of foodborne illnesses.
Transitioning into the next section, understanding foodborne illnesses is essential for maintaining a safe kitchen environment.
Understanding Foodborne Illnesses
To keep yourself safe from foodborne illnesses, it’s important to store raw food separately from other food items. This helps prevent cross-contamination, which is when bacteria from raw foods spread to other foods and cause illness.
When different types of food are stored together, the juices from raw meat, poultry, or seafood can drip onto other foods, such as fruits and vegetables, leading to the transfer of harmful pathogens.
To ensure proper foodborne illness prevention, follow these guidelines:
- Store raw meat, poultry, and seafood in sealed containers or plastic bags to prevent their juices from leaking onto other foods.
- Keep raw foods on the bottom shelf of the refrigerator, away from cooked or ready-to-eat foods, to prevent any potential drips or spills from contaminating other items.
- Use separate cutting boards, utensils, and plates for raw and cooked foods to avoid cross-contamination during food preparation.
By following these practices, you can reduce the risk of foodborne illnesses caused by common pathogens like salmonella, E. coli, and listeria.
Next, let’s explore the importance of labeling and organization in maintaining food safety and preventing cross-contamination.
Importance of Labeling and Organization
Properly labeling food storage containers is crucial to maintaining food safety. By clearly indicating the contents and expiration dates on each container, it becomes easier to identify and use items in a timely manner. Additionally, following the FIFO (First-In, First-Out) method ensures that older products are used before newer ones, reducing the risk of food spoilage.
Lastly, organizing the refrigerator and pantry allows for easy access to items, minimizing the chances of cross-contamination and ensuring that perishable foods are stored at the correct temperatures.
Properly Labeling Food Storage Containers
Labeling food storage containers ensures that raw food is kept separate from other food, preventing any potential contamination. With each label acting as a culinary sentinel, the risk of cross-contamination is thwarted, allowing for a harmonious coexistence of flavors and ingredients.
To properly label food storage containers, there are a few key practices to follow:
Use clear and concise labels: Clearly state the contents and date of storage on each container to easily identify the food and ensure it’s used within a safe timeframe.
Store similar items together: Grouping similar foods together makes it easier to locate specific ingredients and reduces the chances of accidentally using the wrong item.
Regularly check and update labels: As foods are used or new ones are added, it’s important to update the labels accordingly to maintain accurate information.
By following these proper storage practices and labeling food storage containers, we can effectively prevent cross-contamination and ensure the safety of our food.
Now, let’s delve into the next section about the FIFO (first-in, first-out) method.
FIFO (First-In, First-Out) Method
Make sure you’re using the FIFO method to keep your ingredients fresh and minimize waste.
The FIFO method, which stands for First-In, First-Out, is a simple but effective way to ensure that the oldest items in your food storage are used first. This method involves organizing your ingredients so that the ones with the closest expiration dates are in the front, while the newer ones are stored in the back. By doing this, you can prevent food from spoiling and avoid unnecessary waste.
Implementing the FIFO method also helps to maintain the quality of your ingredients, as you’re constantly using the oldest ones before they expire.
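The FIFO rotation described above is the same first-in, first-out discipline used for queues in software. As a loose illustration only (the pantry model and function names below are my own, not taken from any food-safety guideline), a pantry can be modeled as a queue where new purchases go to the back and the oldest item is always used first:

```python
from collections import deque
from datetime import date

# Model the pantry as a FIFO queue: items are added at the back,
# and the oldest item is always consumed from the front.
pantry = deque()

def stock(item, expires):
    """Add a newly purchased item behind the existing stock."""
    pantry.append((item, expires))

def use_next():
    """Consume the oldest item first (First-In, First-Out)."""
    return pantry.popleft()

stock("milk (old)", date(2024, 3, 1))
stock("milk (new)", date(2024, 3, 15))

item, expires = use_next()
print(item)  # -> milk (old): the earlier purchase is used first
```

The point of the sketch is simply that rotation falls out of the structure: as long as new stock always goes behind old stock, "use the front item" automatically means "use the oldest item."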
Once you’ve mastered the FIFO method and food rotation, you can move on to organizing your refrigerator and pantry seamlessly.
Organizing Refrigerator and Pantry
Now that we’ve discussed the FIFO (First-In, First-Out) method, let’s turn our attention to organizing our refrigerator and pantry for safe food handling.
It’s important to keep raw food separate from other food items to prevent cross-contamination. Storing raw food away from other food is an example of proper sanitation. By doing so, we reduce the risk of harmful bacteria spreading and causing foodborne illnesses.
When organizing your refrigerator, make sure to keep raw meat, poultry, and seafood on the bottom shelf, away from ready-to-eat foods. Use separate containers or bags to store raw food in the pantry, keeping them away from other packaged items.
Maintaining this organization will ensure that your food stays safe and prevent any potential contamination.
Speaking of safe handling, let’s now move on to the next section about the proper handling of leftovers.
Safe Handling of Leftovers
Storing raw food separately from other food is like building a fortress to protect your leftovers from the sneaky bacteria invaders. Safe handling is crucial when it comes to preventing contamination and ensuring the safety of your leftovers.
Leftovers can easily become a breeding ground for harmful bacteria if not handled properly. To safely handle leftovers, it’s important to follow a few simple guidelines.
First, make sure to refrigerate leftovers promptly after they’ve cooled down. Bacteria thrive in warm temperatures, so keeping them chilled slows down their growth. Additionally, it’s important to store leftovers in airtight containers to prevent cross-contamination with other foods in the refrigerator. This will help maintain the integrity of your leftovers and prevent any potential contamination.
Furthermore, when reheating leftovers, always make sure they reach a safe internal temperature of 165°F (74°C) to kill any bacteria that may be present. This can be easily done using a food thermometer. Lastly, it’s essential to consume leftovers within a few days to minimize the risk of foodborne illness.
By following these safe handling practices, you can protect yourself and your loved ones from foodborne illnesses caused by contaminated leftovers. Educating others on food safety is important to ensure everyone understands the importance of safe handling and preventing contamination.
Transitioning into the next section, it’s crucial to spread awareness and share these essential food safety practices with others.
Educating Others on Food Safety
Take a moment to educate those around you about the importance of food safety to ensure their well-being and protect them from the potential dangers lurking in their leftovers. Food safety education is crucial in promoting hygiene practices and preventing foodborne illnesses. Here are three key points to convey to others:
Proper storage: Teach them the significance of storing raw food away from other food to prevent cross-contamination. This simple step can help avoid the spread of harmful bacteria and keep their meals safe to consume.
Temperature control: Emphasize the importance of keeping hot foods hot and cold foods cold. Explain that maintaining proper temperatures can inhibit the growth of bacteria, reducing the risk of foodborne illnesses.
Hand hygiene: Educate them about the necessity of washing hands thoroughly before handling food. Stress that proper handwashing can eliminate harmful bacteria and prevent its transfer onto food, ensuring safe consumption.
By sharing this knowledge, we can all contribute to a safer food environment for ourselves and our loved ones.
Transitioning into the next section about continuous improvement and adaptation, it’s important to remember that food safety practices are constantly evolving.
Continuous Improvement and Adaptation
Ensure the continuous improvement and adaptation of your food safety practices by staying informed about the latest advancements and guidelines in order to protect yourself and others from potential risks. Continuous improvement refers to the ongoing effort to enhance and refine your food safety practices based on new information and best practices. By staying up to date with the latest advancements, you can identify areas for improvement and implement changes to prevent foodborne illnesses.
Adaptation is crucial in the ever-evolving field of food safety. As new risks emerge and technologies advance, it is important to adapt your practices accordingly. This involves being open to change, embracing new methods, and adjusting your approach as needed. By remaining adaptable, you can address emerging risks and ensure that your food safety practices remain effective.
To visually represent the ideas of continuous improvement and adaptation, I have created a table:
|Continuous Improvement|Adaptation|
|---|---|
|Stay informed about the latest advancements and guidelines|Be open to change and embrace new methods|
|Identify areas for improvement and implement changes|Adjust your approach to address emerging risks|
|Refine your food safety practices based on new information|Ensure that your practices remain effective|
By incorporating continuous improvement and adaptation into your food safety practices, you can enhance your ability to prevent contamination and protect the health and well-being of yourself and others. Stay informed, be open to change, and continually refine your practices to stay ahead of potential risks.
Frequently Asked Questions
What are the potential consequences of not storing raw food away from other food?
Not storing raw food away from other food can lead to potential health risks and increase the likelihood of foodborne illnesses. Cross-contamination may occur, where bacteria from raw food can transfer to other foods, causing contamination and potential illness when consumed. This can result in symptoms such as nausea, vomiting, diarrhea, and even more serious complications.
It’s essential to store raw food separately to minimize the risk of contamination and ensure food safety.
Can cross-contamination occur even if raw food is stored separately from other food?
Yes, cross-contamination can still occur even if raw food is stored separately from other food. This happens when there’s improper handling or transfer of the raw food. For example, if I use the same cutting board or knife for both raw meat and ready-to-eat food without properly cleaning them in between, bacteria from the raw food can contaminate the other food.
It’s important to practice good food safety measures to prevent cross-contamination.
Are there any specific types of raw food that are more prone to causing cross-contamination?
Certain types of raw meat are more prone to causing cross-contamination, posing risks to food safety. Raw poultry, such as chicken and turkey, is known to be a major source of bacterial contamination, including pathogens like Salmonella and Campylobacter.
Ground meats, such as beef or pork, also carry a higher risk due to the increased surface area exposed to potential contaminants. It is crucial to handle and store these raw meats properly to prevent the spread of harmful bacteria and ensure food safety.
How can proper storage techniques help prevent cross-contamination?
Proper storage techniques are crucial in preventing cross-contamination. One important aspect is proper labeling, which helps identify and separate different types of raw food. This prevents the spread of harmful bacteria from one food item to another.
Additionally, best practices for storing raw food in a refrigerator include keeping them in sealed containers or bags, placing them on lower shelves to prevent drips onto other foods, and maintaining appropriate temperature conditions.
Following these guidelines ensures the safety and quality of our food.
Is there a specific distance or separation required between raw food and other food to prevent cross-contamination?
To prevent cross-contamination, it’s important to maintain a specific distance and separation between raw food and other food. While there aren’t any set rules regarding the exact measurements, it’s recommended to store raw food in separate containers or on different shelves. This helps minimize the risk of contamination by ensuring that harmful bacteria or pathogens from raw food don’t come into contact with ready-to-eat items. By doing so, the likelihood of foodborne illnesses is reduced.
In conclusion, practicing proper food safety measures is crucial to prevent cross-contamination and ensure the health and well-being of ourselves and others.
One interesting statistic to note is that according to the Centers for Disease Control and Prevention (CDC), approximately 48 million people in the United States get sick from foodborne illnesses each year. This highlights the importance of following preventive measures, such as storing raw food away from other food, to reduce the risk of contamination and protect our health.
Remember, a little caution goes a long way in maintaining food safety.
Getting and keeping kids motivated in school is hard work. As a physical education teacher, it can feel nearly impossible. Some kids are athletic, while others don't like playing sports; some prefer competitive contact activities, while others are uncomfortable getting physical. As a PE teacher, you can offer prizes and trophies to students for completing activities, but these external motivators will quickly lose their appeal, and you'll be left trying to find more rewards just to keep your students interested.
The key to getting and keeping your students motivated in PE is by developing their intrinsic motivation. Intrinsic motivation is the pleasure students get from engaging in or completing an activity. To help get you started, we have put together four strategies to build your students’ intrinsic motivation. While the below tips are intended for PE teachers, they are applicable for all subjects and can easily be altered to develop students’ intrinsic motivation in math, ELA, science, and beyond.
1. Develop activities that build on students’ interests
The first step is getting to know your students. You don’t always need to rely on competitive team sports in your PE instruction. If students like to dance, design a step or cultural-dancing unit. If you want to develop their collaboration skills in the process, work in team building exercises through partner and group dancing. This link offers strategies to learn about your students’ interests.
2. Increase opportunities for self-directed learning
Let students take ownership of their learning by allowing them to choose their personal goals (e.g., 4 sets of 25 pushups vs. 100 at once), and offer options of how students can demonstrate knowledge of a task or acquisition of a skill. The examples of self-directed, student-centered learning on this link can easily be modified for your PE instruction.
3. Use task progressions
Before diving into complex tasks, which will likely intimidate and discourage some of your students, start with simple forms of a skill, so students can build self-efficacy and ability in a non-judgmental way. For example, when introducing students to softball, teach them the fundamentals of throwing and catching, swinging a bat, reading a pitch, running the bases, fielding a grounder, and tracking a ball before engaging them in a game.
4. Set up activities that promote success
Don’t set your students up for failure by creating unattainable goals like running a six-minute mile. Instead, provide activities that they can accomplish with hard work. Ask an athletically gifted student to model the task so students know it is possible. Then, modify the requirements of the activity based on students’ strengths and weaknesses. When students succeed in an appropriately challenging task they will be proud of their performance, which can lead to more interest and a willingness to take on more challenging work.
PE is meant to engage and motivate students in fun, authentic physical activity in order to promote a healthy and active lifestyle! |
Reading Main Idea Worksheets For Free. All of these are different ways of asking students to find the main idea of a text. Students are asked to identify the main ideas and supporting details of short texts.

Worksheets include: grade 6 main idea, main idea details, main idea reading work, main ideas, and identifying main idea and supporting details. This is a pivotal skill at all levels. Read each passage and find the main idea.
Theorem: If A and B are two independent events, then the probability that both will occur is equal to the product of their individual probabilities.
Proof: Suppose that
event A can happen in n1 ways, of which p are successful, and
event B can happen in n2 ways, of which q are successful.
Now, combine each successful case of A with each successful case of B.
Thus, the total number of successful cases = p × q.
We have, total number of cases = n1 × n2.
Therefore, from the definition of probability,
P(A and B) = P(A∩B) = (p × q) / (n1 × n2) = (p/n1) × (q/n2)
Since P(A) = p/n1 and P(B) = q/n2, it follows that
P(A∩B) = P(A) × P(B).
If there are three independent events A, B and C, then
P(A∩B∩C) = P(A) × P(B) × P(C).
In general, if there are n independent events A1, A2, …, An, then
P(A1∩A2∩…∩An) = P(A1) × P(A2) × … × P(An).
Example: A bag contains 5 green and 7 red balls. Two balls are drawn. Find the probability that one is green and the other is red.

Solution: P(A) = P(a green ball) = 5/12.
P(B) = P(a red ball, after a green ball has been removed) = 7/11.
By the Multiplication Theorem,
P(A and B) = P(A) × P(B) = (5/12) × (7/11) = 35/132.
Since the green ball may be drawn either first or second, the required probability is 2 × 35/132 = 35/66.
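As a sanity check on the arithmetic (this simulation is an added illustration, not part of the original notes), the two-ball draw can be simulated and compared with the exact value 35/66, counting both orders in which the green and red ball may appear:

```python
import random
from fractions import Fraction

# Exact value from the multiplication theorem, counting both orders:
# P = 2 * (5/12) * (7/11) = 35/66
exact = 2 * Fraction(5, 12) * Fraction(7, 11)

bag = ["green"] * 5 + ["red"] * 7
trials = 200_000
hits = sum(
    1
    for _ in range(trials)
    if set(random.sample(bag, 2)) == {"green", "red"}
)
estimate = hits / trials

print(exact)               # 35/66
print(round(estimate, 2))  # close to 0.53 (randomized)
```

With 200,000 trials the estimate should land within a percentage point or so of 35/66 ≈ 0.5303.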
- Shifts the bits of a number to the right.
- The number.
- The number of bits to shift.
The shr() function takes a number and a bit count, and returns the result of shifting the bits of the number to the right by that count.
Numbers in PICO-8 are stored using a 32-bit fixed point format, with 16 bits for the integer portion, 16 bits for the fractional portion, and a Two's Complement representation for negative and positive values. Bit shifting uses the entire number representation. (See examples below.)
shr() performs an "arithmetic shift," which means that the sign of the number is preserved. Arithmetic right shift preserves the highest bit while also shifting a copy of it to the right, which effectively preserves the sign in Two's Complement representation.
The alternative to arithmetic shift is "logical shift." See lshr().
Superseded by >> operator

The >> operator, added in PICO-8 0.2.0, performs the same function as shr() and is now the recommended way to shift bits right, as it uses fewer tokens, costs fewer cycles at runtime, and runs on the real host CPU much more efficiently. Simply replace shr(x, n) with x >> n.
-- 8 = 0b00001000 binary
-- 1 = 0b00000001 binary
print(shr(8, 3))  -- 1

-- 1.000 = 0b0001.0000 binary
-- 0.125 = 0b0000.0010 binary
print(shr(1, 3))  -- 0.125

-- -1.000 = 0b1111111111111111.0 binary (two's complement)
-- -0.125 = 0b1111111111111111.111 binary
print(shr(-1, 3))  -- -0.125

print(8 >> 3)  -- 1, preferred method
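For readers outside the PICO-8 console, the arithmetic right shift on the 16.16 two's-complement fixed-point format can be sketched in ordinary Python. This is my own approximation of the behavior described above, not official PICO-8 code:

```python
MASK = 0xFFFFFFFF  # 32-bit word
ONE = 1 << 16      # 16.16 fixed point: 1.0 is stored as 0x00010000

def to_fixed(x):
    """Encode a number as 16.16 two's-complement fixed point."""
    return int(round(x * ONE)) & MASK

def from_fixed(bits):
    """Decode 16.16 bits, interpreting the top bit as the sign."""
    if bits & 0x80000000:
        bits -= 1 << 32
    return bits / ONE

def shr(x, n):
    """Arithmetic shift right: the sign bit is replicated into the gap."""
    bits = to_fixed(x)
    if bits & 0x80000000:   # negative: sign-extend above bit 31 so that
        bits |= ~MASK       # Python's >> copies ones in from the left
    return from_fixed((bits >> n) & MASK)

print(shr(8, 3))   # 1.0
print(shr(1, 3))   # 0.125
print(shr(-1, 3))  # -0.125
```

The negative case shows why this is an arithmetic rather than logical shift: the high bits stay set, so -1 shifted right by 3 gives -0.125 instead of a large positive value.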
A coalition government is usually formed in a parliamentary system when no party wins a majority after an election. In this case, multiple political parties agree to support one another and form a government together, jointly making up the majority that no single party holds. A coalition government can also be formed when a country experiences a crisis such as civil war or an economic collapse. In such cases, legitimacy is derived from the need for stability and the preference for unity over political strife.
Coalition Governments Around The World
Several countries in the world have been ruled by a coalition government in the past or currently have one governing. Examples include Latvia, Lebanon, Greece, India, Italy, Japan, Ireland, Israel, Kenya, Thailand, Trinidad and Tobago, and Ukraine. The United Kingdom has also operated under coalition government: the Liberal Democrats and the Conservative Party famously joined forces to form a government in 2010.
Coalition governments in the United Kingdom have been formed in response to a national crisis or a hung parliament, in which no single party holds a majority. In the last 120 years, the United Kingdom has seen six coalition governments, chiefly involving the Conservative and Liberal parties. The most well-known coalition government in the United Kingdom was the wartime coalition that governed during World War II, from 1940 to 1945.
Coalition governments in Germany have been a regular occurrence, since its political parties generally fail to reach a majority in national elections. The major political parties that have been part of these coalitions are the Christian Social Union (CSU) in Bavaria, the Christian Democratic Union (CDU) of Germany, and the Social Democratic Party (SPD) of Germany. For instance, between 1998 and 2005, the Social Democratic Party (SPD) and Alliance 90/The Greens formed a coalition government. German coalition governments have historically rarely been formed by more than two parties.
Advantages of Coalition Governments
The most significant advantage of a coalition government is that it can provide good governance. The government makes decisions that reflect the interests of more than one party, and therefore of a larger portion of the electorate. Coalition partners generally debate until a consensus is formed about the direction of particular policies. A government in which a single party forms the majority is more likely to implement policies that are not widely accepted, and that therefore might not be in the interest of the majority across the nation.
Disadvantages of Coalition Governments
Ideology is fundamental for navigating difficult economic and political matters, and coalition governments often lack a unifying philosophy, leaving them unable to offer long-term solutions to political disagreements. Coalition governments may also be unstable, since they require renegotiation at frequent intervals, which makes it difficult for such a government to push through major reforms. Historically, coalition governments have also taken a long time to form, resulting in a less efficient governance system.
Accessible architecture, also called inclusive architecture or inclusive design, describes how an architect approaches a project in anticipation of future users' needs. It is the design of an environment so that it can be accessed and used by as many people as possible, regardless of age, gender, or disability; in other words, places that are inclusive.
Nowadays, architects, designers, planners, engineers, consultants, and technical specialists should make an effort to create environments that encourage social interaction, integration, communication, and respect: places that celebrate diversity and difference. An inclusively designed environment is not just relevant to buildings; it also applies to surrounding and other open spaces, wherever people go about their everyday activities, including shops, offices, hospitals, leisure facilities, parks, and streets.
Inclusive design always keeps the diversity and uniqueness of each individual in mind. Moreover, accessible architecture is not concerned only with people with reduced mobility or disabilities; it also takes into account obese people, children, the elderly, and pregnant women.
In addition, it is very important that built-environment professionals include potential users at all stages of the design process. The people who inspire this way of designing should be involved from the design brief and detailed design through to construction and completion. Wherever possible, it is important to involve disabled people in the design process.
The Principles of Inclusive Design
Now let’s talk about the principles of inclusive design as it relates to the built environment:
-Inclusive – so everyone can use it safely, easily and with dignity.
-Responsive – taking account of what people say they need and want.
-Flexible – so different people can use it in different ways.
-Convenient – so everyone can use it without too much effort or separation.
-Accommodating for all people, regardless of their age, gender, mobility, ethnicity or circumstances.
-Welcoming – with no disabling barriers that might exclude some people.
-Realistic – offering more than one solution to help balance everyone’s needs and recognizing that one solution may not work for all.
The importance of this type of architecture lies in the architect's attention to accessibility, which gives people greater safety and physical security so that they can enjoy spaces without limitations. Architecture thus becomes something that enables a world where everyone can participate equally. It is important that the people involved in the process embrace and celebrate diversity, and see it as a great opportunity for creativity.
Benefits of Inclusive Design
There are many benefits that can be obtained thanks to inclusive architecture and its positive impact on society; some of them are the following:
– Independent Living: Inclusive Design ensures that disabled people are not forced out of their community and are encouraged to live an independent life.
– Aging Population: By designing environments to be inclusive this can ensure that older generations can stay as active members of their communities.
– Social Inclusion: Social inclusion enables disabled people to fully participate in society. An environment that is designed to be inclusive promotes equality and makes life easier and safer for everyone.
Regardless of the circumstance, all people deserve the same opportunities to participate in society. Inclusive architecture becomes something that enables a world where everyone can participate equally. It can create solutions that are inclusive and usable by all people no matter their diversity. It has been said that design enlightens and improves the quality of life and great design is something that should be available to all sectors of society. |
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is responsible for the highly contagious disease called coronavirus disease 2019 (COVID-19). This novel virus was first detected in the city of Wuhan, China, in December 2019. The origin of the virus has been attributed to the Huanan "wet" market, which sold live animals.
The virus causes a zoonotic disease, with Chinese horseshoe bats suspected as the initial hosts before it was transmitted to humans. Since its identification in December 2019, the disease has spread to 185 countries around the world, including Nigeria and other sub-Saharan countries. An estimated 2.3 million people have confirmed coronavirus infections, and about 153,379 people have died from the disease as of April 17th, 2020. Nigeria has about 493 confirmed cases and 17 deaths. The hardest-hit countries are currently all in the Western world, the top five being the United States (leading), followed by Spain, Italy, France, and Germany. The World Health Organization (WHO) declared it a pandemic in March 2020.
While there have been prior severe respiratory viral outbreaks, such as SARS (severe acute respiratory syndrome) in 2003 and MERS-CoV (Middle East respiratory syndrome) in 2012, this coronavirus is by far the most contagious of the three and has resulted in far greater numbers of infections and deaths. Similar to other respiratory viruses that can cause "catarrh" or the common cold, this virus spreads from an infected person to a healthy person via droplets expelled into the air when a person coughs or sneezes.
These viral droplets can travel up to 1.8 meters (six feet). Another major form of transmission is fomite transmission, which can occur when a healthy person touches a surface already touched by an infected person and then touches their own nose, mouth, or eyes. Interestingly, this virus can survive on surfaces for a long time: up to 24 hours on cardboard and up to 72 hours on plastic and stainless steel. Recently, the medical community has discovered that asymptomatic infected people can still spread the virus through normal breathing.
The most common symptom is fever, present in about 90% of infected persons, followed by cough, tiredness, muscle aches (myalgias), and shortness of breath. Additional symptoms include loss of smell or taste. About 81% of infected persons will have mild symptoms that do not require hospitalization, and their symptoms will generally improve with normal cold remedies. The remaining 19% will become severely ill and require hospitalization, and about 5% of all infected persons will become critically ill and require intensive care.
Older age and underlying medical conditions such as hypertension, diabetes, and heart disease are the biggest risk factors for death. The risk of dying from the disease is less than 1% in infected persons under 50 years of age, but jumps to 8% for those aged 70-79 and 14% for those 80 or older. Unfortunately, this is a disease that kills the elderly at a very high rate, especially if they already have underlying medical conditions. This is not to say it does not affect young people: about 5% of severely ill persons are less than 20 years of age, and some have died from the disease.
The most common cause of death is respiratory failure (inability to breathe). Cardiac complications due to weakened heart function (heart failure) and dangerous heart rhythms can also occur and lead to death. Management of severely ill persons is very resource-intensive: they need ventilators to assist with breathing until the lungs have recovered. An epidemic like the one currently unfolding in Europe and the United States would be extremely devastating and catastrophic in West African countries, given the lack of properly functioning health infrastructure.
Prevention and disease containment are the best ways to currently address the coronavirus pandemic, given that there is no vaccine or research-proven curative therapy. These measures include:
- Frequent handwashing and social distancing if possible.
- Wearing a facemask; it does not have to be a medical-grade mask, and covering the face with a cloth or scarf may suffice in close-quarter areas (less than 1.8 meters from the nearest person).
- If possible, limiting going outside or being in overcrowded areas.
- If you have symptoms, cover your mouth when you cough or sneeze and stay home to avoid spreading the infection to other healthy persons.
- Increased community testing and contact tracing are ideal but may not be feasible in poorer health systems.
In addition to these physical measures to reduce the spread of coronavirus, it is also important to practice self-wellness. Remember, physical distancing does not mean social isolation: connect with loved ones via phone calls and text messages. Maintain healthy spiritual habits and focus on positive thoughts. In conclusion, the novel coronavirus has progressed to a world pandemic within a short amount of time. There are still a lot of unknowns about this virus and the long-term risks associated with it, and active research efforts are ongoing to learn more and, hopefully, to contain or completely eradicate the outbreak. Until that happens, preventive measures remain the focal means of mitigating the health risk.
· Johns Hopkins University COVID-19 dashboard (arcgis.com)
· Africa CDC COVID-19 updates
Ifeoma Onuorah Ezenekwe has been an Assistant Professor in the Division of Cardiology at Emory University School of Medicine, Atlanta, Georgia, since 2017. Prior to joining Emory University, she worked as an attending cardiologist at the WJB Dorn VA in Columbia, South Carolina, where she also held the title of Clinical Assistant Professor with the University of South Carolina School of Medicine. Dr. Onuorah Ezenekwe obtained her Bachelor of Science in Nursing from the University of Texas Medical Branch, Galveston, graduating with high honors. She completed her medical school training at the University of Texas, San Antonio, and was inducted into the Alpha Omega Alpha honor society (AOA) at graduation. She completed her Internal Medicine residency training in the prestigious Johns Hopkins Osler training program in Baltimore, Maryland, and her General Cardiology Fellowship at Thomas Jefferson University Hospital, Philadelphia, Pennsylvania.
This article by Leanne Rowlands, KESS 2 PhD Researcher in Neuropsychology at the School of Psychology, Bangor University, is republished from The Conversation under a Creative Commons license. Read the original article here.
Take the following scenario. You are nearing the end of a busy day at work, when a comment from your boss diminishes what’s left of your dwindling patience. You turn, red-faced, towards the source of your indignation. It is then that you stop, reflect, and choose not to voice your displeasure. After all, the shift is nearly over.
This may not be the most exciting plot, but it shows how we as humans can regulate our emotions.
Our regulation of emotions is not limited to stopping an outburst of anger – it means that we can manage the emotions we feel as well as how and when they are experienced and expressed. It can enable us to be positive in the face of difficult situations, or fake joy at opening a terrible birthday present. It can stop grief from crushing us and fear from stopping us in our tracks.
Because it allows us to enjoy positive emotions more and experience negative emotions less, regulation of emotions is incredibly important for our well-being. Conversely, emotional dysregulation is associated with mental health conditions and psychopathology. For example, a breakdown in emotional regulation strategies is thought to play a role in conditions such as depression, anxiety, substance misuse and personality disorders.
How to manage your emotions
By their very nature, emotions make us feel – but they also make us act. This is due to changes in our autonomic nervous system and associated hormones in the endocrine system that anticipate and support emotion-related behaviours. For example, adrenaline is released in a fearful situation to help us run away from danger.
Before an emotion arises there is first a situation, which can be external: such as a spider creeping nearer, or internal: thinking that you are not good enough. This is then attended to – we focus on the situation – before we appraise it. Put simply, the situation is evaluated in terms of the meaning it holds for ourselves. This meaning then gives rise to an emotional response.
Psychologist and researcher James Gross has described a set of five strategies that we all use to regulate our emotions and that may be used at different points in the emotion generation process:
1. Situation selection
This involves looking to the future and taking steps to make it more likely that we end up in situations that give rise to desirable emotions, or less likely that we end up in situations that lead to undesirable emotions. For example, taking a longer but quieter route home from work to avoid road rage.
2. Situation modification
This strategy might be implemented when we are already in a situation, and refers to steps that might be taken to change or improve the situation’s emotional impact, such as agreeing to disagree when a conversation gets heated.
3. Attentional deployment
Ever distracted yourself in order to face a fear? This is “attentional deployment” and can be used to direct or focus attention on different aspects of a situation, or something else entirely. Someone scared of needles thinking of happy memories during a blood test, for example.
4. Cognitive change
This is about changing how we appraise something to change how we feel about it. One particular form of cognitive change is reappraisal, which involves thinking differently or thinking about the positive sides – such as reappraising the loss of a job as an exciting opportunity to try new things.
5. Response modulation
Response modulation happens late in the emotion generation process, and involves changing how we react or express an emotion, to decrease or increase its emotional impact – hiding anger at a colleague, for example.
How do our brains do it?
The mechanisms that underlie these strategies are distinct and exceptionally complex, involving psychological, cognitive and biological processes. The cognitive control of emotion involves an interaction between the brain’s ancient and subcortical emotion systems (such as the periaqueductal grey, hypothalamus and the amygdala), and the cognitive control systems of the prefrontal and cingulate cortex.
Take reappraisal, which is a type of cognitive change strategy. When we reappraise, cognitive control capacities that are supported by areas in the prefrontal cortex allow us to manage our feelings by changing the meaning of the situation. This leads to a decrease of activity in the subcortical emotion systems that lie deep within the brain. Not only this, but reappraisal also changes our physiology, by decreasing our heart rate and sweat response, and improves how we experience emotions. This goes to show that looking on the bright side really can make us feel better – but not everyone is able to do this.
Those with emotional disorders, such as depression, remain in difficult emotional states for prolonged durations and find it difficult to sustain positive feelings. It has been suggested that depressed individuals show abnormal activation patterns in the same cognitive control areas of the prefrontal cortex – and that the more depressed they are the less able they are to use reappraisal to regulate negative emotions.
However, though some may find reappraisal difficult, situation selection might be just a little easier. Whether it’s being in nature, talking to friends and family, lifting weights, cuddling your dog, or skydiving – doing the things that make you smile can help you see the positives in life. |
With the rise of global temperatures, climatologists predict a corresponding increase in the frequency and severity of wildfires in the Pacific Northwest. Rising temperatures are expected to create drier conditions in forests, thereby creating environmental conditions more prone to forest fires. Wildfires have become a common enough occurrence in the Pacific Northwest that summers have become synonymous with smoky conditions, but the issue is not constrained to this region. Though the Pacific Northwest has recently acted as a harbinger of increasing wildfires, environmental scientists forecast an increase in fire risk throughout the Western United States. The predicted rise in forest fire occurrence carries with it an increase in wildfire smoke for the surrounding areas, with winds carrying smoke far across state lines. These smoky conditions, in turn, are hazardous to health. State-level worksite regulations have proven ineffective at protecting workers from smoke-related health risks. Though wildfire smoke might currently appear as a predominantly Pacific Northwest issue, the Occupational Health and Safety Administration (OSHA) must implement its own federal-level regulations in order to fully protect workers.
"Proposed Federal OSHA Standards for Wildfire Smoke,"
Seattle Journal of Technology, Environmental & Innovation Law: Vol. 10
, Article 5.
Available at: https://digitalcommons.law.seattleu.edu/sjteil/vol10/iss1/5
How Climate Change Will Alter Our Food
The world population is expected to grow to almost 10 billion by 2050. With 3.4 billion more mouths to feed, and the growing desire of the middle class for meat and dairy in developing countries, global demand for food could increase by between 59 and 98 percent. This means that agriculture around the world needs to step up production and increase yields. But scientists say that the impacts of climate change—higher temperatures, extreme weather, drought, increasing levels of carbon dioxide and sea level rise—threaten to decrease the quantity and jeopardize the quality of our food supplies.
A recent study of global vegetable and legume production concluded that if greenhouse gas emissions continue on their current trajectory, yields could fall by 35 percent by 2100 due to water scarcity and increased salinity and ozone.
Another new study found that U.S. production of corn (a.k.a. maize), much of which is used to feed livestock and make biofuel, could be cut in half by a 4˚C increase in global temperatures—which could happen by 2100 if we don’t reduce our greenhouse gas emissions. If we limit warming to under 2˚ C, the goal of the Paris climate accord, U.S. corn production could still decrease by about 18 percent. Researchers also found that the risk of the world’s top four corn exporters (U.S., Brazil, Argentina and the Ukraine) suffering simultaneous crop failures of 10 percent or more is about 7 percent with a 2˚C increase in temperature. If temperatures rise 4˚C, the odds shoot up to a staggering 86 percent.
“We’re most concerned about the sharply reduced yields,” said Peter de Menocal, Dean of Science at Columbia University and director of the Center for Climate and Life. “We already have trouble feeding the world and this additional impact on crop yields will impact the world’s poorest and amplify the rich/poor divide that already exists.”
But climate change will not only affect crops—it will also impact meat production, fisheries and other fundamental aspects of our food supply.
Eighty percent of the world’s crops are rainfed, so most farmers depend on the predictable weather agriculture has adapted to in order to produce their crops. However, climate change is altering rainfall patterns around the world.
When temperatures rise, the warmer air holds more moisture and can make precipitation more intense. Extreme precipitation events, which are becoming more common, can directly damage crops, resulting in decreased yields.
Flooding resulting from the growing intensity of tropical storms and sea level rise is also likely to increase with climate change, and can drown crops. Because floodwaters can transport sewage, manure or pollutants from roads, farms and lawns, more pathogens and toxins could find their way into our food.
Hotter weather will lead to faster evaporation, resulting in more droughts and water shortages—so there will be less water for irrigation just when it is needed most.
About 10 percent of the crops grown in the world’s major food production regions are irrigated with groundwater that is non-renewable. In other words, aquifers are being drained faster than they’re refilling—a problem which will only get worse as the world continues to heat up, explained Michael Puma, director of Columbia’s Center for Climate Systems Research.
This is happening in major food producing regions such as the U.S. Great Plains and California’s Central Valley, and in Pakistan, India, northeastern China, and parts of Iran and Iraq.
“Groundwater depletion is a slow-building pressure on our food system,” Puma said. “And we don’t have any effective policies in place to deal with the fact that we are depleting our major resources in our major food producing regions, which is pretty disconcerting.”
Climate projections show that droughts will become more common in much of the U.S., especially the southwest. In other parts of the world, drought and water shortages are expected to affect the production of rice, which is a staple food for more than half of the people on Earth. During severe drought years, rainfed rice yields have decreased 17 to 40 percent. In South and Southeast Asia, 23 million hectares of rainfed rice production areas are already subject to water scarcity, and recurring drought affects almost 80 percent of the rainfed rice growing areas of Africa.
Extreme weather, including heavy storms and drought, can also disrupt food transport. Unless food is stored properly, this could increase the risk of spoilage and contamination and result in more food-borne illness. A severe summer drought in 2012 reduced shipping traffic on the Mississippi River, a major route for transporting crops from the Midwest. The decrease in barge traffic resulted in significant food and economic losses. Flooding which followed in the spring caused additional delays in food transport.
Global warming may benefit certain crops, such as potatoes in Northern Europe and rice in West Africa, and enable some farmers to grow new crops that only thrive in warmer areas today. In other cases, climate change could make it impossible for farmers to raise their traditional crops; ideal growing conditions may shift to higher latitudes, where the terrain or soil may not be as fertile, resulting in less land available for productive agriculture.
The ultimate effect of rising heat depends on each crop’s optimal range of temperatures for growth and reproduction. If temperatures exceed this range, yields will drop because heat stress can disrupt a plant’s pollination, flowering, root development and growth stages.
According to a 2011 National Academy of Sciences report, for every degree Celsius that the global thermostat rises, there will be a 5 to 15 percent decrease in overall crop production.
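To see what that range implies, here is a rough Python sketch. The 5 to 15 percent-per-degree figure comes from the report cited above; the assumption that losses compound with each degree is ours, for illustration only:

```python
def projected_yield(baseline, degrees_warming, decline_per_degree):
    """Crop production after warming, assuming per-degree losses compound.

    decline_per_degree is a fraction, e.g. 0.05 for a 5 percent loss.
    """
    return baseline * (1 - decline_per_degree) ** degrees_warming

# Starting from 100 units of production, after 2 degrees C of warming:
print(round(projected_yield(100, 2, 0.05), 2))  # 90.25 (5% loss per degree)
print(round(projected_yield(100, 2, 0.15), 2))  # 72.25 (15% loss per degree)
```

Even at the low end of the range, two degrees of warming would erase nearly a tenth of production; at the high end, more than a quarter.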
Heat waves, which are expected to become more frequent, make livestock less fertile and more vulnerable to disease. Dairy cows are especially sensitive to heat, so milk production could decline.
Parasites and diseases that target livestock thrive in warm, moist conditions. This could result in livestock farmers treating parasites and animal diseases by using more chemicals and veterinary medicines, which might then enter the food chain.
Climate change will also enable weeds, pests and fungi to expand their range and numbers. In addition, earlier springs and milder winters will allow more of these pests and weeds to survive for a longer time.
Plant diseases and pests that are new to an area could destroy crops that haven’t had time to evolve defenses against them. For example, new virulent mutant strains of wheat rust, a fungal infection that had not been seen for over 50 years, have spread from Africa to Asia, the Middle East and Europe, devastating crops.
Higher levels of carbon dioxide
Because plants use carbon dioxide to make their food, more CO2 in the atmosphere can enhance crop yields in some areas if other conditions—nutrient amounts, soil moisture and water availability—are right. But the beneficial effects of rising carbon dioxide levels on plant growth can be offset by extreme weather, drought or heat stress.
While higher CO2 levels can stimulate plant growth and increase the amount of carbohydrates the plant produces, this comes at the expense of protein, vitamin and mineral content. Researchers found that plants’ protein content will likely decrease significantly if carbon dioxide levels reach 540 to 960 parts per million, which we are projected to reach by 2100. (We are currently at 409 ppm.) Studies show that barley, wheat, potatoes and rice have 6 to 15 percent lower concentrations of protein when grown at those levels of CO2. The protein content of corn and sorghum, however, did not decline significantly.
Moreover, the concentrations of important elements—such as iron, zinc, calcium, magnesium, copper, sulfur, phosphorus and nitrogen—are expected to decrease with more CO2 in the atmosphere. When CO2 levels rise, the openings in plant shoots and leaves shrink, so they lose less water. Research suggests that as plants lose water more slowly, their circulation slows down, and they draw in less nitrogen and minerals from the soil. Vitamin B levels in crops may drop as well because nitrogen in plants is critical for producing these vitamins. In one study, rice grown with elevated CO2 concentrations contained 17 percent less vitamin B1 (thiamine), 17 percent less vitamin B2 (riboflavin), 13 percent less vitamin B5 (pantothenic acid), and 30 percent less vitamin B9 (folate) than rice grown under current CO2 levels.
A warmer, more acidic ocean
Some 540 million people around the world rely on fish for their protein and income, but seafood will be impacted by climate change, too. Since 1955, the oceans have absorbed over 90 percent of the excess heat trapped by greenhouse gas emissions in the atmosphere. As a result, the ocean is warmer today than it has ever been since recordkeeping began in 1880.
As the oceans heat up, many fish and shellfish are moving north in search of cooler waters.
Off the U.S. northeastern coast, American lobster, red hake and black sea bass have shifted their range an average of 119 miles northward since the late 1960s. In Portugal, fishermen have recently caught 20 new species, most of which migrated from warmer waters. And Chinook salmon, usually found around California and Oregon, are now entering Arctic rivers. Moving into new territory, however, these species may face competition with other species over food, which can affect their survival rates. The range shifts are affecting fishermen, too, who must choose whether to follow the fish they’re used to catching as they move north or fish different species. As these ecosystems change, fishing regulations are having a hard time keeping up, jeopardizing the livelihoods of fishermen whose quotas for certain species of fish may no longer be relevant.
Warmer waters can alter the timing of fish migration and reproduction, and could speed up fish metabolism, resulting in their bodies taking up more mercury. (Mercury pollution, from the burning of fossil fuels, ends up in the ocean and builds up in marine creatures.) When humans eat fish, they ingest the mercury, which can have toxic effects on human health.
Higher water temperatures increase the incidence of pathogens and of marine diseases in species such as oysters, salmon and abalone. Vibrio bacteria, which can contaminate shellfish and, when ingested by humans, cause diarrhea, fever and liver disease, are more prevalent when sea surface temperatures rise, too.
In addition to heating up, the ocean has taken up almost a third of the carbon dioxide that humans have generated, which has changed its chemistry. Seawater is now 30 percent more acidic than it was during the Industrial Revolution.
As ocean acidity increases, there are fewer carbonate ions in the ocean for the marine species that need calcium carbonate to build their shells and skeletons. Some shellfish, such as mussels and pteropods (tiny marine snails at the base of the food chain), are already beginning to form thinner shells, leaving them more vulnerable to predators. Ocean acidification can also interfere with the development of fish larvae and disrupt the sense of smell fish rely on to find food and habitats and to avoid predators. In addition, it disturbs the ecosystems that marine life depends upon.
According to research being done at Columbia’s Center for Climate and Life, ocean warming and acidification may end up restructuring microbial communities in the ocean. Because these sensitive microbes are the basis for the global food chain, what happens to them could have unforeseen and huge impacts on our food supplies.
Sea level rise
Some experts predict that sea levels could rise one meter by 2100 due to melting polar ice caps and glaciers. In Asia, where much of the rice is grown in coastal areas and low-lying deltas, rising seas will likely disrupt rice production, and saltwater that moves further inland could reduce yields.
Aquaculture of freshwater species is also affected by sea level rise, as saltwater can move upstream in rivers. For example, in the Mekong Delta and Irrawaddy region of Vietnam and Myanmar, the booming catfish aquaculture could be affected by saltwater intrusion. If this occurs, fish farms would have to be moved further upstream because catfish have little tolerance for saline conditions.
Who will feel the effects?
Climate change will affect more than food production and consumers: as optimal growing conditions shift with the climate, communities that depend on fishing or farming for their livelihoods will also be disrupted.
Some higher latitude areas may benefit and become more productive, but if emissions continue to rise, the outlook for food production from 2050 to 2100 is not good. Wealthy nations and temperate regions will probably be able to withstand most of the impacts, whereas tropical regions and poor populations will face the most risks. Children, pregnant women, the elderly, low-income communities and those with weakened immune systems or chronic medical conditions will be most susceptible to the changes in food access, safety and nutrition.
In addition, because food is a globally traded commodity today, climate events in one region could raise prices and cause shortages across the globe. Starting in 2006, drought in major wheat producing countries was a key factor in a dramatic spike in food prices. Many countries experienced food riots and political unrest.
How science can help head off impacts
“Food security is going to be one of the most pressing climate-related issues, mainly because most of the world is relatively poor and food is going to become increasingly scarce and expensive,” said de Menocal. “So what kind of solutions can science provide to help?”
Of course, the best way to reduce these risks to our food supply is to implement policies to cut greenhouse gas emissions. Earth Institute researchers, however, are working on some ambitious and potentially far-reaching projects to reduce risks to the food system.
Columbia’s International Research Institute for Climate and Society is leading a project called Adapting Agriculture to Climate Today, for Tomorrow, or ACToday. Part of Columbia World Projects, ACToday will help to maximize food production and reduce crop losses by more precisely predicting and managing flood and drought risk, improving financial practices, and, when a food crisis unfolds, identifying the need for relief efforts earlier. The project introduces state-of-the-art climate information and prediction tools in six countries: Ethiopia, Senegal, Colombia, Guatemala, Bangladesh and Vietnam.
In case of a significant disruption in the global food system, there is no agency within the U.S. government whose responsibility it is to take charge, said Puma. His focus has been on trying to understand potential disruptions, which could be related to extreme weather, the power grid, conflict, or other factors. “We want to understand the food system in greater depth so we can identify vulnerabilities and adjust the system to deal with those,” he said. Working with colleagues at the Potsdam Institute for Climate Impact Research in Germany, he is building quantitative economic models to examine vulnerabilities in the food system under different scenarios; they will use the tool to explore how altering certain policies might reduce the vulnerabilities of the food system to disruptions.
The Center for Climate and Life is putting its efforts into building bridges between the business community and the science community in New York, to help clarify for investors the financial risks and opportunities of climate change. Large investment firms with long-term views have trillions of dollars in assets that could be jeopardized by climate change. De Menocal believes more intelligent investment strategies can be pursued with a science-based approach. “If you engage the largest deployments of money on the planet, that’s what’s going to shape behavior,” he said. “If we can educate them about how climate change will impact things that matter to people, then they can act on that knowledge in advance of these things happening.”
Reading to your child is a great way to stimulate language and to bond with your child. Reading books aloud to children stimulates their minds and expands their vocabulary. It will assist in developing expressive language skills, listening skills, attention skills and receptive language skills.
Children can benefit from book-reading even before they are speaking. For younger children, select books with minimal text and big, bright and vibrant pictures. By the age of 6-12 months, babies will enjoy books with reflective objects, shapes and colors. At this age, books should also contain meaningful age-appropriate words such as “mommy,” “daddy,” “bye-bye,” or “milk.”
As children get older and are ready for books with more language, you can look for books with predictable text or repeated lines. Books such as Brown Bear, Brown Bear and There Was an Old Lady Who Swallowed a Fly are good examples of these.
Children ages 1 -2 years enjoy books with 1-2 simple sentences on each page and any type of animal or vehicle sounds. At this age, children can learn to point to pictures in books (e.g. “Where is the dog?,” “Where is baby’s nose?”). Parents can begin to ask simple questions to children while reading books (e.g. “What does a cow say?,” “What’s baby doing?”). Expand these activities by having the child imitate actions in the pictures, or by repeating what they say back to them, with another word or two added for language development.
It’s never too early to begin to read to your child! Grab a book and cuddle up with your little one.
Looking for some recommendations? Here are some favorite books from a speech-language pathologist’s tool bag:
- Moo, Baa, LaLaLa by Sandra Boynton
- Brown Bear, Brown Bear by Eric Carle
- In the Tall, Tall Grass by Denise Fleming
- Cookie’s Week by Cindy Ward
- Dear Zoo by Rod Campbell
- Peek-a-Who? by Nina Laden
- Chicka Chicka Boom Boom by Bill Martin
- Slide and Find –Trucks by Roger Priddy
- Where’s Spot? by Eric Hill
- The Mitten by Jan Brett
- Go Away, Big Green Monster! by Ed Emberley
- The Pout-Pout Fish by Deborah Diesen
Call us for more details: In Sioux Falls, 605-444-9700. In Sioux City, 712-226-ABLE (2253). In Rapid City, 605-791-7400.
Learn more about our therapy services here.
-Carrie Vermeer M.A., CCC-SLP, LifeScape
For more than a decade, scientists have been using genetic technology to produce biologically identical copies, or clones, of animals. In theory, cloning can be used to improve sheep and cattle breeds by ensuring that the animals' most desirable genetic characteristics are passed on. But in practice, cloning has often proved disappointing: scientists have been limited in the number of clones they could produce, and the young animals frequently have a low survival rate. Now, scientists at the Roslin Institute near Edinburgh have demonstrated a dramatically different kind of cloning technology. Starting with cells from a sheep embryo, they grew thousands of copies in a culture. Technicians then fused the cells to unfertilized eggs and implanted the eggs in female sheep. In the end, only a handful of cloned Welsh Mountain lambs were born. But members of the Roslin team said that when the new technique is perfected, it should be possible to create thousands of identical sheep and cattle at a time. "This is very exciting," said Prof. Allan King, an embryologist and geneticist at Ontario's University of Guelph. "It has big implications for livestock breeding and production."
The experiment, described in the March 7 issue of the British scientific journal Nature, suggested that the new technology could be used someday to create cattle with leaner meat and cows that produce low-fat milk. Keith Campbell, the cell biologist in charge of the experiment, said that kind of genetic fine-tuning could become possible because, unlike existing methods, the new cloning system would enable scientists "to make much more precise genetic changes in the cells used to produce cloned animals."
The Roslin scientists scored an unexpected triumph when they achieved a type of cloning that had defeated past attempts by American and European scientists. The method differs from existing cloning technology in several important ways. In conventional sheep cloning, technicians usually remove embryos, consisting of between 50 and 60 cells, from artificially inseminated ewes, divide the cells into two clusters and re-insert these into recipient ewes.
The Roslin team started with slightly more mature embryonic cells, which were then grown in a culture where they multiplied rapidly - providing a far higher number of potential clones than usual. According to Campbell, the cells' high rate of growth may have been induced when scientists withdrew some of the nutrients in the culture. "This put the cells into a quiescent state," said Campbell, "which may have made them more suitable for controlling development into a fetus."
The use of a culture for growing cells should also make finer genetic tuning possible. In the past, scientists have tried to inject new genes into an embryo before cloning - an approach, said Roslin team member Ian Wilmut, that "is very primitive, like firing a shotgun." Using a culture, added Campbell, "we should be able to make much more precise genetic changes, and then use only the altered cells to produce new animals."
Inevitably, advances in animal cloning raise the prospect of scientists applying the same techniques to humans. Doctors at George Washington Medical Center in Washington did just that in 1993 when they produced 48 short-lived clones of human embryos. The controversial experiment, which was made public at a scientific conference held in Montreal, triggered a fierce controversy. Since then, some industrialized nations, including Canada, have issued guidelines against the cloning of human embryos. Meanwhile, the survival rate among cloned animals remains low. Of the five Roslin lambs born in Scotland last July, only two survived infancy and were still living last week as their story was told to the world.
Maclean's, March 18, 1996
Australians who were exposed to high levels of lead as children may be at greater risk of committing violent and impulsive crimes two decades later, our yet-to-be-published research suggests.
The origins of criminal behaviour have previously been attributed to a perpetrator’s genetic make up or how they were raised. But we’re increasingly realising that the child’s physical and chemical environment plays a significant role in criminal behaviours later in life.
How are young people exposed to lead?
Lead exposure from soils and dusts in Australian communities is dominated by three sources: mining and smelting emissions, leaded paint, and leaded petrol.
In mining or smelting cities such as Broken Hill (NSW), Boolaroo (Lake Macquarie, NSW), Port Kembla (NSW), Port Pirie (South Australia) and Mount Isa (Queensland), lead contamination can come from smelter fallout or dust from spoil heaps, tailings and ore processing that is dispersed across the environment.
Leaded petrol formed a significant source of lead exposure in cities when it was sold in Australia from 1932 until 2002. In the two national assessments of petrol lead emissions in Australian capital cities, 3,842 tonnes of lead were emitted in 1976 and 2,388 tonnes of lead were released in 1985, despite reductions in the allowable lead concentration in petrol.
Blood lead levels have fallen since the final removal of lead from petrol in 2002 as well as with the reduction of allowable lead in paint to 0.1% in 1997.
But the legacy from earlier emissions remains, with an estimated 100,000 Australian children having blood lead levels high enough to cause health problems.
Health harms from lead
Lead is a neurotoxin, which means that when it is absorbed, inhaled or ingested, it can affect the development of the child’s nervous system.
High blood lead levels have been linked with decreased IQ and academic achievement, and other learning difficulties. Children who were exposed to lead in Massachusetts in the 1990s, for instance, were likely to perform more poorly than their peers on standardised tests, even after controlling for community and school characteristics.
Similarly, the UK Avon Longitudinal Study found higher childhood blood lead levels were associated with decrements in reading, writing and spelling grades on standard assessment tests.
Most recently, a 2012 study of multiple metal exposures in New Orleans showed that elevated soil metals (including lead) reduced and compressed student elementary school scores.
Lead exposure and behavioural problems
Elevated blood lead levels are also risk factors for a range of social and behavioural problems, such as attention-deficit hyperactivity disorder (ADHD), oppositional/conduct disorders, and delinquency.
Specifically, the US National Toxicological Report on the Health Effects of Low-level Lead concluded that children with blood lead levels of up to 5μg/dL (micrograms per decilitre) were more likely to have attention-related and antisocial behavioural problems.
A 2002 study which compared 194 delinquent children with 146 controls found that delinquents were four times more likely to have been lead poisoned (with a blood lead level greater than 10 μg/dL) than the control group. These patterns remained even after controlling for a range of factors relevant to their socioeconomic status.
In 2008, US researchers showed that children with elevated blood lead concentrations at six years of age had a 50% greater risk of being arrested for violent crime as a young adult. Arrest rates involving violent crimes were shown to increase for each 5 μg/dL increment increase in blood lead.
Controlled studies of lead-exposed rodents confirm that low doses of lead result in enhanced aggressive behaviour as the animals mature, supporting the idea that early-life environmental lead exposure in humans may contribute to adult criminal activity.
Lead exposure and crime in Australia
Using data from the NSW Bureau of Crime Statistics and Research and the NSW Environment Protection Authority (EPA), we examined the correlations between lead-in-air emissions and crime rates with 20- and 21-year time lags at seven sites in NSW.
All seven sites showed that higher levels of airborne lead were associated with higher assault rates 20 to 21 years later. Areas with higher lead levels tended to show stronger relationships.
Source: Macquarie University
N7.1.d Explain the result of dividing a quantity of zero by a non-zero quantity.
N7.1.e Explain (by generalizing patterns, analogies, and mathematical reasoning) why division of non-zero quantities by zero is not defined.
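Both outcomes in N7.1.d and N7.1.e can be illustrated with a short Python check (a minimal sketch for illustration; it is not part of the curriculum document itself):

```python
# N7.1.d: dividing zero by a non-zero quantity gives zero. Sharing
# nothing among 5 groups puts nothing in each group.
print(0 / 5)  # 0.0

# N7.1.e: dividing a non-zero quantity by zero is undefined. There is
# no number q with q * 0 == 5, so Python refuses and raises an error.
try:
    5 / 0
except ZeroDivisionError:
    print("5 / 0 is undefined")
```

The error in the second case mirrors the mathematical reasoning: no candidate quotient can be multiplied by zero to recover a non-zero dividend.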
N7.2 Expand and demonstrate understanding of the addition, subtraction, multiplication, and division of decimals to greater numbers of decimal places, and the order of operations.
N7.2.a Provide a justification for the placement of a decimal in a sum or difference of decimals up to thousandths (e.g., for 4.5 + 0.73 + 256.458, think 4 + 256 so the sum is greater than 260; thus, the decimal will be placed so that the sum is in the hundreds).
N7.2.b Provide a justification for the placement of a decimal in a product (e.g., for $12.33 × 2.4, think $12 × 2, so the product is greater than $24; thus, the decimal in the final product would be placed so that the answer is in the tens).
N7.2.c Provide a justification for the placement of a decimal in a quotient (e.g., for 51.50 m ÷ 2.1, think 50 m ÷ 2 so the quotient is approximately 25 m; thus, the final answer will be in the tens). (Note: If the divisor has more than one digit, students should be allowed to use technology to determine the final answer.)
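The estimation-based justifications in N7.2.a through N7.2.c can be verified mechanically. This sketch uses the example values given above and checks that each exact answer falls in the magnitude range the estimate predicts:

```python
# Sum: 4.5 + 0.73 + 256.458 -> think 4 + 256, so expect the hundreds.
total = 4.5 + 0.73 + 256.458
assert 100 <= total < 1000  # decimal placed so the sum is in the hundreds

# Product: 12.33 * 2.4 -> think 12 * 2 = 24, so expect the tens.
product = 12.33 * 2.4
assert 10 <= product < 100

# Quotient: 51.50 / 2.1 -> think 50 / 2 = 25, so expect the tens.
quotient = 51.50 / 2.1
assert 10 <= quotient < 100

print(total, product, quotient)
```

The assertions encode exactly the reasoning students are asked to state: the rough estimate pins down the order of magnitude, which fixes where the decimal point must go.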
N7.3 Demonstrate an understanding of the relationships between positive decimals, positive fractions (including mixed numbers, proper fractions and improper fractions), and whole numbers.
N7.3.a Predict the decimal representation of a fraction based upon patterns and justify the reasoning (e.g., knowing the decimal equivalent of 1/8 and 2/8, predict and verify the decimal representation of 7/8).
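The eighths example in N7.3.a can be checked with Python's exact-fraction arithmetic (a minimal sketch; `Fraction` is used here only to verify the predicted pattern):

```python
from fractions import Fraction

# The eighths step by 0.125: knowing 1/8 = 0.125 and 2/8 = 0.250,
# the pattern predicts 7/8 = 7 * 0.125 = 0.875.
predicted = 7 * 0.125
actual = float(Fraction(7, 8))
assert predicted == actual
print(predicted)  # 0.875
```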
N7.5 Develop and demonstrate an understanding of adding and subtracting positive fractions and mixed numbers, with like and unlike denominators, concretely, pictorially, and symbolically (limited to positive sums and differences).
N7.5.a Estimate the sum or difference of positive fractions and/or mixed numbers and explain the reasoning.
N7.6.b Explain, using concrete materials such as integer tiles and diagrams, that the sum of opposite integers is zero (e.g., a move in one direction followed by an equivalent move in the opposite direction results in no net change in position).
P7.3 Demonstrate an understanding of one-and two-step linear equations of the form ax/b + c = d (where a, b, c, and d are whole numbers, c less than or equal to d and b does not equal 0) by modeling the solution of the equations concretely, pictorially, physically, and symbolically and explaining the solution in terms of the preservation of equality.
P7.3.a Model the preservation of equality for each of the four operations using concrete materials or using pictorial representations, explain the process orally and record it symbolically.
P7.4 Demonstrate an understanding of linear equations of the form x + a = b (where a and b are integers) by modeling problems as a linear equation and solving the problems concretely, pictorially, and symbolically.
P7.4.a Represent a problem with a linear equation of the form x + a = b where a and b are integers and solve the equation using concrete models (e.g., counters, integer tiles) and record the process symbolically.
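The undo-one-operation-on-both-sides process described in P7.3 and P7.3.a can be sketched in a few lines of Python (the function and the example values below are illustrative, not from the curriculum):

```python
# Solve a*x/b + c = d by undoing each operation on both sides,
# mirroring the "preservation of equality" steps.
def solve(a, b, c, d):
    rhs = d - c        # subtract c from both sides: a*x/b = d - c
    rhs = rhs * b      # multiply both sides by b:   a*x = (d - c) * b
    return rhs / a     # divide both sides by a:     x = (d - c) * b / a

# e.g. 3x/4 + 1 = 7  ->  x = (7 - 1) * 4 / 3 = 8
x = solve(3, 4, 1, 7)
assert 3 * x / 4 + 1 == 7  # check by substituting back
print(x)  # 8.0
```

For the P7.4 form x + a = b, the same undo-one-operation idea reduces to a single step: x = b - a.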
The human body is unbelievably amazing. The skeletal system consists of bones, cartilage, ligaments and tendons, and accounts for about 20 percent of body weight. A cool fact about bones is that they are a lot stronger than you probably think: about four times stronger than concrete. This rigid, intricate framework of bones, known as the skeleton, supports and protects the soft organs of the body. There are a total of 206 bones in the body, over half of which are located in the hands and feet, leaving relatively few for the rest of the body.
Bones that are longer than they are wide are known as long bones. The bones that make up the human skeleton are essential for supporting and protecting the vital organs, as well as providing anchor points for muscles. The human skeleton contains many different types of bones, from the very small to the very large. Here we discuss the longest bones in the human body.
Bones are generally divided into five types: long, short, flat, irregular and sesamoid. The long bones, such as the femur and tibia, bear much of the load during daily activities and are essential for skeletal mobility. The long bones include the femora, tibiae, and fibulae of the legs; the humeri, radii, and ulnae of the arms; the metacarpals and metatarsals of the hands and feet; the phalanges of the fingers and toes; and the clavicles, or collar bones. The longest bone in the human body, the femur or thigh bone, is located in the upper leg and is so long that one end forms part of the hip and the other part of the knee. On average the femur is about 48 cm long and makes up about a quarter of a person's height. It is considered the strongest bone in the human body and can support about 30 times the weight of an average adult.
There are other impressively large bones throughout the rest of the body, so let's get started.
Here are the ten longest bones in the human body:
Femur: The only bone in the thigh and by far the longest bone in the human body, beating the runner-up by almost 3 inches, the femur measures in at 19.9 inches. Also known as the thigh bone, it runs from your hip down to your knee. Look at your thigh in comparison to the rest of your body and it is not hard to see that this would be the longest bone.
Tibia: The tibia, commonly known as the shin bone, is the second longest bone in your body, measuring in at 16.9 inches. This is one of the bones you can really feel with your hand: rub your fingers along your shin and you should be able to feel the whole thing.
Fibula: The fibula is the third longest bone in the body, measuring in at 15.9 inches. This lower leg bone runs from your knee down to your ankle area, and you can feel parts of it with your hands.
Humerus: The humerus is the fourth longest bone in the body, measuring in at 14.4 inches. Also known as the upper arm bone, it runs from the shoulder to the elbow.
Ulna: The ulna, the fifth longest bone in your body, measures 11.1 inches. One of the two lower arm bones, the ulna runs from your elbow to your wrist.
Radius: The radius, also known as the radial bone, is one of the two large bones of the forearm, the other being the ulna. It extends from the side of the elbow to the thumb side of the wrist and runs parallel to the ulna, which exceeds it in length and size.
7th rib: The ribs are long, curved bones that surround the chest, allowing the lungs to expand and thus facilitating breathing by enlarging the chest cavity. They help protect the lungs, heart, and other internal organs of the thorax. Humans have 24 ribs. The first seven pairs, known as "true ribs", attach directly to the sternum. The ribs usually increase in length from rib 1 through rib 7 and decrease again through rib 12, which makes the 7th rib the longest.
8th rib: The 8th rib is also among the long bones of the human body, with an average length of 9.06 in (23.00 cm).
Innominate bone: Also known as the hip bone or coxal bone, this is a large, flat bone, narrowed in the center and expanded above and below, that forms the main connection between the lower limb and the axial skeleton. The hip bone is formed by three bones: the ilium, ischium, and pubis.
Sternum: Shaped like a capital "T", the sternum or breastbone is a long flat bone located in the center of the chest, in front of the heart. It connects to the rib bones, forming the anterior section of the rib cage, and as a result helps protect the lungs, heart and major blood vessels from physical trauma.
The long bones play a very important role in the body's functions, and injuring any of them can be extremely painful and slow to heal, so everyone should take care of their bones. The muscles attached to these long bones give us the strength to perform everyday work and other activities. Without bones, our bodies cannot move and function properly. We should keep them strong and healthy through exercise and by consuming calcium-rich foods such as milk, since bone stores more calcium than any other organ in the human body.
While that may be the question on the minds of a lot of moms and dads, the answer has little to do with the type of math and more to do with the way it's being taught. In recent years, a "discovery" approach to learning math has become the trend in Canadian schools. In general, this approach involves problem-solving situations that are designed to encourage a student to discover the answers without direct instruction from the teacher.
The goal of discovery-based math is to make it more meaningful for children. If a child can make the correct conclusions on her own through logical thinking and deduction, the concepts will be easier to remember. However, in a recent study released by the Frontier Centre for Public Policy, the following conclusion was made: "traditional math education methods are superior to the highly ineffective, discovery-based instructional techniques that are in vogue now in educational curricula." It seems that, in our efforts to find innovative ways to make math more interesting and less intimidating, we have left many children—and their parents—more confused than ever.
Many classroom teachers would agree that self-discovery has value but also recognize that a more traditional approach to teaching math shouldn't be overlooked. Math proficiency, by its very nature, requires old-fashioned memorization, repetition, and practice in order to master basic, foundational concepts. In fact, in places like Korea and Japan, where students are well known for their math skills, it is this traditional approach to math that is used within the classrooms.
Even if instructional methods in our schools change very little in the short-term, parents can help their kids become better math students. There are numerous resources, such as the JUMP at Home math workbooks that break down math problems into smaller steps. While these workbooks and others like it take a more traditional approach to teaching math, they provide an excellent and easy-to-use resource for parents.
Discovery-based learning does have value in a child's education. However, before a young student can engage in this type of higher-order thinking with any degree of confidence, he or she must first master the foundational concepts. The reality is, without direct instruction, memorization, and practice, the benefits of discovery-based learning can be completely lost.
Immunization is a safe, effective and simple way to prevent life-threatening illnesses, not only in children but also in adults. Vaccines are among the safest medicines available and can reduce the suffering and costs associated with these preventable diseases. The reasons for the underutilization of vaccines in adults are: 1) low prioritization of the importance of vaccine-preventable diseases among adults; 2) uncertainty or lack of knowledge about safety and efficacy; 3) lack of universal recommendations for all adults; and 4) financial constraints, especially in developing countries. Adult immunizations are administered as primary series, booster doses for the previously immunized, and periodic doses. Agents include toxoids (diphtheria and tetanus), live virus vaccines (measles, mumps and rubella), inactivated virus vaccines (influenza), inactivated viral particles (hepatitis B), inactivated bacterial polysaccharide vaccine (pneumococcal) and conjugate/polysaccharide vaccine (meningococcal). Vaccines such as hepatitis A, polio and varicella may also be recommended for some. Since the economy and literacy rate have shown a steady rise in South Asia, and people are becoming aware of different health problems through recently advanced global communication, education and awareness about immunization, not only in children but also in adults, need special consideration. In view of the statistical data on the suffering and costs related to the non-utilization of immunization in adults, the hour has come to emphasize its importance.
Number of pages: 6
Journal: Kathmandu University Medical Journal
Publication status: Published 04-08-2008
What is Hodgkin lymphoma?
Hodgkin lymphoma is a type of cancer in the lymphatic system. The lymphatic system is part of the immune system and functions to fight disease and infections. The lymphatic system also helps maintain the fluid balance in different parts of the body by bringing excess fluid back into the bloodstream.
The lymphatic system includes the following:
Lymph. Fluid containing white blood cells, especially those called lymphocytes.
Lymph vessels. Thin tubes that carry lymph fluid throughout the body.
Lymphocytes. White blood cells that fight infection and disease.
Lymph nodes. Bean-shaped organs, found in the underarm, groin, neck, chest, abdomen, and other parts of the body, that act as filters for the lymph fluid as it circulates through the body.
Hodgkin lymphoma causes the cells in the lymphatic system to abnormally reproduce, eventually making the body less able to fight infection and causing the lymph nodes to swell. Hodgkin lymphoma cells can also spread (metastasize) to other organs and tissue. It is a rare disease in children, accounting for about 4 percent of all cases of childhood cancer in the U.S. Hodgkin lymphoma occurs most often in people between the ages of 15 and 40, and in people over age 55. About 10 to 15 percent of cases are found in children and teenagers. The disease, for unknown reasons, affects males more often than females.
What causes Hodgkin lymphoma?
The specific cause of Hodgkin lymphoma is unknown. It is possible that a genetic predisposition and exposure to viral infections may increase the risk for developing Hodgkin lymphoma. There is a slightly increased chance for Hodgkin lymphoma to occur in siblings of patients.
There has been much investigation into the association with the Epstein-Barr virus (EBV), which causes infectious mononucleosis. This virus has been correlated with a higher incidence of Hodgkin lymphoma in children, although the direct link is unknown.
There are many individuals, however, who have had EBV infections but do not develop Hodgkin disease.
What are the symptoms of Hodgkin lymphoma?
The following are the most common symptoms of Hodgkin lymphoma. However, each child may experience symptoms differently. Symptoms may include:
Painless swelling of the lymph nodes in the neck, underarm, groin, and/or chest
Difficulty breathing (dyspnea), coughing, or chest pain due to enlarged nodes in the chest
Tiring easily (fatigue)
Weight loss/decreased appetite
Itching skin (pruritus)
Frequent viral infections (for example, cold, flu, sinus infection)
The symptoms of Hodgkin lymphoma may resemble other blood disorders or medical problems. Always consult your child's doctor for a diagnosis.
How is Hodgkin lymphoma diagnosed?
In addition to a complete medical history and physical examination, diagnostic procedures for Hodgkin lymphoma may include:
Blood and urine tests
X-rays of the chest. A diagnostic test that uses invisible electromagnetic energy beams to produce images of internal tissues, bones, and organs on film.
Lymph node biopsy. A sample of tissue is removed from the lymph node and examined under a microscope. A biopsy is needed to confirm the diagnosis of Hodgkin disease and to tell what type it is.
Computed tomography scan of the abdomen, chest, and pelvis (also called a CT or CAT scan). A diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat, and organs. CT scans are more detailed than general X-rays.
Magnetic resonance imaging (MRI). A diagnostic procedure that uses a combination of large magnets, radiofrequencies, and a computer to produce detailed images of organs and structures within the body. This test is not often used for Hodgkin disease unless the doctor is concerned it may have spread to the brain or spinal cord.
Positron emission tomography (PET) scan. A type of nuclear medicine procedure. For this test, a radioactive sugar is injected into the bloodstream. Because cancer cells use more of the sugar than normal cells, the radioactivity tends to collect in them, and can be detected with a special camera. A PET scan image is not finely detailed like a CT scan, but it can sometimes spot cancer cells in different areas of the body even when they cannot be seen by other tests. This test is often used in combination with a CT scan.
Bone marrow aspiration and/or biopsy. A procedure that involves taking a small amount of bone marrow fluid (aspiration) and/or solid bone marrow tissue (called a core biopsy), usually from the hip bones, to be examined for the number, size, and maturity of blood cells and/or abnormal cells. This test may be used to see if cancer cells have reached the bone marrow.
How is Hodgkin lymphoma staged?
Staging is the process of determining whether cancer has spread and, if so, how far. There are various staging systems that are used for Hodgkin lymphoma. Always consult your child's doctor for information on staging. One method of staging Hodgkin lymphoma is the following:
Stage I. Usually involves a single lymph node region or organ.
Stage II. Involves two or more lymph node regions on the same side of the body (above or below the diaphragm, the thin muscle that separates the chest from the abdomen), or the cancer has spread from one lymph node into a nearby organ.
Stage III. Involves lymph node regions on both sides of the body and is further classified depending on the organs and areas involved.
Stage IV. Involves widespread disease in areas outside the lymphatic system (metastasis), in addition to lymphatic system involvement.
Stages are also noted by the presence or absence of certain symptoms of the disease:
Asymptomatic (A). No fever, night sweats, or weight loss.
Symptomatic (B). Symptoms include fever, night sweats, or weight loss.
For example, stage IIIB is disease that is symptomatic, involves lymph node regions on both sides of the body, and is further classified depending on the organs and areas involved.
What is the treatment for Hodgkin lymphoma?
Specific treatment for Hodgkin lymphoma will be determined by your child's doctor based on:
Your child's age, overall health, and medical history
Extent/stage and location of the disease
Type of Hodgkin disease
Your child's tolerance for specific medications, procedures, or therapies
Expectations for the course of the disease
Your opinion or preference
Treatment may include (alone or in combination):
Chemotherapy
Radiation therapy
Bone marrow/stem cell transplant
Supportive care (for pain, fever, infection, and nausea/vomiting)
Continued follow-up care (to determine response to treatment, detect recurrent disease, and manage side effects of treatment)
Aggressive therapy, while increasing long-term survival, also carries some serious side effects. Discuss with your child's doctor a complete list of known side effects for treatment plans and therapies.
What is the long-term outlook for a child with Hodgkin lymphoma?
Prognosis greatly depends on:
The extent of the disease.
Presence or absence of metastasis.
The response to therapy.
Age and overall health of the child.
Your child's tolerance of specific medications, procedures, or therapies.
New developments in treatment.
As with any cancer, prognosis and long-term survival can vary greatly from child to child. Every child is unique and treatment and prognosis are structured around the child. Prompt medical attention and treatment are important for the best prognosis. Continuous follow-up care after treatment is essential for the child diagnosed with Hodgkin lymphoma. Side effects of radiation and chemotherapy, including second cancers, can occur in survivors of Hodgkin lymphoma. New methods are continually being discovered to improve treatment and to decrease side effects. |
What Do Starlings Eat?
The primary food of starlings is insects and other invertebrates, such as flies, beetles, grasshoppers, spiders, earthworms, caterpillars, snails, millipedes and insect larvae. They also eat fruits, berries, seeds, grains and other plant matter. Additionally, when dumpsters and trash bins are left open, they feed on garbage.
In the late 19th century, European starlings were let loose into Central Park in New York by Shakespeare enthusiasts intent on introducing all of the animals mentioned in his works. They quickly multiplied and spread across North and Central America. As of 2014, there are about 150 million starlings in the United States. They are social birds and are commonly seen foraging in flocks on lawns, pastures, farms, golf courses and other open areas. They nest in any open cavity they can find, including buildings, utility poles and trees.
Many people consider starlings to be an invasive species because they compete with native birds for habitat, scatter garbage while foraging and leave deposits of droppings beneath nesting areas. In some cities and towns, large roosts of starlings also create considerable noise. The American Humane Society recommends that, rather than kill intrusive starlings, homeowners should close off cavities that could provide possible nesting sites and keep rubbish containers tightly sealed. |
Blotting is a technique for detecting DNA, RNA, or proteins initially present in a complex mixture. In this technique, molecules separated by gel electrophoresis are transferred to a nitrocellulose filter. Three blotting techniques are commonly used for visualizing particular macromolecules:
The procedure of Southern blotting was first developed by Edwin Southern and is named after him. Southern blotting is used to detect DNA in a complex mixture.
First, DNA is extracted from the cells. The extracted DNA is then cut up with restriction enzymes, and the resulting fragments are separated by gel electrophoresis, in which the molecules spread out from one end of the gel to the other. For blotting, the agarose gel is first washed with buffer solution to remove accumulated contaminants or agarose residues. The gel is carefully removed from the glass plate and placed in transfer buffer. In the meantime, a nitrocellulose membrane and blotting papers are cut to the size of the gel. The nitrocellulose membrane is placed on the gel, and blotting papers dipped in transfer buffer are placed on top of it; five to six pieces of blotting paper are placed on each side. The gel with the nitrocellulose membrane and blotting papers is then rolled with a glass rod to remove any trapped air. Stacks of unsoaked blotting papers are placed above it, and finally a glass plate is placed on top. This setup is transferred to a reservoir containing transfer buffer. The transfer buffer moves from the reservoir to the blotting paper by capillary action, and the setup is left to stand for 12 to 13 hours, during which the DNA is transferred from the gel to the nitrocellulose membrane. This immobilization occurs because of the negative charge of the DNA and the positive charge of the nitrocellulose membrane. As soon as the transfer is finished, the nitrocellulose membrane is carefully taken out of the setup; it now carries the DNA fragments in the same pattern as they were on the gel. The bands on the membrane are marked with pen or pencil. The membrane is then exposed to ultraviolet light, or baked at 80°C in a vacuum oven, to strengthen the binding of the DNA fragments to the membrane so that the DNA does not wash off during subsequent membrane washes. The membrane can now be incubated with a specific probe of interest; once incubation is complete, the probe can be detected by autoradiography.
The procedure of Northern blotting is essentially the same as Southern blotting; the only major difference is that in Northern blotting RNA, rather than DNA, is transferred onto the membrane from the gel.
The procedure of Western blotting is also similar to Southern blotting, but proteins are separated instead of DNA, and there is no need to expose the nitrocellulose membrane to ultraviolet light or vacuum baking. Instead, antibodies are used to detect the presence of proteins. The blot is incubated with a primary antibody, which binds to the protein of interest; unbound antibody is then washed off the membrane. The bound primary antibody is detected by incubating the blot with a secondary antibody, which binds to the primary antibody. The secondary antibody is usually conjugated to an enzyme so that its presence can be detected on the membrane, although it can also be labeled with a radioactive isotope. The signal produced indicates the presence of the particular protein.
Blotting techniques have wide application today: Southern blotting helps detect specific DNA sequences in animals or humans, while Northern blotting is useful for studying gene expression.
When a figure is given in a problem, it may be effective to express relationships among the various parts of the figure using arithmetic or algebra.
• This strategy is used in the following two sample questions.
This is a Quantitative Comparison question.
Quantity A: PS
Quantity B: SR
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
From the figure, you know that PQR is a triangle and that point S is between points P and R, so PS + SR = PR. However, the information given is not sufficient to compare PS and SR. Furthermore, because the figure is not necessarily drawn to scale, you cannot determine the relative sizes of PS and SR visually from the figure, though they may appear to be equal. The position of S can vary along side PR anywhere between P and R. Following are two possible variations of the figure, each of which is drawn to be consistent with the information given.
Note that Quantity A is greater in Variation 1 and Quantity B is greater in Variation 2. Thus, the correct answer is Choice D, the relationship cannot be determined from the information given.
This is a Numeric Entry question.
Results of a Used-Car Auction
                                          Small Cars   Large Cars
Number of cars offered                        32           23
Number of cars sold                           16           20
Projected sales total for cars offered
  (in thousands)                             $70          $150
Actual sales total (in thousands)            $41          $120
For the large cars sold at an auction that is summarized in the table above, what was the average sale price per car?
From the table above, you see that the number of large cars sold was 20 and the actual sales total for large cars was $120,000 (not $120, since the totals are given in thousands). Thus the average sale price per car was $120,000 ÷ 20 = $6,000. The correct answer is $6,000 (or equivalent).
(In numbers that are 1,000 or greater, you do not need to enter commas in the answer box.)
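As a quick check, the arithmetic above can be reproduced in a few lines of Python; the figures come straight from the auction table, with the "in thousands" unit converted explicitly:

```python
# Figures from the used-car auction table; sales totals are in thousands.
actual_sales_total_thousands = 120
large_cars_sold = 20

# Convert to dollars, then divide by the number of large cars sold.
average_price = (actual_sales_total_thousands * 1000) / large_cars_sold
print(average_price)  # 6000.0
```

Forgetting the conversion (dividing $120 by 20 cars) is exactly the trap the question notes.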
A hearing test (audiometry) measures the quietest sound an individual can hear at least 50% of the time – known as the hearing threshold level (HTL). An individual's HTL at different frequencies is recorded in an audiogram for each ear, and this information can be used to categorise hearing as within the normal range, or as a hearing loss ranging from mild to profound. Otoscopy and tympanometry are used alongside audiometry to identify any abnormalities of the middle ear that may be affecting hearing, e.g. a perforation or glue ear.
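The categorisation step can be sketched in a few lines of code. The band boundaries below follow the commonly cited British Society of Audiology descriptors (mild 21–40 dB HL, moderate 41–70, severe 71–95, profound over 95), but treat the function name and exact cut-offs as illustrative assumptions, not a clinical tool:

```python
def classify_hearing(htl_db: float) -> str:
    """Categorise an average hearing threshold level (in dB HL).

    Bands follow the British Society of Audiology descriptors
    (mild 21-40, moderate 41-70, severe 71-95, profound >95 dB HL);
    the exact boundaries here are illustrative.
    """
    if htl_db <= 20:
        return "within normal range"
    if htl_db <= 40:
        return "mild"
    if htl_db <= 70:
        return "moderate"
    if htl_db <= 95:
        return "severe"
    return "profound"

print(classify_hearing(15))   # within normal range
print(classify_hearing(55))   # moderate
print(classify_hearing(100))  # profound
```

In practice the value fed in would be an average of thresholds across the standard audiometric test frequencies, taken separately for each ear.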
In hospitals and hearing aid dispensing practices, audiometry and tympanometry are typically carried out by audiologists, who are trained to undertake these procedures during their university course. In the UK, audiometry is carried out according to the British Society of Audiology Recommended Procedure.
However, basic hearing tests and tympanometry can also be performed by GPs, occupational health nurses, audiometricians, assistant technical officers, teachers of the deaf and hearing aid assistants. The British Society of Audiology has a Practice Guidance Document for “Hearing assessment in general practice, schools and health clinics: guidelines for professionals who are not qualified audiologists”.
A British Society of Audiology accredited course in basic audiometry and tympanometry trains non-audiologists to undertake hearing tests in the field and interpret the results, according to minimum training criteria. |
A military power during the 17th century, Sweden has not participated in any war in almost two centuries, preserving an armed neutrality in both World Wars. Sweden's long-successful economic formula of a capitalist system interlarded with substantial welfare elements has recently been undermined by high unemployment, rising maintenance costs, and a declining position in world markets. Questions about the benefit of remaining a monarchy have also troubled Sweden; the royal family is still strongly supported, however, and has worked closely with the Social Democrats to address Sweden's economic problems.
Indecision over the country’s role in the political and economic integration of Europe caused Sweden not to join the EU until 1995, and to forgo the introduction of the euro in 1999. During the 20th century, Sweden developed into a modern welfare state. This was made possible by a favorable political and economic development in the Nordic countries.
Since the late 19th century, the Nordic countries have developed from agrarian societies into fully industrialized ones. In parallel with economic development, democratic institutions and parliamentarism were introduced. In 1865, the Diet of the Four Estates was replaced with a two-chamber parliament, but suffrage was far from universal: only members of a certain economic elite had the right to vote. A voting reform signed in 1907 gave the universal right to vote legal force, but only for men; women were not allowed to vote until 1921.
The "Swedish Model" was put into place in 1932 by the Social Democrats in response to the high unemployment rate. The State provided the unemployed with meaningful jobs, which was intended to vitalize the economy and create further employment, though it also sharply increased taxes. In the 1970s Sweden's economic growth slowed, and the Social Democratic government began to borrow…
Tomography is the process of generating a 2-dimensional image of a slice or section through a 3-dimensional object – similar to looking at one slice of bread within the whole loaf.
- A CT scan uses data from several X-ray images of structures inside the body and converts them into pictures.
- The technique utilizes digital geometry processing to generate 3D images.
- CT scans are a source of ionizing radiation and can cause cancer.
- A CT scanner emits a series of narrow beams through the human body, producing more detail than standard single beam X-rays.
- CT scanners are able to distinguish tissues inside a solid organ.
- Contrast dyes are sometimes used to improve the clarity of the image.
- CT scanning is particularly useful for getting detailed 3D images of certain parts of the body, such as soft tissues, blood vessels, and the brain.
- The images created are analyzed by radiologists.
- Unlike MRI scans, a CT scan uses X-rays.
- A CT scan is able to illustrate organ tear and organ injury quickly and so is often used for accident victims.
What is a CT scan?
The CT scanner uses digital geometry processing to generate a 3-dimensional (3D) image of the inside of an object. The 3D image is made after many 2-dimensional (2D) X-ray images are taken around a single axis of rotation – in other words, many pictures of the same area are taken from many angles and then placed together to produce a 3D image.
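The principle of combining many projections taken from different angles can be illustrated one dimension down: recovering a 2D slice from 1D projections. The toy sketch below uses unfiltered back-projection with just two angles (0° and 90°) to locate a single dense spot; real scanners use many angles and filtered back-projection, so this is only a conceptual illustration:

```python
# Toy unfiltered back-projection: find a dense spot in a 5x5 "object"
# from its row sums and column sums (projections at 0 and 90 degrees).

N = 5
obj = [[0.0] * N for _ in range(N)]
obj[1][3] = 10.0  # one dense spot at row 1, column 3

# Projection at 0 degrees: sum along each row.
row_proj = [sum(obj[r]) for r in range(N)]
# Projection at 90 degrees: sum along each column.
col_proj = [sum(obj[r][c] for r in range(N)) for c in range(N)]

# Back-projection: smear each projection value back across its line and add.
recon = [[row_proj[r] + col_proj[c] for c in range(N)] for r in range(N)]

# The brightest pixel of the reconstruction marks the spot's location.
brightest = max(
    ((r, c) for r in range(N) for c in range(N)),
    key=lambda rc: recon[rc[0]][rc[1]],
)
print(brightest)  # (1, 3)
```

With only two angles, back-projection leaves streak artifacts along the rows and columns through the spot; adding more angles (and a reconstruction filter) is what lets a real CT scanner resolve fine detail.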
The Greek word tomos means “slice”, and the Greek word graphein means “write”.
Although CT is a useful tool for assisting diagnosis in medicine, it is a source of ionizing radiation and can cause cancer. The National Cancer Institute1 advises patients to discuss the risks and benefits of computerized tomography with their doctors.
How do CT scans work?
A CT imaging suite.
A CT scanner emits a series of narrow beams through the human body as it moves through an arc, unlike an X-ray machine which sends just one radiation beam. The final picture is far more detailed than an X-ray image.
Inside the CT scanner there is an X-ray detector which can see hundreds of different levels of density. It can see tissues inside a solid organ. This data is transmitted to a computer, which builds up a 3D cross-sectional picture of the part of the body and displays it on the screen.
Sometimes a contrast dye is used because it shows up much more clearly on the screen. If a 3D image of the abdomen is required, the patient may have to drink a barium meal; the barium appears white on the scan as it travels through the digestive system. If images lower down the body are required, such as of the rectum, the patient may be given a barium enema. If blood vessel images are the target, a contrast agent (typically iodine-based rather than barium) is injected.
The accuracy and speed of CT scans may be improved with the application of spiral CT. The X-ray beam takes a spiral path during the scanning – it gathers continuous data with no gaps between images. For a spiral scan of the chest, for example, the patient will be asked to hold his/her breath for a few seconds.
What are they like for patients?
Most places will provide the patient with a gown. He/she will need to undress, usually down to their underwear, and put the gown on. If the place does not provide a gown the patient should wear loose-fitting clothes.
Doctors may ask the patient to fast (eat nothing) and even refrain from consuming liquids for a specific period before the scan.
The patient will be asked to lie down on a motorized examination table, which then goes into the giant doughnut-like machine. The couch with the patient goes into the doughnut hole.
Some patients may be given a contrast dye or substance which is either swallowed, given as an enema, or injected. This improves the picture of some blood vessels or tissues. If a patient is allergic to contrast material he/she should tell the doctor beforehand. There are some medications that reduce allergic reactions to contrast materials.
As metal interferes with the workings of the CT scanner the patient will need to remove all jewelry and metal fastenings. In the majority of cases the patient will lie on his/her back, facing up. But sometimes it may be necessary to lie face-down or sideways.
After the machine has taken one X-ray picture, the couch will move slightly, and then another picture is taken, etc. The patient needs to lie very still for best results.
During the scan everybody except for the patient will leave the room. The radiographer will still be able to communicate with the patient, and vice-versa, through an intercom. If the patient is a child, a parent or adult might be allowed to stand or sit nearby – that person will have to wear a lead apron to prevent radiation exposure.
Pregnancy – any woman who suspects she may be pregnant should tell her doctor beforehand. The UK’s National Health Service2 warns that CT scans are not recommended for pregnant mothers because of the risk that X-rays might harm the unborn baby.
Claustrophobia – patients who suffer from claustrophobia need to tell their doctor or radiographer beforehand, advises Cancer Research UK3. The patient may be given an injection or tablet to calm them down before the scan.
Breastfeeding – the State Government of Victoria4 in Australia says that mothers should avoid breastfeeding their babies for about 24 hours after a CT scan if an iodinated intravenous dye was used. The dye may pass into the breast milk. If you are breastfeeding, tell your doctor beforehand.
When are CT scans used?
CT scanning is useful to get a very detailed 3D image of certain parts of the body, such as soft tissues, the pelvis, blood vessels, the lungs, the brain, abdomen, and bones.
It is often the preferred method of diagnosing many cancers, such as liver, lung, and pancreatic cancers. The image allows a doctor to confirm the presence of a tumor. The tumor’s size can be measured, plus its exact location, as well as to determine how much the tumor has affected nearby tissue.
A scan of the head can provide the doctor with important information about the brain – he/she may want to know whether there is any bleeding, swelling of the arteries, or tumors.
A CT scan will tell the doctor whether the patient has a tumor in his/her abdomen, and whether any internal organs in that area are swollen or inflamed. It will reveal whether there are lacerations of the spleen, kidneys or liver.
As a CT scan can detect abnormal tissue it is a useful device for planning areas for radiotherapy and biopsies.
A CT scan can also provide valuable data on the patient’s vascular condition. Vascular refers to blood flow. Many vascular conditions can lead to stroke, kidney failure, and even death. It can help a doctor assess bone diseases, bone density, and the state of the patient’s spine.
A CT scan can reveal vital data about injuries to the patient’s hands, feet and other skeletal structures – even small bones can be seen clearly, as well as their surrounding tissue.
Who analyzes the image?
A radiologist who is trained in supervising and interpreting radiology examinations will analyze the images and send his/her report to the patient’s doctor. A radiologist is a doctor.
Radiology is a branch of medicine. A radiologist is a fully qualified doctor who specializes in radiology – MRI, CT scans, radiographs, nuclear medicine scans, mammograms and sonograms.
A radiologic technologist or radiographer is the X-ray technician. The person who takes the X-rays.
The radiologist is a doctor; the radiographer is not a doctor.
What are the differences between CT and MRI?
The differences between CT and MRI scans are as follows:
- A CT scan uses X-rays. An MRI does not use X-rays; it uses magnets and radio waves.
- A CT scan does not show tendons and ligaments, an MRI does.
- MRI is better for looking at the spinal cord.
- A CT scan is better for looking at cancer, pneumonia, abnormal chest x-rays, bleeding in the brain (especially from injury).
- A brain tumor is better seen on MRI.
- A CT scan shows organ tear and organ injury more quickly – so it may be the best choice for accident victims.
- Broken bones and vertebrae are better seen on CT scan.
- CT scans are better at visualizing the lungs and organs in the chest cavity between the lungs. |
Detail from "The Signing of the Treaty of Green Ville" painting, by Howard Chandler Christy, 1945
William Wells was one of the best known frontiersmen in the Ohio Country in the years after the American Revolution.
William Wells was born in about 1770. Wells was captured by the Miami when he was twelve years old, and a Miami leader named Little Turtle raised him as his own son. Wells also married Little Turtle's daughter, Sweet Breeze. During the late 1780s and the early 1790s, Wells assisted Little Turtle in stopping white settlers from encroaching upon American Indian land. He fought with the American Indians of the Northwest Territory against the army of General Arthur St. Clair in 1791. By 1794, Wells had a change of heart. Some people believe that he could no longer stand the bloodshed in the Northwest Territory in the late 1780s and the early 1790s. In any case, Wells joined the army of General Anthony Wayne in 1794. Wayne hoped to secure the southwestern portion of modern-day Ohio from the area's resident American Indian nations. Wells served as a scout and interpreter for Wayne. He eventually attained the rank of captain and was present at the negotiating and signing of the Treaty of Greenville in 1795. Under this treaty, Ohio Country American Indians were made to give up all of their lands in what is now present-day Ohio except for the northwestern corner of the state.
Wells retired from the military after the Treaty of Greenville and settled near Fort Wayne, Indiana, with his wife. He lived as a farmer and traded goods with the local American Indians as well. In 1802, President Thomas Jefferson appointed Wells as what was then called an "Indian agent". Wells served the United States government by negotiating further treaties with the American Indians. He held this position until 1809. As a show of gratitude for Wells' work as an Indian agent and as an interpreter and scout for Wayne, the United States Congress gave Wells 320 acres of land near Fort Wayne in 1808. During the War of 1812, Wells returned to military service. On August 15, 1812, Wells led a force of American soldiers from Fort Dearborn near present-day Chicago. Potawatomi forces ambushed Wells' troops, and Wells was killed in the ensuing battle.
- Anson, Bert. The Miami Indians. Norman: University of Oklahoma Press, 1970.
- Hurt, R. Douglas. The Ohio Frontier: Crucible of the Old Northwest, 1720-1830. Bloomington, IN: Indiana University Press, 1996.
- McDonald, John. Biographical Sketches of General Nathaniel Massie, General Duncan McArthur, Captain William Wells and General Simon Kenton: Who were Early Settlers in the Western Country. Cincinnati, OH: E. Morgan and Son, 1838.
- Young, Calvin M. Little Turtle (Me-she-kin-no-quah): The Great Chief of the Miami Indian Nation; Being a Sketch of His Life Together with that of Wm. Wells and Some Noted Descendants. Greenville, OH: Calvin M. Young, 1917. |
Reading Together: Tips for Parents of Children with Speech and Language Problems
Children with speech and language problems may have trouble sharing their thoughts with words or gestures. They may also have a hard time saying words clearly and understanding spoken or written language. Reading to your child and having her name objects in a book or read aloud to you can strengthen her speech and language skills.
Infants and toddlers
Helping your child love books
You'll find sharing books together is a great way to bond with your son or daughter and help your child's development at the same time. Give your child a great gift that will last for life — the love of books.
Tips for reading with your infant or toddler
Each time you read to your child, you are helping her brain to develop. So read to your child every day. Choose books that you think your child will enjoy and will be fun for you to read.
Since younger children have short attention spans, try reading for a few minutes at a time at first. Then build up the time you read together. Your child will soon see reading time as fun time!
- Read the same story again and again. The repetition will help her learn language.
- Choose books with rhymes or songs. Clap along to the rhythm and help your child clap along. As your child develops, ask her to fill in words. ("Twinkle twinkle little star. How I wonder what you ____.")
- Point to pictures and talk about them. ("Look at the silly monkey!") You can also ask your child to point to certain pictures. ("Where's the cat?")
- Talk about events in your child's life that relate to the story. ("That bear has blue pajamas just like you do!")
- Ask your child questions about the story. ("Is that bunny hiding?")
Suggested books for your infant or toddler
- My Very First Mother Goose or Dr. Seuss books with their rhyming stories
- Each Peach Pear Plum, by Allan and Janet Ahlberg
- Chicka Chicka Boom Boom, by Bill Martin, Jr.
Preschool and school-age children
Helping your preschooler or school-age child love books
When you read to your child often and combine reading time with cuddle and play time, your child will link books with fun times together. So continue to read to your child every day. Choose books that are on your child's language level and that your child likes.
- Discuss the story with your child. ("Why do you think the monkey stole the key?")
- Help your child become aware of letter sounds. (While pointing to a picture of a snake, ask: "What sound does a snake make?") As your child develops, ask more complex questions. (While pointing to a picture of a ball, ask: "What sound does 'ball' start with?")
- Play sound games with your child. List words that rhyme ("ball," "tall") or start with the same sound ("mommy," "mix").
Suggested books for your preschooler or school-age child
Books to help children and parents learn more about speech and language problems
- Let's Talk About Stuttering, by Susan Kent (Ages 4–8)
- Coping with Stuttering, by Melanie Ann Apel (Ages 9–12)
- Childhood Speech, Language, and Listening Problems, by Patricia Hamaguchi
- Does My Child Have a Speech Problem?, by Katherine Martin
- The New Language of Toys: Teaching Communication Skills to Children with Special Needs: A Guide for Parents and Teachers, by Sue Schwartz and Joan Miller
- The Parent's Guide to Speech and Language Problems, by Debbie Feit and Heidi Feldman
For more information
Developmental Disabilities Literacy Promotion Guide for Pediatric Healthcare Providers. ©2010 Reach Out and Read, Inc. All rights reserved. Reprinted with permission. |
The Anglo–Spanish War (1585–1604) was an intermittent conflict between the kingdoms of Spain and England that was never formally declared. The war was punctuated by widely separated battles, and began with England's military expedition in 1585 to the Netherlands under the command of the Earl of Leicester in support of the resistance of the States General to Habsburg rule.
The English enjoyed victories at Cádiz in 1587 and over the Spanish Armada in 1588, but lost the initiative upon the failure of the Drake–Norris Expedition in 1589. Two further Spanish armadas were sent but were frustrated in their objectives owing to adverse weather.
In the decade following the defeat of the Armada, Spain strengthened its navy and was able to safeguard its trade routes of precious metals from the Americas. The war became deadlocked around the turn of the 17th century during campaigns in Brittany and Ireland. The war was brought to an end with the Treaty of London, negotiated in 1604 between representatives of Philip III and the new king of England, James I, and was very favorable to Spain. Spain and England agreed to cease their military interventions in Ireland and the Spanish Netherlands, respectively, and the English renounced high seas privateering. Both parties had achieved some of their aims, but each of their treasuries had almost been exhausted in the process.
In the 1560s, Philip II of Spain, the champion of the Roman Catholic cause, sought to frustrate English crown policy for both religious and commercial reasons. The Protestant Elizabeth I of England, whom the Catholic Church did not recognise as the rightful English monarch, had antagonised Catholics by making the Church of England the official church in the Kingdom. The English also tended to support the Protestant cause in the Netherlands, which was increasingly hostile to Spanish government.
Philip and the Catholic Church considered Mary, Queen of Scots, a Catholic cousin of Elizabeth's, to be the rightful Queen of England. In 1567, Mary was imprisoned and forced to abdicate the Scottish throne in favour of her infant son, James. Thereafter she fled to England, where Elizabeth had her imprisoned. Over the next two decades, opponents of Elizabeth and James continually plotted to have Mary placed on the throne of one or both kingdoms.
The activities of English privateers (considered pirates by the Spanish) on the Spanish Main and in the Atlantic seriously affected Spain's royal revenues. The English trans-Atlantic slave trade, started by Sir John Hawkins in 1562, gained the support of Elizabeth, even though the Spanish government complained that Hawkins' trade with their colonies in the West Indies constituted smuggling.
In September 1568, a slaving expedition led by Hawkins and Sir Francis Drake was surprised by the Spanish, and several ships were captured or sunk, at San Juan de Ulúa, near Veracruz, Mexico. This engagement soured Anglo-Spanish relations, and in the following year the English detained several treasure ships sent by the Spanish to supply their army in the Netherlands. Drake and Hawkins, amongst others, intensified their privateering as a way to break the Spanish monopoly on Atlantic trade.
Seeing the Protestant cause as central to her survival, Elizabeth provided assistance to the Protestant forces in the French Wars of Religion and in the Dutch Revolt against Spain. Philip, meanwhile, was fiercely opposed to the spread of Protestantism, and in addition to financing the Catholic League in the French wars, supported the Second Desmond Rebellion in Ireland, in which Irish Catholics revolted against Elizabeth, from 1579 to 1583.
In 1585, Elizabeth signed the Treaty of Nonsuch with the Dutch, agreeing to provide them with men, horses, and a subsidy. Philip II took this to be a declaration of war against his government.
War broke out in 1585. Drake sailed for the West Indies and sacked Santo Domingo, Cartagena de Indias, and Saint Augustine in Florida. England joined the Eighty Years' War on the side of the Dutch Protestant United Provinces, who had declared their independence from Spain. Philip II planned an invasion of England, but in April 1587 his preparations suffered a setback when Drake burned 37 Spanish ships in harbour at Cádiz. In the same year, the execution of Mary, Queen of Scots on 8 February outraged Catholics in Europe, and her claim on the English throne passed (by her own deed of will) to Philip. On 29 July, he obtained Papal authority to overthrow Elizabeth, who had been excommunicated by Pope Pius V, and place whomever he chose on the throne of England.
Main articles: Spanish Armada http://en.wikipedia.org/wiki/Spanish_Armada, Spanish Armada in Ireland http://en.wikipedia.org/wiki/Spanish_Armada_in_Ireland
In retaliation for the execution of Mary, Philip vowed to invade England to place a proper Catholic monarch on its throne. He assembled a fleet of about 130 ships, containing 8,000 soldiers and 18,000 sailors. To finance this endeavour, Pope Sixtus V had permitted Philip to collect crusade taxes. Sixtus had promised a further subsidy to the Spanish should they reach English soil.
On 28 May 1588, the Armada set sail for the Netherlands, where it was to pick up additional troops for the invasion of England. However, the English navy inflicted a defeat on the Armada in the Battle of Gravelines before this could be accomplished, and forced the Armada to sail northward. It sailed around Scotland, where it suffered severe damage and loss of life from stormy weather.
The defeat of the Armada revolutionised naval warfare and provided valuable seafaring experience for English oceanic mariners. Furthermore, the English were able to persist in their privateering against the Spanish and continue sending troops to assist Philip II's enemies in the Netherlands and France, but these efforts brought few tangible rewards for England. One of the most important effects of the event was that the Armada's failure was seen as a "sign" that God supported the Protestant Reformation in England. (See He blew with His winds, and they were scattered.)
Main article: English Armada http://en.wikipedia.org/wiki/English_Armada
The defeat of the Spanish Armada was not a decisive victory, and the so-called "Protestant Wind" did little to finish the war. An "English Armada" under the command of Drake and Sir John Norreys was dispatched in 1589 to torch the Spanish Atlantic navy, which had largely survived the Armada adventure and was refitting in Santander, Corunna and San Sebastián in northern Spain. It was also intended to capture the incoming Spanish treasure fleet and expel the Spanish from Portugal - ruled by Philip since 1580 - in favour of the Prior of Crato. The English Armada was a complete failure. Had the expedition succeeded in its objectives, Spain might have been compelled to sue for peace, but owing to poor organisation and utter incompetence, the invading force was repelled with heavy casualties on the English side and failed to take Lisbon. Sickness then struck the expedition, and finally a portion of the fleet led by Drake towards the Azores was scattered in a storm. In the end, Elizabeth sustained a severe loss to her treasury, for she had been compelled into a joint venture in order to finance the expedition, and was first among the stockholders.
In this period of respite, the Spanish were able to refit and retool their navy, partly along English lines. The pride of the fleet were named the Twelve Apostles - twelve massive new galleons - and the navy proved itself to be far more effective than it had been before 1588. A sophisticated convoy system and improved intelligence networks frustrated and broke up English privateering on the Spanish treasure fleet during the 1590s. This was best demonstrated in the failures of expeditions by Sir Martin Frobisher, John Hawkins and the Earl of Cumberland in the early part of the decade, as well as in the repulse of the squadron led by Effingham near the Azores in 1591, which had intended to ambush the treasure fleet. It was in this battle that the Spanish captured the English flagship, the Revenge, after a stubborn resistance by its captain, Sir Richard Grenville. Throughout the 1590s, enormous convoy escorts enabled the Spanish to ship three times as much silver as in the previous decade.
In 1590, the Spanish landed a considerable force of tercios in Brittany to assist the French Catholic League, expelling the English and French Protestant forces from the area. However, Anglo-French forces retained Brest.
Both Drake and Hawkins died of disease during a disastrous expedition against Puerto Rico, Panama, and other targets in the Spanish Main in 1595–1596, a severe setback in which the English suffered heavy losses in soldiers and ships. In 1595, a Spanish force, under Don Carlos de Amesquita, raided Penzance and several surrounding villages.
In 1596, an Anglo-Dutch expedition managed to sack Cádiz, causing significant loss to the Spanish fleet and leaving the city in ruins. But the Spanish commander had time to torch the treasure ships in port, sending the treasure to the bottom, from where it was later recovered.
Normandy added a new front in the war and the threat of another invasion attempt across the Channel. Elizabeth sent a further 2,000 troops to France after the Spanish took Calais. Further battles continued until 1598, when Henri IV's conversion to Catholicism won him widespread French support for his claim to the throne; the French civil war had turned against the hardliners of the Catholic League, and finally France and Spain signed the Peace of Vervins, ending the last of the Wars of Religion and Spanish intervention with it.
The English suffered a setback in the Islands Voyage against the Azores in 1597. The Habsburgs also struck back with the Dunkirkers, who took an increasing toll of Dutch and English shipping.
In 1595, the Nine Years War in Ireland had begun, when Ulster lords Hugh O'Neill and Red Hugh O'Donnell rose up against English rule with fitful Spanish support, mirroring the English support of the Dutch rebellion. While England struggled to contain the rebels in Ireland, the Spanish attempted two further Armadas, in 1596 and 1597: the first was destroyed in a storm off northern Spain, and the second was frustrated by adverse weather as it approached the English coast undetected. King Philip II died in 1598, and his successor, Philip III, continued the war, but in a less determined manner.
At the end of 1601, a final armada was sent north, this time a limited expedition intended to land troops in southern Ireland to assist the rebels. The Spanish entered the town of Kinsale with 3,000 troops and were immediately besieged by the English. In time, their Irish allies arrived to surround the besieging force, but poor coordination with the rebels led to an English victory at the Battle of Kinsale. Rather than attempt to hold Kinsale as a base to harry English shipping, the Spanish accepted terms of surrender and returned home, while the Irish rebels hung on, only to surrender in 1603, just after Queen Elizabeth I died.
When James I came to the English throne, his first order of business was to negotiate a peace with Philip III of Spain, which was concluded in the Treaty of London, 1604.
With the Spanish successfully defending their rapidly expanding colonial trade and thereby overcoming their financial crisis, the Irish war grinding on with Spanish material support, and English trade under increasing attack, the conflict was turning into a war of attrition in which England was continually being drained of men and treasure. English settlement in North America was delayed until after the signing of the peace with Spain in the immediate post-Tudor period. This enabled Spain to consolidate its New World territories. Spain had been able to effectively deny the Atlantic sea lanes to English colonial and trading efforts until England had agreed to most Spanish conditions. Furthermore, Spanish support helped the French Catholic League force Henry IV to convert to Catholicism, ensuring that France would remain Catholic - a major success for the Counter-Reformation. However, England also accomplished some of its war aims: it had successfully defended its Protestant revolution; it maintained control of Ireland; by supporting the Protestant Dutch, albeit with limited forces and very little success, and by the diversion of substantial Spanish resources, it had played a part in averting a complete Spanish reconquest of the Netherlands (seen as a threat); and by supporting Henry IV, had ensured that France would remain friendly.
Sound waves 'can help' early tsunami detection
People in high-risk tsunami areas could soon be helped by an early-warning alarm system using sound waves that is being developed by scientists.
Mathematicians think they have devised a way of calculating the size and force of a tsunami in advance of it hitting land, which can help early detection.
Experts say naturally occurring high-speed acoustic gravity waves are created after "tsunami trigger events".
Cardiff University scientists hope to make a real-time early warning system.
Alaska was under a tsunami warning earlier this week after a 7.9-magnitude earthquake struck 280km (173 miles) off the coast of the American state.
The deadliest recorded tsunami was the 2004 Boxing Day Indian Ocean tsunami, which killed almost 230,000 people in 11 different countries.
But scientists in Cardiff hope to help give extra warning time for tsunamis by using the fast-moving underwater sound waves.
"By taking measurements of acoustic gravity waves, we basically have everything we need to set off a tsunami alarm," said Dr Usama Kadri, lead author for the study from Cardiff University's school of mathematics.
Underwater earthquakes are triggered by the movement of tectonic plates on the ocean floor and are the main cause of tsunamis.
Scientists say sound waves can travel over 10 times faster than tsunamis and spread out in all directions, regardless of the trajectory of the tsunami, making them easy to pick up using standard underwater hydrophones. They say this is an ideal source of information for early warning systems.
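As a rough back-of-envelope illustration of the head start those sound waves provide, the sketch below compares travel times. The speeds are illustrative textbook values, not figures from the Cardiff study: roughly 1,500 m/s for sound in seawater and roughly 200 m/s for a tsunami in the open ocean.

```python
# Sketch of the warning-time advantage of acoustic gravity waves over
# the tsunami itself. Both speeds are assumed reference values, not
# numbers taken from the study described above.

SOUND_SPEED = 1500.0   # m/s, approximate speed of sound in seawater
TSUNAMI_SPEED = 200.0  # m/s, representative open-ocean tsunami speed

def warning_margin(distance_km: float) -> float:
    """Extra minutes of warning if the acoustic signal is detected at
    `distance_km` from the source, versus waiting for the tsunami to
    cover the same distance."""
    d = distance_km * 1000.0
    t_sound = d / SOUND_SPEED
    t_tsunami = d / TSUNAMI_SPEED
    return (t_tsunami - t_sound) / 60.0

for km in (100, 500, 1000):
    print(f"{km:>5} km: ~{warning_margin(km):.0f} extra minutes")
```

Under these assumptions, an event 1,000 km away would give roughly 72 minutes of lead time over the wave itself, which illustrates why detecting the acoustic signal "within a few minutes" is worthwhile.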
In a new study published in the Journal of Fluid Mechanics, Cardiff University scientists show how the key characteristics of an earthquake - such as its location, duration, dimensions, orientation and speed - can be determined when the gravity waves are detected by a single hydrophone in the ocean.
The sound waves move through the deep ocean at the speed of sound and can travel thousands of metres below the surface.
Tsunamis are currently detected by floating buoys that are able to measure pressure changes in the ocean caused by tsunamis.
However, experts say the technology relies on a tsunami physically reaching the buoys.
The current technology also requires the distribution of a huge number of expensive buoys in oceans all around the world.
"Though we can currently measure earthquakes using seismic sensors, these do not tell us if tsunamis are likely to follow," Dr Kadri continued.
"Using sound signals in the water, we can identify the characteristics of the earthquake fault, from which we can then calculate the characteristics of a tsunami. Since our solution is analytical, everything can be calculated in near real-time.
"Our aim is to be able to set off a tsunami alarm within a few minutes from recording the sound signals in a hydrophone station." |
Use this popular, simple exercise to introduce some levity into the classroom and as an opportunity to demonstrate the importance of careful observations.
Here’s STAO’s own Otto Wevers using his fascination with cycling to conduct an experiment to test the buoyancy of his fat tire bike.
Our science guy Steve Spangler brought in extra help to set up today’s experiments with soda, bowling balls and lots of water.
This demonstration reviews the concept of density. It examines why certain objects float or sink in water and highlights some interesting information about cola versus diet cola soft drinks.
Click here to download the complete demo
This demo is part of the STAO demo collection. Click here to check out all of them.
This activity helps to illustrate the particle theory and how it applies to solutions. Some earlier work using the particle theory is a prerequisite. Dissolving salt into water increases the water’s density, allowing more dense materials to float in the salt water which would have sunk in unsalted water.
A stratified science project from Science Buddies
By Science Buddies on May 26
You can stack books and stack blocks, but did you know you can also stack liquids? See if you can build your own liquid rainbow, in a single cup! Credit: George Retseck
Concepts: physics, chemistry, density, liquids
You probably know that when solid objects are placed in liquid, they can sink or float. But did you know that liquids can also sink or float? In fact, it is possible to stack different layers of liquids on top of one another. The key is that all the different layers must have different densities. You can stack them by picking several liquids with a range of densities or by varying the density of one liquid by adding chemicals such as sugar or salt to it. If you choose colored liquids or add food coloring to each layer, you can even create a whole rainbow of colors in one single glass! Want to see for yourself? In this science activity you will stack several liquids—one by one—and create a colorful density column!
Source: Stacking Liquids – Scientific American
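The stacking rule behind the activity can be sketched in a few lines: pour the densest liquid first and work upward. The density figures below are approximate reference values (not measurements from the activity itself):

```python
# Sketch of the density-column principle: denser liquids sink, so the
# pouring order from bottom to top is simply densest first. Densities
# are approximate reference values in g/mL, assumed for illustration.

liquids = {
    "honey": 1.42,
    "corn syrup": 1.33,
    "dish soap": 1.06,
    "water": 1.00,
    "vegetable oil": 0.92,
    "rubbing alcohol": 0.79,
}

# Bottom of the glass -> top: sort by density, densest first.
stack = sorted(liquids, key=liquids.get, reverse=True)
for layer in stack:
    print(f"{layer:15s} {liquids[layer]:.2f} g/mL")
```

Any pair of adjacent layers stays separated only as long as their densities differ, which is why adding sugar or salt to water lets you create extra layers from a single liquid.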
Hot water rises because its rapidly moving molecules push farther apart, causing the water to expand. Hot water is therefore less dense: the same mass of water takes up a larger volume. Hot water rises above the colder, denser water, and the resulting movement is called convection. The displacement of warmer fluid by a mass of colder fluid flowing beneath it is the same phenomenon seen in weather patterns, where cold, higher-pressure air pushes under warmer air and lifts it. Hot air balloons take advantage of the same effect.
Southern Brown Bandicoots are medium-sized ground-dwelling native marsupials, who have a long pointed snout, small round ears, a large rump and short, thick tail. These animals are ecosystem engineers who contribute to improvements in soil quality but need dense or thick shrubby understorey plantings to provide shelter.
The South Coast Environment Centre and the Normanville Natural Resource Centre in conjunction with the Natural Resources Management Board are coordinating a volunteer group called the Bandicoot Recovery Action Group (BRAG) on the Fleurieu.
Southern Brown Bandicoots are nationally listed as an endangered species under the Environment Protection and Biodiversity Conservation Act 1999. The recovery of the Southern Brown Bandicoot in the Mount Lofty Ranges relies on the reduction of threatening processes, the enhancement and protection of suitable habitat and the establishment of connections between patches of remnant vegetation.
On a first field trip, the group set up cameras at Deep Creek Conservation Park. This initial project is to help the Department for Environment & Water's Fire Management team, who are keen to improve their understanding of the distribution of the Southern Brown Bandicoot in parks where prescribed burns are planned. While the primary objective of prescribed burning is typically to reduce the risk that bushfires pose to human life and assets, prescribed burns can also play a really important role in helping to manage habitat for native flora and fauna, and reduce the threat that bushfires present to them.
Improving knowledge about where bandicoots occur helps ensure that only small proportions of bandicoot habitat are burnt at any one time. It can also be used to help us better understand the habitat requirements of bandicoots and how quickly this habitat responds to fire. The Fire Ecology team and the Region's Threatened Fauna Ecologist have undertaken some monitoring in the Adelaide Hills to answer these questions but more survey work is needed to ensure we are managing bandicoot habitat in a way that helps this species prosper.
While bandicoots are known to occur in Deep Creek Conservation Park, it's not known how widespread they are.
Bandicoot Photos by Kirsten Abley
First training day |
Why are Leaves Green?
Since color is a perception, leaves are green simply because that's the way we see them!
But why do we see them that way? That's because of the stuff leaves are made of and how it plays with the light from the sun. One sort of stuff in the leaves is called chlorophyll. Chlorophyll is the main part of the leaf that makes them so good for nature. The chlorophyll takes carbon dioxide (CO2) from the air and uses water and energy from sunlight to create sugar. That sugar is food for the tree and for animals (like us) that eat the leaves or make yummy things like maple syrup from the tree sap.
The light energy that the chlorophyll uses is mainly the kind that would look red or blue to us. That leaves the green light to bounce off (or pass through) the leaves and reach our eyes. And when that light reaches our eyes, we see the leaves as green.
Ever wonder ... Why do leaves change color in the autumn?
Updated: Apr. 11, 2011 |
Study Will Explore How Solar Storms Affect Earth’s Atmosphere
By Edwin L. Aguirre
We all rely on local weather forecasts to plan our travels and outdoor activities, or even to decide whether to water the lawn.
But researchers like Prof. Paul Song in the Department of Physics & Applied Physics are also interested in “space weather,” the constantly changing environmental conditions in interplanetary space, especially between the sun’s atmosphere and earth’s outer atmosphere. While meteorologists deal with clouds, air pressure, wind, precipitation and the jet stream, space-weather scientists concentrate on changes in the ambient plasma (ionized gas), solar wind, magnetic fields, radiation and other matter in space.
“Predicting space weather is the next frontier in weather forecasting,” says Song, who directs UMass Lowell’s Center for Atmospheric Research (CAR).
A Challenging Problem
“Inclement space weather has increasingly become a threat to modern space technologies and services, such as GPS, shortwave radio and satellite communications,” says Song.
While large space-weather events, known as space storms or solar storms, can trigger spectacular displays of auroras, the high-energy particles produced by these storms can harm the health of spacewalking astronauts as well as airline passengers and crews flying at high altitudes along polar routes. Geomagnetic storms can also create a surge in electrical current, overloading electric power grids and damaging transmission lines and oil pipelines, notes Song.
Solar storms start out with coronal mass ejections, or CMEs, which are enormous bubbles of plasma flowing out from the sun.
“CMEs travel through interplanetary space and eventually hit Earth, potentially affecting our lives and those of orbiting satellites,” says Song. “The effects we feel on the Earth depend on how the interactions take place between a CME and Earth’s magnetosphere, a region well above the atmosphere where most satellites fly, and the ionosphere, which is roughly the top of the atmosphere.”
Song, together with Distinguished Research Prof. Vytenis Vasyliunas and Asst. Research Prof. Jiannan Tu of the CAR, recently received a three-year grant from NASA worth more than $356,000 to study these interactions.
Song says the processes taking place between the magnetosphere and the ionosphere are particularly complicated since they involve different types of matter.
“For example, Earth’s atmosphere, known as the thermosphere in this region, consists of many different species of neutral atoms and molecules, while the interplanetary and magnetospheric particles are electrically charged. Auroras are a result of neutral atoms colliding with charged particles,” explains Song.
Because the interactions involved in the magnetosphere-ionosphere coupling are very complicated, current prediction models have to substantially simplify the mathematical and physical descriptions, based on a so-called steady-state description, he says.
“A good analogy is using a series of photos to describe a martial arts show and each photo has to be taken when the performer is posing on the floor and not moving,” says Song. “We are developing the most advanced theoretical models and numerical algorithms to describe the coupling. In the analogy, we are developing a video camera.” |
Fluorescence in situ hybridization (FISH) is a test that "maps" the genetic material in human cells, including specific genes or portions of genes.
Because a FISH test can detect genetic abnormalities associated with cancer, it's useful for diagnosing some types of the disease. When the type of cancer has previously been diagnosed, a FISH test also may provide additional information to help predict a patient's outcome and whether he or she is likely to respond to chemotherapy drugs.
In breast cancer patients, for example, a FISH test on breast cancer tissue removed during a biopsy can show whether the cells have extra copies of the HER2 gene. Cells with extra copies of the gene have more HER2 receptors, which receive signals that stimulate the growth of breast cancer cells. So patients with extra copies of the gene are more likely to respond to treatment with Herceptin (trastuzumab), a drug that blocks the ability of HER2 receptors to receive growth signals.
Because FISH testing is expensive and not widely available, it's not as commonly used as another breast cancer test: immunohistochemistry (IHC).
How FISH Works
During a FISH test using a sample of the patient's tissue, special colored dyes are attached to specific parts of certain chromosomes in order to visualize and count them under a fluorescent microscope and to detect cancer-promoting abnormalities.
Abnormalities found in cancer cells include:
- Translocation. Part of one chromosome has broken off and relocated itself onto another chromosome.
- Inversion. Part of a chromosome is in reverse order, although it is still attached to the correct chromosome.
- Deletion. Part of a chromosome is missing.
- Duplication. Part of a chromosome has been copied and the cell contains too many copies.
Compared to standard cytogenetic (cell gene) tests, one advantage of FISH is that it can identify genetic changes that are too small to be seen under a microscope. Another advantage is that FISH doesn't have to be performed on cells that are actively dividing. Because other tests cannot be performed until cancer cells have been growing in lab dishes for about two weeks, the process usually takes about three weeks. FISH results are usually available within a few days.
Examples of FISH Tests for Cancer
Although the FISH test is often used to analyze genetic abnormalities in breast cancer, it also can provide important information about many other types of cancer.
In the diagnosis of bladder cancer, for example, FISH testing of urinary cells may be more reliable than a standard test that looks for abnormal cells. In addition, FISH may detect bladder cancer recurrences three to six months earlier.
FISH also can identify chromosomal abnormalities in leukemias, including chronic lymphocytic leukemia (CLL) cells, some of which are associated with aggressive forms of the disease. Patients with more aggressive forms of CLL may need urgent treatment, while those with less aggressive forms may only require observation. |
Constantine II (king of Greece)
Constantine II, 1940–, king of the Hellenes; also known as Constantine XIII. He was appointed regent in 1964 and succeeded to the throne the same year on the death of his father, King Paul. In 1967, after a military junta had seized political power in Greece, Constantine made an abortive attempt to overthrow the generals. When the coup failed, he and his family fled into exile. The junta declared him formally deposed in June, 1973, and established a republic. In Dec., 1974, after the overthrow of the junta, the Greek voters chose not to restore the monarchy. Constantine was stripped of his Greek citizenship in 1994. In 2002 the European Court of Human Rights ruled that Greece had to compensate the former king for property nationalized after the royal family fled the country. |
12. State and Local Governments
Large-scale public works projects require federal and state governments to cooperate and compromise, especially when deciding who pays for what. The construction of the Interstate Highway System was a crowning achievement of this sometimes strained partnership.
Governors. Mayors. State Representatives. City Council members. Sheriffs.
Beneath the layer of the national government lies a complex web of state and local officials and institutions. The nation's founders' concern over tyranny transcended their separation of power among the three branches of government. Power is also divided by level, with each layer performing its designated responsibility. States and communities would even have the freedom to design their own institutions and create their own offices. This creates a multitude of "laboratories" where government leaders at any level could see which systems were successful and which were problematic.
This well-built Governor looks like he could be a wrestler. Wait, he was a wrestler: Jesse Ventura of Minnesota broke onto the local and national political scene by becoming the first Reform Party candidate to win the governorship of a state.
The states had constitutions years before the United States Constitution was even written. Since the Declaration of Independence, states have written a total of about 150 constitutions, with several states writing new ones frequently. State constitutions tend to be quite a bit longer than the national one — an average of four times as long — so they also are more specific. As a result, they often are heavily amended and rather easily tossed out, at least in some states. State constitutions determine the structure, role, and financing of state and local levels of government.
Each of the 50 states has its own array of public officials, with no two states being exactly alike. But all of them have Governors, legislatures, and courts:
- Governors. In every state the Governor is chosen by popular vote, and most serve four-year terms. More than half of the states impose term limits, restricting the number of times an individual may be elected. In most states, several other top officials are elected, including a Lieutenant Governor, a Secretary of State, and an Attorney General. In general, Governors have the authority to issue executive orders, prepare the state budget, make appointments, veto legislation, and grant pardons to criminals. In states that tend to concentrate powers in the hands of a few, Governors have broader authority and more powers. In other states, power is spread out among many elected officials, or is strongly checked by the legislature.
- State legislatures. Every state has a bicameral, or two house, legislature, except for Nebraska, which has a unicameral body. State legislatures vary in size from 20 to 400, and are not necessarily in proportion to the size of the state's population. For example, New Hampshire has 400 members in its lower house. All states have guidelines for age, residency, and compensation, and most legislatures meet in annual sessions. Just as in the national legislature, many state legislators serve for several terms, creating a large body of professional politicians in the United States.
- State courts. Each state has its own court system, and most have a state Supreme Court. State judges have the final voice in the vast majority of cases in the United States since more come under state rather than federal jurisdiction. Most states have two types of courts — trial courts that handle issues from traffic fines to divorce settlements to murder, and appeals courts that hear cases appealed from lower courts.
Types of Local Governments
Local governments are generally organized into four types:
The organization of state and local governments varies widely across the United States. They share certain common features, but no two are organized exactly alike. Regardless of their design, state and local governments often have a far greater impact on people's lives than the federal government. Marriage, birth, and death certificates. School policies. Driving age and qualifications for licensure. Laws regarding theft, rape, and murder, as well as the primary responsibility of protecting citizens from criminals. These critical issues and many others are not decided by distant Washington authorities, but by state and local officials.
This is a three page document. The first page is the lesson plan and the accompanying two pages contain the graphic organizers for the lesson. This lesson can and should be repeated every month or so, or at the very least every time you read a new story or book with the students.
The lesson plan includes aim, do-now, instructional objective, mini-lesson, definitions, questions for class discussion, activity to ensure understanding, and summation.
The graphic organizer / handout I find to be very useful because it goes the extra mile. Yes, it requires the students to use evidence from the text but it also requires them to think out of the box (literally since there are boxes on the organizer) and try to figure out how the plot of the text impacts the individual characters. |
What is the difference between weathering and erosion?
Weathering is the physical or chemical breakdown of rock. Erosion is the removal of weathered pieces of rock to another place.
What is the difference between mechanical and chemical weathering?
Mechanical weathering is the physical breakdown of rock into smaller pieces. Chemical weathering is the breakdown of rock by chemical processes.
How do water, air, and organisms cause chemical weathering?
Water, air, and chemicals released by organisms cause chemical weathering of rocks when they dissolve the minerals in a rock. They can also cause chemical weathering by reacting chemically with the minerals in the rock to form new substances.
How do mechanical and chemical weathering work together to speed up the weathering process?
Mechanical weathering breaks rocks down into smaller pieces. This gives the rock a larger surface area for chemical reactions to take place. Chemical weathering weakens rock, making it easier for it to be broken down by mechanical weathering.
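The surface-area effect described above is easy to quantify. A minimal sketch, assuming an idealized cube of rock split into equal smaller cubes:

```python
# Sketch of why mechanical weathering speeds up chemical weathering:
# splitting a cube into n**3 smaller cubes multiplies the total
# exposed surface area by n, giving chemical reactions more rock face
# to attack. The 1 m cube is an illustrative assumption.

def total_surface_area(side: float, n: int) -> float:
    """Total surface area when a cube of side `side` is split into
    n**3 equal smaller cubes (n pieces along each edge)."""
    small_side = side / n
    return (n ** 3) * 6 * small_side ** 2

original = total_surface_area(1.0, 1)   # one 1 m cube: 6 m^2
split = total_surface_area(1.0, 10)     # 1000 cubes of 10 cm: 60 m^2
print(split / original)
```

Splitting a 1 m cube into 10 cm cubes multiplies the exposed surface tenfold, so chemical weathering has ten times as much rock surface to work on.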
the process in which wind, water, ice, or other things move pieces of rock and soil over Earth's surface (related word: erode)
to add a solid material to a liquid in such a way that its particles completely disperse into the liquid, usually becoming invisible within the liquid (related word: dissolution)
a chemical reaction in which oxygen combines with another substance; for example, when metal rusts
occurs when chemical reactions cause permanent changes to rocks and other physical features
a cavity in the ground in limestone bedrock, caused by water erosion and providing a route for surface water to disappear underground
Nearly 100 years ago, electrode gaps (rod, sphere, or pipe) were used to limit overvoltages on equipment (Sakshaug, 1991). Some of these systems, particularly pipe gaps, may still be in service today. However, the characteristic of gap sparkover voltage vs. surge front time does not match up well with the strength vs. front characteristics of most insulation; that is, it is difficult to coordinate.
The next evolutionary step was to add a resistive element in series with the gap, in order to limit the power follow current after an arrester discharge operation. The current limiting would hopefully allow the arrester to clear this power follow current, instead of relying on a nearby breaker or fuse. At the same time, the resistor voltage during a discharge must be low enough that it does not allow an excessive voltage to appear on the protected equipment.
These competing requirements led to the use of expensive and complicated nonlinear resistive elements, some involving both solid and liquid materials with high maintenance burdens.
Beginning around 1930, silicon carbide (SiC) was used for the nonlinear resistive elements, leading to much better protective characteristics. Because the SiC would conduct significant current at nominal voltage, it was necessary to provide a sparkover gap that prevents conduction at nominal voltage. After an arrester discharge, these gaps must reseal against the power follow current, otherwise, the arrester would fail thermally. In the mid 1950s, active gaps were developed for SiC arresters.
These active gaps contain auxiliary elements that would:
- Pre-ionize the sparkover gap to obtain better surge protective levels and
- Elongate the power follow arc, and move its attachment points, to obtain better interruption performance.
SiC arresters were successfully applied on transmission systems up to 345 kV, but some limitations appeared with regard to switching surge protection, energy discharge capability, and pressure relief capability.
Having both gaps and SiC blocks, the arrester height increased to the point where it was difficult to vent the pressure built up during a fault, which limited the arrester’s pressure relief rating.
Due to their discharge characteristics vs. frequency or front time, the SiC surge arresters were optimized for lightning. They were less effective for steeper-fronted surges and slow-fronted switching surges.
In the mid 1970s, metal oxide surge arresters were developed into commercial products (Sakshaug, et al., 1977). The metal oxide blocks are much more nonlinear than silicon carbide, so that they conduct only a few milliamperes at nominal AC voltage. It eventually became possible to dispense with gaps completely, although earlier designs made some use of gaps (see Fig. 1).
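The difference in nonlinearity can be illustrated with the usual power-law varistor model, I = I_ref·(V/V_ref)^α. The exponents below are illustrative round numbers (SiC exponents are commonly quoted in the single digits, metal oxide in the tens), not manufacturer data:

```python
def varistor_current(v, v_ref, i_ref, alpha):
    """Simple power-law V-I model for a nonlinear resistive block:
    I = I_ref * (V / V_ref) ** alpha."""
    return i_ref * (v / v_ref) ** alpha

# At half the reference voltage (roughly the nominal-voltage regime):
i_sic = varistor_current(0.5, 1.0, 1.0, 4)    # ~6e-2 of the reference current
i_mov = varistor_current(0.5, 1.0, 1.0, 40)   # ~9e-13 of the reference current
# The metal oxide block draws a vanishingly small current at nominal
# voltage, which is why it can dispense with a series gap.
```

With the steeper exponent, the metal oxide current at nominal voltage is more than ten orders of magnitude below the silicon carbide current in this toy model.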
Metal oxide surge arresters have several major advantages over the earlier silicon carbide arresters:
- Active gaps are not necessary, leading to improved reliability
- Metal oxide can discharge much more energy per unit volume than silicon carbide
- Metal oxide provides better protection across the range of surge wavefronts than silicon carbide, and in fact, protects effectively against switching surges
- The decrease in arrester height, caused by eliminating sparkover gaps, leads to higher pressure relief ratings
Virtually all new applications will use metal oxide surge arresters. Metal oxide has enabled some new applications, like series capacitor protection and overhead line switching surge control that were not possible with silicon carbide.
However, many silicon carbide arresters are still in service. Some investigators have noted high silicon carbide arrester failure rates, due to moisture ingress, after several years of service on medium-voltage distribution systems. This experience does not necessarily apply to surge arresters in substations. If such problems arise, it would make sense to systematically replace silicon carbide arresters on a system. Otherwise, assuming that the original application was proper, the older silicon carbide arresters could remain in service.
Figure 1 shows the general use of gaps in surge arresters. The gapped design (1a) applies to silicon carbide, whereas the gapless design (1d) applies to the latest generation of metal oxide. One manufacturer used the shunt gap (1b) in early metal oxide arresters. At steady state, both nonlinear elements would support the nominal voltage, somewhat reducing the current. During a surge discharge, the shunt gap would sparkover to bypass the smaller section of metal oxide, thereby reducing the discharge voltage and providing somewhat better protection. Another manufacturer used the series gap with capacitive grading (1c) in early metal oxide arresters. At steady state, this decreases the voltage on the metal oxide.
During a surge discharge, the gap sparks over ‘‘immediately’’ due to the capacitive grading. The latest generations of metal oxide do not need these gaps, although there is some consideration for using gaps to achieve specific goals (e.g., coordinating arresters, withstanding temporary overvoltages). The surge arrester must be installed on something, such as a transformer tank or a pedestal. It must also be connected to the protected system, typically through a wire or lead. Later, it will be shown that these connections have important effects on the overall protection, especially for steep surges.
The pedestal and lead, both length and location, must be considered as part of the overall arrester installation. On distribution systems, a ground lead disconnector is often used with the surge arrester. If the arrester fails and then conducts current on a steady-state basis, the disconnector will detonate and disconnect the base of the arrester from ground. This should happen in approximately 1 sec, or faster.
The arrester may then remain connected to the system until maintenance personnel have a chance to replace it. No breaker or fuse need operate to isolate the failed arrester; if the arrester is the only thing that failed, no customers need to lose their electric service. Of course, the arrester is not providing any surge protection during this period with its ground lead disconnected. There would be a clear visual indication that the ground lead has been disconnected; it will be ‘‘hanging down’’ below the arrester. Regular visual inspections are necessary to maintain surge protection whenever ground-lead disconnectors are used.
Many surge arresters in substations or industrial facilities have been installed with ‘‘surge counters.’’
These are accessories to be installed in the surge arrester’s ground-lead connection. Two functions may be provided:
- A steady-state current meter, calibrated in mA. If this current increases over time, it may indicate thermal damage to the surge arrester. However, the presence of harmonics or external leakage currents would complicate the assessment.
- A counter indicates the number of surge current discharges above a certain threshold, which may depend on frequency or front time. Even if the count is accurate, it does not mean that the discharge voltage reached any particular level during those events.
To use surge counters effectively, it is important to track the readings on a regular basis, beginning at the time of commissioning.
SOURCE: Surge Arresters – Thomas E. McDermott
Record-breaking Detector May Aid Nuclear Inspections
For Immediate Release: March 14, 2006
Boulder, Colo.—Scientists at the Commerce Department’s National Institute of Standards and Technology (NIST) have designed and demonstrated the world’s most accurate gamma ray detector, which is expected to be useful eventually in verifying inventories of nuclear materials and detecting radioactive contamination in the environment.
The tiny prototype detector, described today at the American Physical Society national meeting in Baltimore, can pinpoint gamma ray emission signatures of specific atoms with 10 times the precision of the best conventional sensors used to examine stockpiles of nuclear materials. The NIST tests, performed with different forms of plutonium at Los Alamos National Laboratory,* also show the prototype greatly clarifies the complex X-ray and gamma-ray emissions profile of plutonium.
Emissions from radioactive materials such as uranium or plutonium provide unique signatures that, if accurately measured, can indicate the age and enrichment of the material and sometimes its intended purpose or origin.
The 1-square-millimeter (mm) prototype collects only a small amount of radiation, but NIST and Los Alamos researchers are collaborating to make a 100-sensor array that could be deployed in the field, perhaps mounted on a cart or in a vehicle.
“The system isn't planned as a primary detection tool,” says NIST physicist Joel Ullom. “Rather, it is intended for detailed analysis of material flagged by other detectors that have larger collection areas but less measurement accuracy.” An array could be used by inspectors to determine, for example, whether plutonium is of a dangerous variety, whether nuclear fuel was made for energy reactors or weapons, or whether what appears to be radium found naturally in the environment is actually explosive uranium.
“People at Los Alamos are very excited about this work,” says Michael Rabin, a former NIST postdoc who now leads a collaborating team at Los Alamos. The Los Alamos National Laboratory operates and improves the capability to handle nuclear materials and sends scientists to participate in United Nations nuclear inspection teams.
An array of the new sensors might give inspectors new capabilities, such as enabling them to determine the plutonium content of spent reactor fuel without handling the fuel or receiving reliable information from the reactor's operators. Plutonium content can indicate whether a reactor is being used to produce weapons or electrical power.
The gamma ray detector is a variation on superconducting “transition edge” sensor technology pioneered at NIST laboratories in Boulder, Colo., for analysis of X-rays (for astronomy and semiconductor analysis applications) and infrared light (for astronomy and quantum communications). The cryogenic sensors absorb individual photons (the smallest particles of light) and measure the energy based on the resulting rise in temperature. The temperature is measured with a bilayer of normal metal (copper) and superconducting metal (molybdenum) that changes its resistance to electricity in response to the heat from the radiation.
To stop gamma rays, which have higher energy than infrared light and X-rays, the sensors need to be topped with an absorbent material. A layer of tin, 0.25 mm thick, is glued on top of each sensor to stop the gamma rays. The radiation is converted to heat, or vibrations in the lattice of tin atoms, and the heat drains into the sensor, where the temperature change is measured. NIST researchers have developed microfabrication techniques to attach absorbers across an array.
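The microcalorimeter principle reduces to one relation: the temperature rise is the deposited energy divided by the absorber's heat capacity, ΔT = E/C. The heat capacity used below is an assumed illustrative value, not a figure from NIST:

```python
def temperature_rise_kelvin(photon_energy_kev, heat_capacity_j_per_k):
    """Temperature jump of a microcalorimeter absorber after a single
    photon deposit: dT = E / C."""
    energy_joules = photon_energy_kev * 1.602176634e-16  # 1 keV in joules
    return energy_joules / heat_capacity_j_per_k

# A 100 keV gamma ray hitting an absorber with an assumed C of 1e-12 J/K:
dt = temperature_rise_kelvin(100.0, 1e-12)  # ~0.016 K
```

At cryogenic temperatures the heat capacity of a tiny tin absorber is small enough that even a millikelvin-scale jump is well within the resolution of a transition-edge sensor.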
Researchers expect the 100-detector array to measure 1 square centimeter in size. The NIST team already has developed multiplexed readout systems to measure the signals from large sensor arrays, and recent advances in commercial refrigeration technology are expected to allow pushbutton operation of the system without liquid cryogens.
The ongoing research is funded by NIST and by the U.S. Department of Energy.
* J.N. Ullom, B.L. Zink, J.A. Beall, W.B. Doriese, W.D. Duncan, L. Ferreira, G.C. Hilton, K.D. Irwin, C.D. Reintsema, L.R. Vale, M.W. Rabin, A. Hoover, C.R. Rudy, M.K. Smith, D.M. Tournear, and D.T. Vo. 2005. Development of large arrays of microcalorimeters for precision gamma-ray spectroscopy. Published in The Conference Record of the IEEE Nuclear Science Symposium, Puerto Rico, Oct. 23-29, 2005.
The IP header checksum is the 16 bit complement of the 1's complement sum of the header, where the header is treated as a stream of 16 bit "network shorts". That is, divide the header (consisting of an even number of bytes) into consecutive 16 bit network shorts. Add them up, but using a 1's complement sum (there are excellent reasons for selecting 1's complement, but they don't belong in this node). Flip all bits in the result, and that's your checksum. The same calculation, but on different fields, is also used as a second checksum on some protocols. The most notable of these are TCP and UDP.
In 1's complement, there are 2 representations for zero: 0000000000000000 and 1111111111111111. Note that flipping the bits of one gives you the other. A header checksum of "0" (allowed for some protocols, e.g. UDP) denotes that the checksum was not calculated. Thus, implementations which do calculate a checksum make sure to give a result of 0xffff rather than 0x0000 when the checksum is actually zero.
Something to note: A header checksum of "0" is not handled correctly on some implementations. In particular, NATting may fail on some devices for packets with uncalculated checksums. For this reason, it's probably wise to avoid it. Unfortunately (for them), users don't exactly get much choice in the matter.
The header checksum field is part of the IP header. When calculating what checksum to put on a header, the field is considered to be filled with a "0"; this is exactly equivalent to skipping it, but probably runs faster. When verifying the checksum, you can again take the complement of the 1's complement sum of the header -- including the filled-in value of the header checksum. This comes out 0 if the checksum is correct.
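The whole calculation fits in a few lines. A sketch in Python (the sample header used in the comment is illustrative, not from the text):

```python
def ip_checksum(header: bytes) -> int:
    """One's-complement checksum: sum the header as big-endian 16-bit
    words, folding carries back in, then flip all bits of the result."""
    assert len(header) % 2 == 0
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]   # one "network short"
        total = (total & 0xFFFF) + (total >> 16)    # 1's-complement carry fold
    return ~total & 0xFFFF

def checksum_ok(header: bytes) -> bool:
    """Verification: checksumming the header *including* the filled-in
    checksum field must come out 0."""
    return ip_checksum(header) == 0
```

Computing the checksum over a header whose checksum field is zeroed, writing the 16-bit result into bytes 10-11 (big-endian), and then re-running the same function over the full header should yield 0.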
The nature of the header checksum means it cannot detect any re-ordering of the shorts in the header. But that's not a common failure, so there's little point in protecting against it. More common failures -- bit flipping, for instance -- can sometimes be detected by the header checksum. Also, the simple nature of the checksum means it provides no cryptographic defence whatsoever -- use a MAC for that.
In 2001, researchers invented/discovered a novel use of the header checksum for "parasitic computation": you can embed a problem in a header, such that only the correct solution of the problem is the correct checksum. Now fill in all possible values in the checksum, and send these packets out to web servers. A packet with a wrong checksum is dropped; a packet carrying the right answer in its checksum is (or rather, since IP is unreliable, might be) answered. Thus, the remote server(s) perform a computation on your behalf.
It is questionable whether the amount of CPU power "stolen" from the server in this manner is greater than that used by the "thief" to generate and test all packets. Thus, such an attack is (as yet?) imaginary rather than practicable. |
A gallon is a unit of capacity, used mostly to measure liquids and sometimes gases. It is used mainly in the British imperial system and the US customary system. The gallon has three distinct definitions, described below:
Imperial Gallon: The imperial gallon is widely used in the UK and other Commonwealth countries, where it is defined as 4.54609 liters. An imperial gallon of water weighs approximately 10 pounds. It was long the standard measure of volume in the U.K.
U.S. Liquid Gallon: The imperial gallon is larger than the U.S. liquid gallon, which measures 231 cubic inches, or about 3.785 liters. Liquids expand and contract with temperature, so for trade purposes it is important to know how much space a gallon occupies at a specified temperature. A U.S. liquid gallon of water weighs approximately 8.34 pounds at 62 °F.
U.S. Dry Gallon: The dry gallon measures about 4.405 liters. It is rarely used in commerce and survives mainly in agricultural measurement (berries, grapes, etc.).
The U.S. gallon is used for measurement by the United States and some Latin American countries. Many people still use the imperial gallon to gauge volume, but it is no longer an official unit. Most European countries do not recognize the gallon and measure volume only in liters. Countries bordering the U.S., such as Canada, retain some familiarity with gallons.
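The three definitions convert cleanly through liters. A small sketch (the liter figures are the exact legal definitions; the function and unit names are my own):

```python
LITERS_PER = {
    "imperial_gallon": 4.54609,        # exact UK definition
    "us_liquid_gallon": 3.785411784,   # exact: 231 cubic inches
    "us_dry_gallon": 4.40488377086,    # exact: 1/8 of a Winchester bushel
    "liter": 1.0,
}

def convert_volume(amount, from_unit, to_unit):
    """Convert between gallon definitions (or liters) via liters."""
    return amount * LITERS_PER[from_unit] / LITERS_PER[to_unit]

# An imperial gallon is about 20% larger than a US liquid gallon:
ratio = convert_volume(1, "imperial_gallon", "us_liquid_gallon")  # ~1.2009
```

This makes the "heavier" comparison above concrete: the same liquid fills an imperial gallon with roughly a fifth more volume, and therefore a fifth more weight.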
Let's discuss the different liquid and gas measurements of a gallon:
How much does a gallon of water weigh
At 62 °F a U.S. liquid gallon of water weighs around 8.3 pounds. The weight of a gallon fluctuates with temperature because the density of water changes with it. Temperature tracks molecular energy: when the energy is higher, the temperature is higher. In water, higher energy makes the molecules vibrate faster; each molecule pushes against its neighbors, and all of them end up claiming more personal space. This is how the density of water rises and falls as the temperature varies.
Picture a room with a circle drawn on the floor holding 50 people, and treat the circle as the gallon. Now add some energy to the room: people start moving about, some drifting out of the circle toward the corners. Eventually only about 35 people remain inside the circle, so the space taken up per person has increased. This example shows how temperature controls the density of a liquid: at 50 degrees Fahrenheit the density of water is 8.3450 lb/gal, while at 200 degrees Fahrenheit it is 8.0351 lb/gal.
How much does a gallon of milk weigh
Milk is an organic product, so we can only quote average values rather than exact ones. A gallon of milk weighs slightly more than a gallon of water, roughly 3% more. Since milk is itself about 87% water, the difference is small. A gallon of 2% milk weighs about 8.4 pounds, and a gallon of whole milk about 8.6 pounds. As with water, milk, being 80-90 percent water, weighs differently at different temperatures; the molecular behavior of its density is very similar to that of water and varies with the fat percentage and nutrient content of the milk.
How much does a gallon of gas weigh
Water is denser than gasoline. The density of gasoline varies not only with temperature but also with the season and weather: gasoline is not a single pure substance like water but a blend of chemicals, and winter gasoline differs from summer gasoline. By mass, gasoline is roughly 87 percent carbon and 13 percent hydrogen. A gallon of gasoline weighs approximately 6.3 pounds, though this varies with the density of the blend; in metric terms, its mass is between 2.8 and 2.9 kilograms depending on density. When gasoline burns, each carbon atom (atomic weight 12) combines with two oxygen atoms (atomic weight 16 each), forming carbon dioxide, which is nearly four times heavier than the carbon alone.
How many pounds are in a gallon, as discussed above, depends on many factors tied to the nature of the substance being measured. The density of the molecules in the substance plays a vital role, and that density shifts with the movement of the molecules at different temperatures. The gallon is no longer an official unit of liquid measure in most of the world, having been replaced by tools such as the liter, but in the U.S. and its neighboring countries it is still the primary tool for measuring volume.
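All of the pounds-per-gallon figures above come from the same formula: weight = density × volume. A sketch using rough densities (the density values are approximate illustrations, not measured data):

```python
US_GALLON_L = 3.785411784   # liters in a US liquid gallon (exact)
KG_PER_LB = 0.45359237      # kilograms per pound (exact definition)

def pounds_per_us_gallon(density_kg_per_l):
    """Weight in pounds of one US liquid gallon of a substance."""
    return density_kg_per_l * US_GALLON_L / KG_PER_LB

water = pounds_per_us_gallon(0.999)    # ~8.34 lb (cool water)
milk = pounds_per_us_gallon(1.03)      # ~8.6 lb (whole milk, approximate)
gasoline = pounds_per_us_gallon(0.75)  # ~6.3 lb (varies with blend)
```

Plugging in a substance's density at a given temperature reproduces the figures quoted in this article; change the density and the weight per gallon shifts with it.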
According to PubMed Health, a consumer health Web site produced by the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) at the National Institutes of Health (NIH), Graves disease is an autoimmune disorder that typically leads to hyperthyroidism, or overactivity of the thyroid gland.
Causes, Incidences, and Risk Factors
The thyroid gland is an important organ of the endocrine system. It is located in the front of the neck just below the voice box. This gland releases the hormones thyroxine (T4) and triiodothyronine (T3), which control body metabolism. Controlling metabolism is critical for regulating mood, weight, and mental and physical energy levels.
If the body makes too much thyroid hormone, the condition is called hyperthyroidism. (An underactive thyroid leads to hypothyroidism.)
Graves disease is the most common cause of hyperthyroidism. It results from an abnormal immune system response that causes the thyroid gland to produce too much thyroid hormone. Graves disease is most common in women over age 20. However, the disorder may occur at any age and may affect men as well.
- Breast enlargement in men (possible)
- Difficulty concentrating
- Double vision
- Eyeballs that stick out (exophthalmos)
- Eye irritation and tearing
- Frequent bowel movements
- Goiter (possible)
- Heat intolerance
- Increased appetite
- Increased sweating
- Menstrual irregularities in women
- Muscle weakness
- Rapid or irregular heartbeat (palpitations or arrhythmia)
- Restlessness and difficulty sleeping
- Shortness of breath with exertion
- Weight loss (rarely, weight gain)
Signs and Evidentiary Tests
Physical examination shows an increased heart rate. Examination of the neck may show that the thyroid gland is enlarged (goiter).
Other tests include:
- Blood tests to measure levels of TSH, T3, and free T4
- Radioactive iodine uptake
The purpose of treatment is to control the overactivity of the thyroid gland. Beta-blockers such as propranolol are often used to treat symptoms of rapid heart rate, sweating, and anxiety until the hyperthyroidism is controlled. Hyperthyroidism is treated with one or more of the following:
- Antithyroid medications
- Radioactive iodine
- Surgery
If you have radiation and surgery, you will need to take replacement thyroid hormones for the rest of your life, because these treatments destroy or remove the gland.
Some of the eye problems related to Graves disease usually improve when hyperthyroidism is treated with medications, radiation, or surgery. Radioactive iodine can sometimes make eye problems worse. Eye problems are worse in people who smoke, even after the hyperthyroidism is cured.
Sometimes prednisone (a steroid medication that suppresses the immune system) is needed to reduce eye irritation and swelling.
You may need to tape your eyes closed at night to prevent drying. Sunglasses and eyedrops may reduce eye irritation. Rarely, surgery or radiation therapy (different from radioactive iodine) may be needed to return the eyes to their normal position.
Expectations or Prognosis
Graves disease often responds well to treatment. However, thyroid surgery or radioactive iodine usually will cause hypothyroidism. Without getting the correct dose of thyroid hormone replacement, hypothyroidism can lead to:
- Mental and physical sluggishness
- Weight gain
Antithyroid medications can also have serious side effects.
Complications from surgery may include:
- Hoarseness from damage to the nerve leading to the voice box
- Low calcium levels from damage to the parathyroid glands (located near the thyroid gland)
- Scarring of the neck
- Eye problems (called Graves ophthalmopathy or exophthalmos)
Heart-related complications, including:
- Rapid heart rate
- Congestive heart failure (especially in the elderly)
- Atrial fibrillation
Thyroid crisis (thyrotoxic storm), a severe worsening of overactive thyroid gland symptoms
Increased risk for osteoporosis, if hyperthyroidism is present for a long time
Complications related to thyroid hormone replacement:
- If too little hormone is given, fatigue, weight gain, high cholesterol, depression, physical sluggishness, and other symptoms of hypothyroidism can occur
- If too much hormone is given, symptoms of hyperthyroidism will return
Calling your health care provider
Call your health care provider if you have symptoms of Graves disease. Also call if your eye problems or general symptoms get worse (or do not improve) with treatment.
Go to the emergency room or call the local emergency number (such as 911) if you have symptoms of hyperthyroidism with:
- Decrease in consciousness
- Rapid, irregular heartbeat
Graves Disease and Disability
If you have Graves Disease and it keeps you from being able to do any type of full time work, then you may be eligible for disability benefits. Mr. Ortiz has experience in handling claims involving Graves Disease. Call him at 850-308-7833 for a free case evaluation.
Three Preschool Crafts Teaching the Ten Commandments
Teaching students about the Ten Commandments can be tough. After all, how many preschoolers understand the words “covet,” “adultery,” or “idolatry”? These three fun crafts can give students something tangible to help them relate to these important laws.
This craft is perfect for students who are just learning their numbers. Cut out two “tablet” shapes by rounding the top edge of a piece of paper. Give each student two tablets and help them write the numbers 1-5 on one tablet and 6-10 on the second tablet. Then let them decorate their tablets with art supplies. Help them to tape the two tablets together when they’re finished, and let them act out Moses coming down from Mount Sinai carrying the tablets. They can even act out Moses throwing down the tablets after seeing the golden calf.
Draw a Commandment
Have students draw a picture of their “favorite” commandment of the Ten Commandments. These seem to be the easiest (and safest!) for students to draw:
- Commandment 2: Do not serve idols.
- Commandment 5: Honor your father and mother.
- Commandment 6: Do not kill.
- Commandment 8: Do not steal.
- Commandment 10: Do not covet (desire) what someone else has.
Scroll of Commandments
Write or type the commandments on a piece of paper and make enough copies for each student to have one. Show them how to tear the edges of the paper. Then make a strong mixture of tea and allow it to cool. Help your students dip their papers into the mixture and lay them out to dry. (Be careful – the mixture will stain!) When they dry, show students how to roll them into the shape of a scroll and use a paper clip to keep them rolled for a day or two. When you remove the paper clip, the paper will look like an old scroll, yellowed with age.
These preschool crafts will bring the story of the Ten Commandments down to your students’ levels. Try them out, and see how your students become engaged in learning about the commandments.
It covers the history of ocean exploration from 16th century Portuguese mariners to 20th century explorers and includes a cornucopia of rare and beautiful maps of the Pacific Ocean, in particular, of Hawaii, Tahiti, Australia and New Zealand, among other Pacific Islands and territories.
Early Mapping of the Pacific traces the exploration and charting of the great ocean through cartography, following the story from classical times through the turn of the twentieth century, telling the tales of seafarers who ventured eastward from Asia and were the Pacific’s greatest explorers.
The Pacific Islands and Their People
Mariners, Mapmakers and the Great Ocean
The Pacific Evolves after Magellan
In the Wake of the Solomon Islands
Earliest Mapping of Australia and New Zealand
The Age of Enlightenment
The Three Voyages of James Cook
The Discovery of Tahiti and Hawaii
Micronesia, the Elusive Isles
Surveyors, Whalers and Missionaries
What is a pellet mill?
- The definition
A pellet mill (also called a pellet machine or pellet press) is a type of mill or machine used to make pellets from powdered material. A pellet mill usually consists of a pellet die and rollers. The rotating rollers or the rotating die force the feedstock through the die holes to form pellets. (See the picture of how the pellets are produced by a pellet mill.)
- The function of a pellet mill
Even without any processing, biomass from plants can be used as an alternative fuel to fossil fuels. However, the low bulk density of natural biomass, i.e. 80-150 kg/m3 for herbaceous and 150-200 kg/m3 for woody biomass, limits its application in energy production. Densification of these materials is therefore needed to increase their bulk density, and pelletizing is one method of achieving it.
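The point of densification is easy to quantify: the volume needed to store or move a tonne of fuel scales inversely with bulk density. The pellet density used below is an assumed ballpark figure (commonly quoted around 600-700 kg/m3), not a value from this page:

```python
def volume_per_tonne_m3(bulk_density_kg_per_m3):
    """Transport/storage volume required for one metric tonne of material."""
    return 1000.0 / bulk_density_kg_per_m3

loose_herbaceous = volume_per_tonne_m3(100)  # ~10 m^3 per tonne
pellets = volume_per_tonne_m3(650)           # ~1.5 m^3 per tonne (assumed density)
# Pelletizing cuts the required volume by a factor of about 6-7 here.
```

That volume reduction is what makes pelletized biomass practical to handle, ship, and feed into boilers.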
Creator: Meghan Zigmond (@ZigZagsTech)
Suggested Grade Level: Elementary
Subject Area: Math
“In Meghan’s first grade class, students needed to demonstrate their understanding of numbers from 1-100 by showing the standard and expanded forms of these numbers as well as a picture to represent the concept. Her students also needed to demonstrate their understanding of the greater than and less than concept and symbols. To do this, each student chose two numbers to compare and then used an iPad plus the Number Pieces Basic app to build their numbers with base ten blocks. They then took a screen shot of each number and brought it into Popplet in order to illustrate the concepts of greater than/less than as well as their number sentences. Her students completed this project in small groups at a math station table. It provided a great opportunity for discussions and observations on students’ true understandings.
For details on the lesson plan, photos, and step-by-step directions, see Meghan’s Blog.
Extending the Project
Popplet Lite could be used in conjunction with Number Pieces Basic to create any number of graphic organizers related to place value and number sense. Another option would be to combine Popplet with a different math resource app such as GeoBoards to illustrate different concepts. This project can further be extended by adding an app like Explain Everything, Educreations, ScreenChomp, or Draw & Tell where students could then explain their thinking. For students who do not have access to iPads, a similar activity could be created using the web-based version of Popplet (popplet.com).
posted by Amanda.
An urn contains six red balls and five white balls. A sample of 2 balls is selected at random from the urn. What is the probability that at least one of the balls is red?
This is the probability of exactly one red ball or two red balls.
Exactly one = 6/11 * 5/10 + 5/11 * 6/10 = ?
Two = 6/11 * 5/10 = ?
Either-or probability is found by adding the individual probabilities. |
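To check the arithmetic, here is a short Python sketch using exact fractions (the variable names are just for illustration):

```python
from fractions import Fraction

# Urn: 6 red and 5 white balls; draw 2 without replacement.
one_red = Fraction(6, 11) * Fraction(5, 10) + Fraction(5, 11) * Fraction(6, 10)  # red-white + white-red
two_red = Fraction(6, 11) * Fraction(5, 10)                                      # red then red
at_least_one = one_red + two_red

# Cross-check with the complement rule: 1 - P(no red)
assert at_least_one == 1 - Fraction(5, 11) * Fraction(4, 10)
print(at_least_one)  # 9/11
```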
Let us consider four disks intersecting as in the figure. Each of the three shapes formed by the intersection
of three disks will be called a petal.
Write zero or one on each of the disks. Then write on each petal the remainder in the division by two of the sum
of integers on the disks that contain this petal. For example, if there were the integers 0, 1, 0, and 1 written
on the disks, then the integers written on the petals will be 0, 1, and 0 (the disks and petals are given in the
order shown in the figure).
This scheme is called a Hamming code. It has an interesting property: if your enemy secretly changes any
of the seven integers, you can determine uniquely which integer has been changed. Solve this problem and you will
know how this can be done.
The only line contains seven integers separated with a space, each of them being zero or one. The first four
integers are those written on the disks in the order shown in the figure. The following three integers are those
written on the petals in the order shown in the figure.
Output one line containing seven integers separated with a space. The integers must form a Hamming code. The set
of integers may differ from the input set by one integer at most. It is guaranteed that either the input set is
a Hamming code or a Hamming code can be obtained from it by changing exactly one integer.
0 1 0 1 1 0 1
0 1 0 0 1 0 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1
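Since at most one integer may differ, the problem can be solved by brute force: try leaving the seven integers as they are, then flipping each one in turn, and keep the variant whose petals are consistent with its disks. The petal membership used below (petal 1 on disks 1, 2 and 4; petal 2 on disks 1, 3 and 4; petal 3 on disks 2, 3 and 4) is an assumption inferred from the worked example and the sample I/O, since the figure is not reproduced here.

```python
# Brute-force sketch. Petal membership is assumed from the example in the
# statement: it reproduces both the 0,1,0,1 -> 0,1,0 example and the samples.

def petals(d):
    # Parity of the three disks assumed to contain each petal.
    return [(d[0] + d[1] + d[3]) % 2,
            (d[0] + d[2] + d[3]) % 2,
            (d[1] + d[2] + d[3]) % 2]

def repair(v):
    # Try changing no integer, then each of the seven, until the code is valid.
    for i in range(-1, 7):
        w = list(v)
        if i >= 0:
            w[i] ^= 1
        if petals(w[:4]) == w[4:]:
            return w

print(*repair([0, 1, 0, 1, 1, 0, 1]))  # 0 1 0 0 1 0 1
```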
Problem Author: Sofia Tekhazheva, prepared by Olga Soboleva
Problem Source: Ural Regional School Programming Contest 2010 |
Ans: The plastic comb gets electrically charged due to rubbing & therefore it attracts tiny pieces of paper (which are neutral), as a charged body can attract an uncharged body.
Q2. Which of the following cannot be charged by friction, if held by hand?
a) a plastic scale b) a copper rod c) an inflated balloon d) a woolen cloth. and Why?
Ans: Copper rod.
The other three are insulators, whereas copper is a conductor. As soon as the copper rod gets charged by rubbing with another material, the electric charge produced on its surface flows through our hand & body into the earth, and it remains uncharged.
Q3. What kind of electric charge is acquired?
a) by a glass rod rubbed with silk cloth? b) by a plastic comb rubbed with dry hair?
Ans: a) positive charge. b) Negative charge.
Q4. A negatively charged object attracts another charged object kept close to it. What is the nature of charge on the other object?
Ans: Positive Or Neutral (uncharged).
Q5. A negatively charged object repels another charged object kept close to it. What is the nature of charge on the other object?
Ans: Negative charge.
Q6. Mention three ways by which a body can be charged.
Ans: Three ways are:
a) Charging by friction: Charging a body by rubbing it against another body is called charging by friction.
NOTE: i) When two bodies are charged by rubbing, they acquire equal & opposite charges.
ii) The body which loses electrons acquires positive charge whereas the body which gains electrons acquires negative charge.
b) Charging by conduction: Charging a neutral body by bringing it in contact with a charged body is called charging by conduction.
c) Charging by induction: Charging a neutral body by bringing it near a charged body is called charging by induction.
Q7. What is an electroscope? Explain its construction.
Ans: An electroscope is a device for detecting, measuring & finding the nature of a charge. An electroscope consists of a large jar. A metal rod is fitted into the mouth of the jar with the help of a cork. At the lower end of the metal rod, a pair of thin leaves of gold or aluminium is suspended.
Q8. What are the uses of an electroscope?
Ans: An electroscope can be used for following purposes:
a) To detect & measure the charge on a body.
b) To determine the nature of charge on a body.
Q9. How would you use an electroscope to find out whether an object is charged or not?
Ans: Touch the body to be tested with the metal disc of an electroscope. If the leaves of an electroscope open up (diverge), the body is charged. If the leaves remain unaffected, the body has no charge.
Note: The extent of divergence (opening apart) of the leaves is a measure of the charge on the body. A body carrying higher charge will cause greater opening up of the leaves.
Q10. How would you use an electroscope to determine the nature of charge of a charged body?
Ans: Charge the electroscope with a known charge, say with negative charge, by touching a negatively charged ebonite rod to the metal disc of the electroscope. The leaves of the electroscope open up (diverge).
Now touch the body to be tested with the metal disc of the charged electroscope.
- If the divergence of the leaves increases, the body has similar charge that is the given body is also negatively charged.
- If the divergence of the leaves decreases, the body has unlike charge that is the given body is positively charged.
Q11. Touch the disc of an electroscope with a plastic comb rubbed in dry hair. What do you observe & why?
Ans: After rubbing, the plastic comb acquires negative charge. Now when it is touched with the metal cap of an electroscope, both the metal cap & the leaves acquire negative charge due to conduction. Because of the negative charge on both the leaves, divergence of the leaves takes place.
Q12. What happens when we touch the metal cap of a charged electroscope with our finger? What is this process known as?
Ans: The leaves of an electroscope collapse as soon as we touch the metal cap with hand because the leaves of the charged electroscope lose charge to the earth through our body (in other words leaves are discharged). This process is known as EARTHING.
NOTE: The process of transferring of charge from a charged object to the earth is called Earthing.
Q13. What is the nature of charge a) on the metal cap b) on the leaves of an uncharged electroscope
when a negatively charged body is brought in contact with its metal cap?
Ans: a) Negative b) Negative
Q14. What is the nature of charge a) on the metal cap b) on the leaves of an uncharged electroscope
when a negatively charged body is brought near its metal cap (not in contact with the metal cap)?
Ans: a) Positive b) Negative
Q15. Touch the disc of an electroscope first with glass rod rubbed with silk & then with ebonite rod rubbed with fur. What do you observe & why?
Ans: After rubbing, glass rod acquires positive charge. Now when it is touched with the metal cap of an electroscope then both the metal cap & the leaves acquire positive charge due to conduction. Because of positive charge on both the leaves, divergence of leaves takes place. Electroscope is now positively charged.
After rubbing with fur, the ebonite rod acquires negative charge. When this negative rod is touched to the metal cap of the above positively charged electroscope, the leaves collapse, as the negative charge starts neutralizing the positive charge already present on the leaves.
Q16. Touch the disc of electroscope with an ebonite rod rubbed with fur. Now bring a glass rod rubbed with silk close to the disc of this electroscope. What do you observe?
Ans: After rubbing, the ebonite rod acquires negative charge. Now when it is touched with the metal cap of an electroscope, both the metal cap & the leaves acquire negative charge due to conduction. Because of the negative charge on both the leaves, divergence of the leaves takes place. After rubbing with silk, the glass rod acquires positive charge. When this positive rod is brought near the metal cap of the above negatively charged electroscope, positive charge gets induced in the leaves due to induction; as a result, the leaves collapse.
18. What is seismograph?
19. List two places in India which are most threatened by earthquake.
20. What are tectonic plates?
21. What is a lightning conductor?
22. What is earthing?
24. Explain the process of an electric discharge.
25. Draw the diagram of an instrument, which can be used to detect the charge on a body. How it can be charged through conduction?
26. Suppose you are outside your home and an earthquake strikes. What precaution would you take to protect yourself?
2. Do not use elevators, even if they are available in a building outside your house.
3. If you are in a car or a bus, do not come out. Drive slowly to a clear spot and stay inside the car till the tremors stop.
27. Suppose you are at your home and an earthquake strikes. What precaution would you take to protect yourself?
1. Take shelter under a table and stay there only, till the shaking stops.
2. Stay away from the objects which are tall and heavy, that may fall on you.
3. If you are in bed, do not get up; remain there and protect your head with a pillow.
28. What is earthing? Why earthing is provided in buildings?
Ans: The process of transferring of charge from a charged object to the earth is called earthing. Earthing is provided in buildings to protect them from electrical shocks due to any leakage of electrical current. For our safety, most of the electrical appliances and the mains of the house are connected to earth, so that we can be prevented from getting an electric shock.
29. A crackling sound is heard while taking off sweater during winters. Explain ?
Ans: As we know, electrical charges generated through friction are static, i.e. they do not move by themselves, and motion of charges constitutes an electric current. When we take off a sweater, charge moves between the sweater and our body, producing an electric current, which produces a crackling sound. In fact, we can see a spark if we take off the sweater in the dark.
After some time he noticed that small pieces of string were beginning to stand apart like the hair on the back of a scared dog. He then brought his hand close to the key and received the tingle of an electric shock from it. As the rain came down and the string became soaked, the electricity began to conduct freely through the key. |
There is a proverb in English that goes like this : “Birds of a feather flock together”. It means that people or things of the same kind tend to do things together.
I was inspired by this proverb when writing this post. We will be looking at verbs that all mean to LOOK. Let’s say that words of a feather should be studied together!
- I looked at her when she spoke to me. (i.e. direct eyes in a particular direction)
- I watched TV last night.(i.e. to look at something with attention for a period of time)
- I glanced over at my boyfriend who was chopping a carrot. (i.e. to look at someone very quickly)
- My child’s eyes gleamed when he saw his new bike. (i.e. positive emotion that can be seen in someone’s eyes)
- His supervisor glared at her when she accidentally told the secret.(i.e. to look directly at someone angrily)
- I was staring out the window when they came in.(i.e. to look at someone or something for a long time with wide eyes)
- The young guy gazed at his teacher. (i.e. to look at someone for a long time with admiration, love or interest)
- I took a peek to see what they were doing in the room.(i.e. to look at something secretly from a hidden place.)
Listen to the pronunciation of the words in bold. Try to repeat. You can even use the recording as a dictation if you need extra listening practice.
Fluency Builder: Answer the following questions in class or with a friend. Speak a little English today.
- How much TV do you watch? What’s your favorite show?
- Have you glared at anyone recently? Why?
- What was the most special gift you received as a child?
- Do you daydream a lot?
- Have you ever had a crush on a friend’s older brother or sister?
- Do you take a peek at the Christmas gift tags under the tree a few days before Christmas?
p.s. Thank you MM for inspiring me this morning during our lesson. |
Pit vipers (family Crotalidae) are common snakes in the US. Their bites are responsible for 99% of the 300,000 estimated venomous snake bites sustained by domestic animals every year. Pit vipers include rattlesnakes, copperheads and cottonmouths. Rattlesnakes are reportedly the most common biters among these.
In the US, dogs tend to encounter rattlesnakes most commonly between April and October, when the warmer weather makes their exposure a more likely occurrence. Dogs will be bitten either after accidentally encountering a snake in the brush or after spying them and attacking (because they sense a threat or as a result of a strong predatory drive).
Symptoms and Identification
The venom these snakes inject into their victims is a strong neurotoxin (nerve toxin) and hemotoxin (blood cell toxin). Different kinds of rattlesnakes carry different types and strengths of venom and some may inject no venom at all.
Acute swelling, one or two puncture wounds, bleeding and pain at the site (limping or flinching when the area is touched) are the most common signs that a dog has been bitten. The face and extremities are the most typical sites.
The hemotoxin in the venom will destroy blood cells and skin tissue and will result in severe localized tissue swelling and possibly even internal bleeding. Severe reactions to the hemotoxin include severe swelling, tissue necrosis (purpling and blackening of the surrounding tissue), and a drop in blood pressure.
Dogs who receive more of the neurotoxin in the venom tend to experience more life-threatening reactions, including rapid paralysis that may affect the respiratory muscles.
The severity of the reactions tend to be dependent on the dose a dog receives. In fact, in about 20% to 30% of cases, dogs will receive “dry” bites and the resulting signs are relegated to the possibility of a skin infection at the bite site. However, a reported 5% of dogs die as a result of rattlesnake bites.
History of contact with a rattlesnake and clear signs of a bite –– either because of the telltale wounds or characteristic tissue damage –– are how most dogs are diagnosed with the possibility of rattlesnake envenomation.
Any breed of dog is susceptible to the effects of rattlesnake venom. Some dogs, however, are more likely to have a high drive to attack these animals. Dogs with high prey drives and rural or hunting lifestyles are more likely to find themselves in harm’s way when it comes to rattlesnake envenomation.
Treatment of rattlesnake envenomation depends upon the amount and type of venom the animal has been exposed to. In general, however, the faster an animal is seen by a veterinarian, the greater the chance of survival and the fewer complications they’ll experience.
Treatment tends to focus on the following steps:
Step #1: Prevent or delay absorption of venom. Cleaning and flushing of the wound is imperative. However, infiltrating the bite area with saline may contribute to its dispersal so cleansing is best achieved only superficially. Sucking the venom out is no longer recommended.
Step #2: Veterinarians will neutralize any absorbed venom by injecting antivenin. Poison control centers will provide antivenin availability information in the event rattlesnake envenomation is uncommon in your area.
Step #3: Supporting respiration and counteracting the effects of the toxin and maintaining cardiovascular function is crucial. Intravenous fluids are critical here. Corticosteroids may help lessen tissue destruction but their implementation is considered controversial. Broad-spectrum antibiotics, however, are always recommended. Pain relievers, too, play an important role in an animal’s overall comfort and help raise the probability of survival in severe cases.
More intensive care options may be necessary. Blood transfusions and even ventilator care may be indicated in some cases.
The cost of treatment for rattlesnake envenomation depends greatly on the amount and type of venom the animal has been exposed to as well as on the length of time it takes to receive veterinary help (delaying veterinary assistance increases the number of expensive complications).
Care for rattlesnake envenomation can be very inexpensive for animals who have received “dry” bites. These dogs can be helped for the cost of bite treatment and antibiotic therapy (often under $100 or $200).
If severe, life-threatening complications ensue, however, dogs may require intensive care in a specialty setting. In the event this is necessary, or should tissue damage be so extensive that follow-up surgeries are required, expenses may run into the many thousands for treatment of one single rattlesnake bite.
Preventing exposure to rattlesnakes is the best approach in all cases. Here are some tips experts recommend owners follow:
- Avoid hiking with dogs during peak times of the year (April through October).
- Stay away from areas with tall grass, rocks or wood piles.
- Stay on trails and keep dogs leashed at all times.
- Keep pets away from rattlesnakes if they’re encountered, as they can strike up to a distance of one-half their length.
- Using a walking stick to rustle bushes along the trail helps alert snakes to your presence and keeps them away.
- Around your house, remove all food sources (such as rodents) and minimize hiding places (such as wood piles).
Given the rural, active lifestyles of some dogs, however, this may not be realistic. In those cases, a vaccine may be a reasonable alternative for dogs who encounter these snakes on a regular basis, especially if they tend to be far away from veterinary care.
Unfortunately, there’s only scant evidence to support the efficacy of this vaccine. Nonetheless, it’s considered relatively safe and affordable and may be helpful for some dogs. Vaccinated or not, dogs who are bitten by snakes should be seen by a veterinarian as soon as possible!
|
Because fuels and vehicles work together as a system, the greatest benefits can be achieved by combining lower sulphur fuels with appropriate vehicle and emission control technologies. This approach has proven to be more effective than treating fuels, engines, or emission controls separately.
Car manufacturers are continuing to improve the design of engines to improve fuel efficiency and reduce emissions. For example, diesel engines with high pressure injection systems are more efficient and less polluting. However, these recent diesel engine technologies do not function well with high levels of sulphur fuel. For the latest information on auto fuel quality, including fuel sulphur levels, visit UNEP’s transport information hub.
Reducing emissions from motor vehicles is an important component of an overall strategy for reducing air pollution, especially in cities in developing and transitional countries where population and vehicle ownership are growing rapidly.
One essential component of reducing vehicle emissions is to eliminate lead from petrol; in addition to being a toxic pollutant in its own right, the presence of lead in petrol also inhibits the functioning of catalytic converters and other emission control technologies. Low sulphur fuel (both diesel and petrol, 500 ppm or less) is essential for lower emissions of PM and SOx, in addition to being a requirement for emission filters and advanced emission controls.
Most modern petrol fuelled vehicles, including Hybrid Electric Vehicle (HEV), require unleaded petrol because of the irreversible damage lead causes to emission control technologies such as catalytic converters. One of the goals of the UNEP-based Partnership for Clean Fuels and Vehicles (PCFV, www.unep.org/transport/pcfv) is to phase out leaded petrol globally.
Both conventional vehicles and advanced technology vehicles (e.g. hybrid electric vehicles) with catalytic converters can be used with high sulphur petrol fuel as long as the fuel is unleaded. However, emission reduction technologies have a better efficiency with low and ultra-low sulphur fuels. The only technical requirement is unleaded fuel in order to ensure proper function of the catalytic converter.
This is very promising for the introduction of HEVs to developing countries, as unleaded petrol is now available in most countries. Since fuel requirements set by car importers and car manufacturers can differ from region to region, one should check the requirements they set to ensure the vehicle warranty is maintained. If modern emission control technologies are used, e.g. NOx traps or a Diesel Oxidation Catalyst, low sulphur fuels (500 ppm or less) will be required.
Using diesel with lower levels of sulphur will substantially reduce emissions of sulfate, sulphur dioxide, and particulate matter (PM) and will enable the introduction of advanced emission control technologies. Sulphur occurs naturally in crude oil. The level of sulphur in diesel depends upon the source of the crude oil used and the extent to which the sulphur is removed during the refining process. While Western European, North American and a few Asian markets use ultra low sulphur fuels (50 ppm or less), sulphur levels as high as 5,000 to 10,000 ppm in diesel are still in use in developing and transitional countries. Diesel fuel with more than 500 ppm of sulphur inhibits the use of any emission control technology available today, poisoning catalysts and particulate filters.
Desulphurized diesel can be classified in the following categories, along with the emission control technologies enabled at each level:
Sulphur greatly reduces the efficiency of more advanced catalysts by blocking active catalyst sites; this effect is not completely reversible. Although conversion efficiency will improve with the use of low sulphur fuel (500 ppm or less), it does not always return to its original effectiveness after desulphurization.
Optimal clean diesel vehicle function depends on the availability of near sulphur free diesel (<15 ppm) in order to attain specified emission levels and emission technology durability. As vehicle manufacturers might decline warranty claims when higher sulphur fuels are used, ensuring adequate fuel quality for correct vehicle function is important.
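As a rough summary of the sulphur thresholds quoted in this article (15, 50 and 500 ppm), the following illustrative classifier may help; the category names and cut-offs are taken from the text above, not from an official fuel standard, which should be consulted for authoritative values.

```python
# Illustrative only: thresholds and category names follow this article's text,
# not an official fuel specification.
def diesel_category(sulphur_ppm):
    if sulphur_ppm < 15:
        return "near sulphur free (enables full clean-diesel emission controls)"
    if sulphur_ppm <= 50:
        return "ultra low sulphur (advanced catalysts and particulate filters)"
    if sulphur_ppm <= 500:
        return "low sulphur (NOx traps, diesel oxidation catalysts)"
    return "high sulphur (inhibits current emission control technologies)"

print(diesel_category(10))    # near sulphur free ...
print(diesel_category(5000))  # high sulphur ...
```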
The infrastructure needed for cleaner and alternative fuels can be a major concern when it comes to new vehicle technologies. However, for diesels and hybrid electric vehicles, additional physical infrastructure is not needed as conventional fueling and distribution systems are used. In cases where a number of fuel grades are available on one market, adequate segregation in distribution and transport is essential for quality maintenance. |
In my classroom I used the strategy described in this blog entry to teach students scientific method and how to plan an experiment.
In any science lesson one of the important over-arching outcomes is to teach students to behave like a scientist – to use the scientific method to ask questions and come to valid conclusions. The scientific method is a way for students to improve their Management of Impulsivity.
In many classrooms students engage in scientific method – through their practical exercises – without actually learning this method. The theory seems to be if students go through the process often enough they will learn scientific method. Unfortunately my experience has been that most students blindly follow the “recipe” of the experiments, without really improving their understanding of scientific method.
Here’s what I did to get around this problem.
Early in the term I teach students the basic structure of an experiment, typically: Aim, Hypothesis, Equipment, Method, Results, Discussion and Conclusion. From then on I would give out practical exercises in 6 parts, requiring students to “put the prac together” before beginning. They quickly learnt that you couldn’t have a conclusion before you’d recorded your results and discussed them. Nor could you engage in the method without working out the equipment.
As the term progressed, and the basic structure of the experiment was mastered, I began cutting up the method section. Asking students to plan the experiment to a small degree.
In the beginning this was as simple as cutting the method into two sections and asking students to work out where they should begin and where they should end.
As time went on I slowly gave students more control over the planning. I began to cut the steps in the method into more parts. I deleted the “Equipment” section completely requiring students to work out what equipment they would need based on the method outlined.
Eventually I began to delete instructions. Instructions about how to record data, or the inclusion of a constant in the experiment, were omitted entirely. I was always seeking to keep students in their Goldilocks Zone, pushing them one small step further after each part of the planning had been mastered.
Over the course of one semester students slowly became very competent in scientific method – certainly better than any previous group. The process added very little time to each class and saved a full 6 weeks of a unit where I had previously taught scientific design as a topic.
Overall, a very simple, practical and easy teaching strategy that resulted in a great improvement in the students’ ability to Manage their Impulsivity and a much better grasp of scientific method. |
Class Session 16
I. Chemicals and Hazardous Waste
The U.S.E.P.A. defines a hazardous waste as any solid, liquid, or containerized gas that has one or more of the following properties:
1. ignitability - a waste that easily catches fire, such as waste oils, organic solvents and PCBs. These include liquids with a flash point (the temperature at which vapor easily ignites in air) of less than 140 degrees Fahrenheit, materials that burn vigorously and persistently when ignited so that they create a hazard, and ignitable compressed gases.
2. corrosivity - a highly acidic or highly alkaline waste, or one that corrodes steel easily. This includes both aqueous waste with a pH of less than or equal to 2.0 or greater than or equal to 12.5 and liquid wastes that corrode steel at a rate equal to or greater than 0.25 inches per year at a test temperature of 130 degrees F.
3. reactivity - a highly unstable waste that can cause explosions or toxic fumes or vapors. This includes materials that react violently with water, those that form potentially explosive mixtures when combined with water, will generate toxic gases, fumes, or vapors in quantities sufficient to endanger human health or the environment and materials that are capable of detonation or explosive reactions if subjected to a strong initiating source or if heated under confinement.
4. toxicity - a waste in which hazardous concentrations of toxic materials can leach out and pose a danger to human health or the environment. Hazardous waste can cause a wide range of harmful effects on human health as well as long term or permanent damage to the environment.
It is very difficult to estimate how much hazardous waste is produced worldwide, or even in one country. There is no reliable estimate of global production. Estimates range from 375 million metric tons to 500 million tons for the 19 most industrialized countries. One thing is clear, however, the
Ninety percent of the hazardous
Most hazardous wastes are synthetic organic and inorganic chemicals which are used for all kinds of purposes in industrialized countries such as cleaners, solvents, degreasers, insecticides, coatings and paints. The number of synthetic chemicals produced around the world now stands at about 80,000, with between 500 and 1,000 being added every year. Very little data exists on the toxic effects of about 80% of these chemicals and complete data exist for only 2%.
Hazardous wastes are disposed of through various land-based technologies that include containerized burial, open pit, pond or lagoon, pile, deep well injection, and others. Unfortunately, many of these disposal techniques are, in actuality, storage techniques rather than disposal. In the
Only part of the problem is current disposal of hazardous waste. There are many inactive hazardous waste sites across the
Several problems can result with hazardous waste disposal. These include:
1. local citizen resistance to the siting of hazardous waste disposal facilities, either incinerators or landfills;
2. concern about accidents that occur during the shipment of hazardous materials. In the
3. illegal dumping, which often occurs. There is lots of incentive to dump hazardous waste illegally. Waste disposal costs in the
4. shipments of hazardous materials to other countries, many of which are non-urbanized, non-industrialized countries.
The Resource Conservation and Recovery Act, first passed in 1976 and amended in 1984, requires EPA to identify hazardous wastes, set standards for their management, and provide guidelines and financial aid to establish state hazardous waste management programs. All firms that store, treat, or dispose of more than 100 kg (220 pounds) of hazardous wastes per month must have a permit stating how such wastes are to be managed. To reduce illegal dumping, hazardous waste producers granted disposal permits must use a "cradle to grave" manifest system to keep track of waste transferred from point of origin to approved offsite disposal facilities. Keeping track of all this waste, generators and haulers is an enormous task which costs billions of dollars annually.
Inactive, abandoned or old waste sites are handled under EPA's Superfund program. The Superfund program manages a large pot of money used to clean up these old sites. In 1989, the EPA estimated that there were over 31,500 sites in the United States containing potentially hazardous waste, with this number increasing at a rate of 2,500 per year. By July 1989, EPA had placed 1,224 sites on a priority cleanup list because of their threat to nearby populations, and the priority list is growing at a rate of about 180 per year. By mid 1989, EPA had spent $4.5 billion to start cleanups at 257 priority sites, but only 50 sites had been cleaned and 27 declared clean enough to be removed from the list. EPA estimates that the agency can complete only about 25 to 30 cleanups a year. At that rate, it would take 41 to 50 years to clean up the 1,224 priority sites listed in 1989. The Office of Technology Assessment estimates that the final list may contain 10,000 sites, with cleanup costs amounting to as much as $300 billion over the next 50 years.
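The 41-to-50-year estimate above follows directly from the site count and the cleanup rate, which a quick calculation confirms:

```python
# Re-deriving the cleanup timeline from the figures quoted above:
# 1,224 priority sites, cleaned up at a rate of 25 to 30 sites per year.
sites = 1224
years_at_best_rate = sites / 30    # fastest assumed pace
years_at_worst_rate = sites / 25   # slowest assumed pace
print(round(years_at_best_rate), round(years_at_worst_rate))  # 41 49
```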
Once the hazardous wastes are deposited in or on the land, engineered control and treatment technologies are the only available option to control the pollution potential. Hazardous wastes can be stabilized, neutralized, or in some other fashion rendered less hazardous. In some cases, the waste are pumped out of the ground and then treated, usually by high temperature incineration.
There are, however, a couple of other options. One is to not use or generate a hazardous chemical to begin with, thereby eliminating the need for disposal. This is referred to as pollution prevention. The concept of a proactive pollution prevention program provides the greatest potential to reduce the impact of society on our limited land and natural resources. Such programs can result in substantial waste stream reduction, health and environmental protection, and cost savings by cutting raw material losses, lowering pollution control costs, and reducing future liability.
Another option is for the generator of hazardous waste or chemicals to transfer the waste or chemical to a facility that uses the material as an input to its processes. This type of approach, which is usually accomplished through a hazardous waste clearinghouse, is part of a new movement referred to as industrial ecology. Industrial ecology seeks to have industrial systems mimic natural biological systems, in which materials flow between different organisms and no waste is generated or requires disposal.
Prior to the late 1980s, there was a booming international trade in hazardous waste. Poor countries were accepting hazardous materials and waste from industrialized countries in exchange for hard currency. Rather than deal with existing environmental regulation in their own country, many companies were only too happy to export their hazardous waste. This practice has been curtailed thanks to an international treaty, signed in 1989, to control the export of hazardous waste. The treaty specified that the government of the recipient country must give permission for the waste to be imported. Further, in 1994, the countries that make up the OECD agreed to stop dumping their wastes in poor countries. The ban on exports for burial and incineration of hazardous waste was made effective immediately, and hazardous waste exports for recycling became illegal as of December 31, 1997.
II. Chemicals – Case Study – Chlorofluorocarbons
From the week seven notes, we learned that the homosphere, or lower atmosphere, is divided into three layers. The troposphere, closest to the earth, is where daily weather phenomena occur and where air pollution and acid rain are distributed. The second layer out is the stratosphere, which sits some 11 to 30 miles from the earth's surface. Since 90% of the gas molecules in the atmosphere are within the first ten miles, the air in the stratosphere is very thin.
Despite the thinness of the stratosphere, however, one gas located there performs a function critical to life on earth. That gas is ozone, or O3. Ozone filters ultraviolet radiation from the sun. Various forms of energy have different wavelengths; the wavelength of light energy, for example, is shorter than that of heat energy. Ultraviolet, or UV, energy has a wavelength shorter than that of light energy. UV energy itself is divided into three types based on wavelength, with UVC being the shortest, UVA the longest, and UVB in the middle.
Humans need a small amount of ultraviolet radiation to maintain health. Ultraviolet radiation activates vitamin D in the human body, which assists the intestines in absorbing minerals. Humans, as well as other life forms, can tolerate radiation through the UVA range, but radiation with shorter wavelengths, such as UVB and UVC, is harmful. Oxygen molecules absorb the shortest and most harmful UVC radiation, and ozone absorbs most of the remainder before it reaches the earth's surface. Ozone, a molecule containing three oxygen atoms, is made when the shortest wavelengths of UVC are absorbed by oxygen molecules, which break apart into two oxygen atoms. These atoms then combine with O2 molecules to form stratospheric ozone, and it is these O3 molecules that shield the surface from too much ultraviolet radiation.
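The formation process just described can be summarized as a simplified reaction scheme (a sketch; the actual stratospheric chemistry also involves a third molecule that carries off excess energy):

```latex
\begin{align*}
\mathrm{O_2} + h\nu\,(\mathrm{UVC}) &\rightarrow \mathrm{O} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_2} &\rightarrow \mathrm{O_3}
\end{align*}
```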
Stratospheric ozone depletion occurs when O3 molecules interact with chlorine-based compounds such as chlorofluorocarbons, also known as CFCs, and halons. Chlorofluorocarbons are synthetic compounds containing chlorine, fluorine and carbon. CFCs have been used in a wide variety of consumer and commercial applications such as refrigeration, air conditioning, foam production, aerosol propellants, and circuit board cleaning. Halons are another class of synthetic chemicals which are used to extinguish fires.
Both CFCs and halons are extremely long-lived, stable chemicals that can remain chemically active in the atmosphere for decades. Not only do CFCs and halons destroy the molecular bonds of the O3 molecule, but a single chlorine atom can eliminate as many as 100,000 ozone molecules. Halons contain bromine and are even more potent ozone destroyers than CFCs.
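The reason a single chlorine atom can destroy so many ozone molecules is that it acts as a catalyst: it is regenerated at the end of each cycle. A simplified sketch of the cycle (the chlorine atom is first freed when UV radiation breaks apart a CFC molecule in the stratosphere):

```latex
\begin{align*}
\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2} \\
\mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl} + \mathrm{O_2} \\
\text{net:}\quad \mathrm{O_3} + \mathrm{O} &\rightarrow 2\,\mathrm{O_2}
\end{align*}
```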
The result of ozone destruction is a gradual thinning of the stratospheric ozone layer. Over the past 20 years, ozone levels above the Antarctic have dropped by almost 50%, resulting in an “ozone hole”. Every year, beginning in September, ozone levels in the stratosphere above the Antarctic begin to decline. As they decline, more and more ultraviolet radiation reaches the earth’s surface. Scientists believe that a 1% drop in ozone accounts for a 2% increase in ultraviolet radiation at the earth's surface.
Over time the Antarctic ozone hole has gotten larger. In September 2003, the World Meteorological Organization reported that the 2003 hole equaled the all-time record set in September 2000. Over the past decade, stratospheric ozone levels have begun to decrease in the Arctic as well, though scientists believe that a “hole” like that at the South Pole is not likely to develop. Nonetheless, there have been short periods with significant ozone loss in the Arctic.
Increasing ultraviolet radiation at the surface affects human health, natural ecosystems, and crops. Human health effects include increases in skin cancer cases, development of cataracts, and suppression of the immune system. Effects on natural ecosystems include decreased photosynthetic productivity and changes in adaptive strategies. Phytoplankton in the oceans, for example, are thought to stay farther from the ocean surface in response to changing ultraviolet light concentrations. The productivity of certain crops can also be adversely affected by changes in UV concentrations at the surface.
The Montreal Protocol, adopted in 1987, required nations to freeze production levels of CFCs. Additional agreements enacted since 1987 accelerated the CFC phase-out timetable to December 31, 1995. Atmospheric concentrations of chlorofluorocarbons peaked in 1994 and began to decrease in 1995, marking the first time that atmospheric concentrations of chlorine began to decrease. Chlorine concentrations in July 2002 were about 5% less than the 1994 peak. However, the amount of atmospheric bromine continues to increase, albeit at a slower rate.
Many scientists believe that the stratospheric ozone layer will be somewhat “mended” by the year 2050, though uncertainty remains. In the meantime, it is difficult to predict, with any reasonable accuracy, the amount of ozone depletion that might continue to take place, how much additional UVB will reach the earth's surface in the next fifty years, and the potential impacts of this increased radiation on terrestrial and aquatic ecosystems as well as on human health.
Period poverty is the lack of access to sanitary products, menstrual hygiene education, toilets, handwashing facilities, and/or waste management. The term also refers to the increased economic vulnerability that women and girls face due to the financial burden posed by menstrual supplies. In least-developed and low-income countries, access to hygienic products such as pads, tampons, or cups is limited, so girls often resort to proxy materials such as mud, leaves, or animal skins to try to absorb the menstrual flow. As a result, these women are at a higher risk of developing certain urogenital infections, like yeast infections, vaginosis, or urinary tract infections. This is a widespread problem: although most of these women and girls are of reproductive age, many are unable to practice proper hygiene. Consequently, women and girls around the world, especially in developing countries, face numerous challenges in managing their menstruation. Furthermore, many women are forced to approach this normal bodily function with silence due to stigma, as some communities consider menstruation taboo.
What causes period poverty?
One cause is that pads and other supplies may be unavailable or unaffordable. This means that women are often forced to choose between purchasing sanitary pads and other basic needs, or they may live in areas where there is no access to hygiene products at all. More importantly, young girls may lack access to toilet facilities with clean water to clean themselves while on their periods. In addition, discriminatory cultural norms make it challenging to maintain good menstrual hygiene, as women often have to hide, or the community may not put enough effort into establishing hygiene facilities or practices around them. Also, some women and girls lack the necessary education and information about menstruation and good hygiene practices, because topics around menstruation and proper hygiene are rarely discussed in families or schools.
What is more, many girls experience menstruation with little or no knowledge of what is happening. This makes it harder for them to adopt sanitary practices, because most remain unaware of recommended hygiene practices. In many communities, menstruating girls and women are still banned from kitchens, crop fields, or places of worship. There is also the issue of forced secrecy in communities where girls are exposed to ‘menstrual etiquette.’ This etiquette encourages the careful management of blood flow and discomfort and the importance of keeping menstruation hidden from boys and men.
A Human Rights Issue
It is important to consider gender inequality, extreme poverty, and harmful traditions as the sources of menstrual hygiene deprivation and stigma. This often leads to exclusion from public life, heightened vulnerability, and barriers to opportunities such as employment, sanitation, and health.
Some of the human rights that are undermined by period poverty include:
- The right to human dignity – When women and girls cannot access safe bathing facilities and safe and effective means of managing their menstrual hygiene, they are not able to manage their menstruation with dignity. Menstruation-related teasing, exclusion, and shame also undermine the right to human dignity.
- The right to an adequate standard of health and well-being – Women and girls may experience negative health consequences when they lack the supplies and facilities to manage their menstrual health. Menstruation stigma can also prevent women and girls from seeking treatment for menstruation-related disorders or pain, adversely affecting their health and well-being.
- The right to education – Lack of a safe place or ability to manage menstrual hygiene, as well as lack of medication to treat menstruation-related pain, can all contribute to higher rates of school absenteeism and poor educational outcomes. Some studies have confirmed that when girls are unable to manage menstruation in school properly, their academic performance suffers.
- The right to work – Poor access to safe means of managing menstrual hygiene and lack of medication to treat menstruation-related disorders or pain also limit job opportunities for women and girls. They may refrain from taking specific jobs, or they may be forced to forgo working hours and wages. Menstruation-related needs, such as bathroom breaks, may be penalized, leading to unequal working conditions. And women and girls may face workplace discrimination related to menstruation taboos.
- The right to non-discrimination and gender equality – Stigmas and norms related to menstruation can reinforce discriminatory practices. Menstruation-related barriers to school, work, health services, and public activities also perpetuate gender inequalities.
What is being done?
In spite of the issues presented, it is essential to acknowledge that a lot is being done around the world to help eradicate period poverty.
For example, UNFPA (United Nations Population Fund) has various approaches to promoting and improving menstrual health around the world. Some of them include:
- UNFPA reaches women and girls directly with menstrual supplies and safe sanitation facilities. In humanitarian emergencies, UNFPA distributes dignity kits, which contain disposable and reusable menstrual pads, underwear, soap, and related items. (In 2017, 484,000 dignity kits were distributed in 18 countries.)
- The UN organization also promotes menstrual health information and skills building. For example, some UNFPA programs teach girls to make reusable sanitary napkins. Others raise awareness about menstrual cups.
- Furthermore, the organization aims to improve education and information about menstruation as human rights concerns. This is done through its youth programs and comprehensive sexuality education efforts, such as the Y-Peer program.
- UNFPA also procures reproductive health commodities that can be useful for treating menstruation-related disorders. For instance, hormonal contraceptive methods can be used to treat symptoms of endometriosis and reduce excessive menstrual bleeding.
- Similarly, UNFPA is helping to gather data and evidence about menstrual health and its connection to global development. For instance, UNFPA supported surveys provide critical insight into girls’ and women’s knowledge about their menstrual cycles, health, and access to sanitation facilities. A recent UNFPA publication offers a critical overview of the menstrual health needs of women and girls in the Eastern and Southern Africa region.
While a lot of support exists to help end period poverty, there is still much that can be done to improve access to sanitary products, menstrual hygiene education, toilets, handwashing facilities, and/or waste management. Human Rights Watch and WASH United recommend that groups that provide services to women evaluate their programs to determine whether a woman or girl has:
- Adequate, acceptable, and affordable menstrual management materials;
- Access to appropriate facilities, sanitation, infrastructure, and supplies to enable women and girls to change and dispose of menstrual materials; and
- Knowledge of the process of menstruation and options available for menstrual hygiene management.
Practitioners engaged in programming or advocacy related to menstrual management should also:
- Have an awareness of stigma and harmful practices related to menstruation in the specific cultural context where they are working;
- Support efforts to change harmful cultural norms and practices that stigmatize menstruation and menstruating women and girls;
- Address discrimination that affects the ability to deal with menstruation, including for women and girls with disabilities; and
- Be aware of and incorporate human rights principles in their programming and advocacy, including the right to participate in decision-making and to get information.
Moreover, women and girls must have access to water and sanitation. This will allow the establishment of private areas to change sanitary cloths or pads, clean water for washing their hands and used fabrics, and facilities for safely disposing of used materials or drying them if reusable. It is also imperative that both men and women have a greater awareness of menstrual hygiene. This means that training and learning courses should be made available for women and young girls to teach them the importance of menstrual hygiene and proper practices. Likewise, educating boys on the challenges and struggles girls face could help reduce stigma and help them become more understanding and supportive husbands and fathers. Less work has been done in this area, but the benefits of educating boys about adolescence, for both themselves and female students, are increasingly being recognized.
It is essential to acknowledge that there is still limited evidence on women's use of sanitation and menstrual management facilities. Therefore, special attention needs to be paid to the needs of women and girls all over the world.
Dysphagia is the medical term for trouble swallowing food or liquid. The condition is more common in the elderly; however, anybody at any age may develop it as a symptom of a wide range of neurological or physical conditions. Some common causes of dysphagia include dementia, throat cancer, or certain medical treatments such as radiation therapy. When a person with this condition is unable to swallow food and liquid properly, they are at a higher risk of complications and other problems, including:
Dehydration can be an issue for people with dysphagia who find it difficult to swallow water and other liquids. When difficulty swallowing makes it hard to get the fluid we need, symptoms like persistent headaches; tiredness and fatigue; dry skin, hair, and nails; and brain fog may occur. In more severe cases, dehydration can cause more serious issues like kidney and urinary tract infections, which is why it is important for people with dysphagia to get the fluid that they need. Adding a product like SimplyThick thickener gel to liquids is an ideal option to prevent dehydration.
When somebody is struggling to swallow their food, they might be unable to get all the nutrients that they need from their diet, which can lead to nutritional deficiencies and cause other health problems. Because of this, it is important for people with dysphagia to have a diet plan that is as balanced and nutritious as possible while taking swallowing difficulties into account. For example, this could include adding nutrient powders to smoothies and soups, blending and pureeing fruits and vegetables, and drinking thick protein shakes that are easier to swallow.
Aspiration is a common symptom of dysphagia. This happens when food or liquid ‘goes down the wrong way’ and is inhaled rather than swallowed. When somebody with dysphagia is aspirating particles of food a lot due to swallowing difficulties, this can lead to an infection in the lungs known as aspiration pneumonia.
Tiredness and fatigue are not uncommon in people with dysphagia, since they may not be getting the energy that they need from their food. Tiredness and fatigue can also be common symptoms of other problems that are associated with dysphagia such as dehydration or a nutritional deficiency.
Depression and Low Mood
Somebody with dysphagia may suffer from feelings of depression and low mood for various reasons. Dysphagia can lead to isolation, as the person may not feel able to do the things they normally enjoy, such as socializing with friends over food and drinks or entertaining friends and family at home. Symptoms such as regurgitating food or excessive drooling can also be quite embarrassing for the patient, which may lead to them spending an increased amount of time at home. Ultimately this can contribute to stronger feelings of depression, low mood, and anxiety.
Dysphagia can occur for many reasons. Any persistent trouble with swallowing should be treated quickly to avoid further health complications.
What is eLearning?
Also known as online education, online learning, online courses, and e-learning
eLearning is the process of delivering educational content or curriculum on a given topic to people through an online website, course, or Learning Management System (LMS).
eLearning is well suited to students of all ages, particularly in a hybrid learning environment that includes both live and self-directed instruction and online and in-person engagement with educators, trainers, and fellow learning participants.
Online courses benefit students and employees by supporting improved learning outcomes and career growth and upskilling.
Online education involves developing course content, sharing it with learners, and potentially, evaluating their learning through assessments like tests or projects.
eLearning tools might include video conferencing, breakout rooms, discussion boards, slide decks, and interactive activities, all aimed at helping a range of learners establish a strong understanding of the topics being taught.
Research shows that students have a range of different learning styles and preferences, where differentiated learning and inclusive and accessible practices can significantly improve learning outcomes.
Bloom’s taxonomy is a hierarchical model for classifying educational objectives, building from remembering and understanding, through applying and analyzing, to evaluating and, finally, creating.
- Remembering: recall facts and basic concepts
- Understanding: explain ideas or concepts
- Applying: use information situationally
- Analyzing: draw connections among ideas
- Evaluating: make and defend judgments
- Creating: produce new or original work on the topic
Instructional design practices help accelerate higher-order thinking for course participants to move beyond remembering and understanding toward application, analysis, and more.
Communities of practice
Communities of practice are an effective way to enhance online learning and self-directed learning by allowing learners to explore content and practice new skills together. Such communities have been shown to increase student satisfaction and retention (Leong, 2011) in e-learning programs.
Reference: Leong, P. (2011). Role of Social Presence and Cognitive Absorption in Online Learning Environments. *Distance Education*, *32*(1), 5-28. doi:10.1080/01587919.2011.565495
Related: Self-regulated learning, LMS, Universal Design for Learning (UDL)
Differentiated learning is a philosophy of teaching that involves the idea that a teacher should avoid only teaching one way to the entire class. Instead, each student needs to be engaged in different ways based on their preferences, strengths, and weaknesses.
Online courses are particularly well suited to support differentiated learning by offering multiple modes of learning, self-directed learning, and more accessible and varied access to learning materials.
Related: Universal Design for Learning (UDL), Universal design, Equity
Individual Education Plan (IEP)
An individual education plan is a document that schools use to establish and define accommodations for students with disabilities.
Related: a11y, Equity, AODA, ADA
Instructional Design is the planning, development, and creation of educational materials and curricula.
Using principles of Bloom’s taxonomy, differentiated learning, universal design for learning (UDL), and other methods, an Instructional Designer, Curriculum Designer, and/or Educator—along with subject matter expert(s) (SMEs)—develop lesson plans, assessments, materials, and collaborative activities that, together, help improve learning outcomes for course participants.
Related: Bloom’s taxonomy, Self-regulated learning, Differentiated learning, Universal Design for Learning (UDL)
Learning Management System (LMS)
Also known as Learning Management Software
A Learning Management System, or LMS, is a platform or software used to deliver educational content to students. An LMS displays course content and can enable activities and exams, interactive discussions, course management, and tracking of learner progress.
There are many different LMS platforms, including:
- Google Classroom
Say Yeah’s inclusive and accessible online education solutions work both within and outside of leading learning management systems. Let’s connect to explore how we can support you in improving learning outcomes for your organization.
Related: Differentiated learning, Universal Design for Learning (UDL), Digital transformation
Self-regulated learning involves students building and using strategies such as metacognition, time management, and critical thinking in the classroom. These strategies are strongly correlated (Wong, 2018) with greater success in an online education environment.
Reference: Wong, J., Baars, M., Davis, D., Van Der Zee, T., Houben, G., & Paas, F. (2018). Supporting Self-Regulated Learning in Online Learning Environments and MOOCs: A Systematic Review. *International Journal of Human-Computer Interaction*, *35*(4-5), 356-373. doi:10.1080/10447318.2018.1543084
Related: LMS, Differentiated learning, Universal Design for Learning (UDL)
Universal Design for Learning (UDL)
Universal Design for Learning is a methodology for structuring lesson planning and classroom management around equity and accessibility so that all students can feel included and supported by classroom activities, assignments, and the overall process of learning. UDL’s components (Engagement, Representation, and Action & Expression) are based on the neuroscience of learning.
Related: Inclusive design, Universal design, Equity, AODA
Web standards form the foundation of each and every website. These standards are set by W3C, enacted by web browser makers, and used by web developers so that the content and code of a website can be viewable and interactive across devices, operating systems, and web browsers.
Web standards are the underlying foundation for ensuring the access, performance, and security of your website, app, and online courses. Without adherence to web standards, accessibility and performance suffer, which reduces engagement with your content.
They breed in the summer in Alaska, all across Canada, south to the northeastern U.S. and Rockies into central Mexico and Guatemala. They spend the winter in southern Canada.
They are found in coniferous forests, thickets and shrubby fields.
They have mottled brown streaks all over with yellow patches at the base of tail and on their wings. They have a thin, sharp beak and a notched tail.
They visit feeders in the winter.
They eat small seeds, tree buds, insects, and spiders.
They make a shallow nest of twigs, grass, leaves, bark strips, and lichens. They line it with fur, feathers, grass, and moss. The female lays 3 - 4 pale greenish blue eggs with brown speckles.
Species: C. pinus
When you research information you must cite the reference. Citing for websites is different from citing from books, magazines and periodicals. The style of citing shown here is from the MLA Style Citations (Modern Language Association).
When citing a WEBSITE the general format is as follows.
Author Last Name, First Name(s). "Title: Subtitle of Part of Web Page, if appropriate." Title: Subtitle: Section of Page if appropriate. Sponsoring/Publishing Agency, If Given. Additional significant descriptive information. Date of Electronic Publication or other Date, such as Last Updated. Day Month Year of access < URL >.
Amsel, Sheri. "Pine Siskin." Exploring Nature Educational Resource. ©2005-2023. March 28, 2023 < http://www.exploringnature.org/db/view/140 >.
Most of the manufactured objects we use undergo several procedures before becoming the final product. Such objects include tools, toys, ornaments, and footwear. The first step in manufacturing these objects is design, followed by creating a model. If the model meets the desired features, a final object is manufactured.
TPU is an acronym for thermoplastic polyurethane. A TPU filament is a synthetic polymer that exhibits rubber plastic properties. It is elastic like rubber and hard like plastic. These properties make it ideal for printing models of objects like toys and shoes in 3D printing.
There are more facts to learn about TPU. This post highlights various properties of TPU, its uses, and how it compares with polylactic acid (PLA).
What is TPU Filament Used for?
Based on its hybrid properties, TPU has a wide array of applications. Like plastic, it is not reactive to common chemicals, air, and water. Its hardness makes it resistant to abrasion caused by scrubbing and can withstand high temperatures. And like rubber, it is elastic and, therefore, can be molded to different designs.
TPU is mainly used for the creation of 3D prints. The common prints include footwear, devices used by medical practitioners, casting wheels, and other tools. Its resistance to water and heat also makes it suitable for making cases of electronic devices. Mobile phone cases, tablet covers, electronic calculators, and laptop casings are made of TPU.
What is the Difference Between TPU and PLA?
Both TPU and PLA are polymers used in 3D printing and the manufacture of different objects. However, there are differences in their composition, structure, and properties. Some of the differences are listed in the table below.
Differences between TPU and PLA
| TPU | PLA |
| --- | --- |
| Made from plastic and rubber (a synthetic polymer) | Made from starch extracted from corn, sugarcane, and potatoes |
| Resistant to common substances like water, oil, and acids; non-reactive to air | Hygroscopic; absorbs water and decomposes quickly |
| Causes environmental pollution, since it is non-biodegradable | Does not cause pollution, because it is biodegradable |
| Long-lasting; does not degenerate in an adverse environment | Degenerates quickly when exposed to adverse environmental conditions |
| Can withstand high temperatures | Cannot withstand high temperatures |
| Ideal for 3D printing at high temperatures; heat resistant up to 90°C | Ideal for making models from 3D bioprinters |
| Can be molded to different shapes and regain its original form after manipulation | Once molded to a shape, cannot regain its original form |
| Requires skilled personnel to work with | Easier to work with; ideal for beginners |
Can All 3D Printers Use TPU?
TPU works best in printers that use a heating bed because it can withstand high temperatures. Some printers, however, use an extruder mechanism without a heating bed. At such low temperatures, TPU is not ideal because it cannot easily be shaped to the desired design.
A 3D printer can use TPU if:
- It has a heating bed.
- It prints objects that require elongation and modification.
- The prints are resistant to scratching.
- The models are to be converted to different shapes.
TPU is not ideal for printers that use far-end extrusion because of the low temperatures at which they operate.
What Can You Make With TPU?
TPU filament is ideal for making a lot of prints and objects. Here are some of the objects.
- Automotive instruments such as caster wheels because it is resistant to abrasion
- Sporting accessories like balls because it is elastic, water-resistant, and long-lasting.
- Medical instruments because it is durable, unreactive to most chemicals, and heat resistant.
- Film sheets because it is water and heat resistant
- Cases for electronic devices such as mobile phones because it is transparent and water-resistant.
- Footwear such as sports shoes and knee guards because it is elastic and abrasion-resistant.
- Packaging containers because it is transparent and heat resistant.
Is PETG the Same as TPU?
Polyethylene terephthalate glycol (PETG) is a synthetic polymer. It is a member of the polyester plastics family and is used in making containers, water bottles, electrical insulators, and packaging containers. It is made from PET (polyethylene terephthalate) and glycol. PETG is hard, transparent, thermally resistant, and ductile.
The hardness and transparency of PETG make it good for making packages, especially for food products. It is also widely used in 3D printing because it is ductile with great thermal resistance.
PETG and TPU share some common characteristics, but they also exhibit some differences.
Similarities Between PETG and TPU
- Both PETG and TPU are synthetic polymers and belong to the plastic family
- Both are transparent and water-resistant.
- Both are used in 3D printing, where printers work under high temperatures
- Both are non-biodegradable and long-lasting.
- Both can be molded and remolded to different shapes.
Differences Between PETG and TPU
- PETG is ductile while TPU is elastic
- PETG is a polyester, while TPU is a polyurethane
- PETG is brittle while TPU is non-brittle
- PETG is ideal for making electronic insulators, while TPU is commonly used in making common objects
PETG and TPU are different substances. They have similarities in characteristics and uses but also exhibit some differences.
Is TPU Filament Toxic?
Under normal conditions, TPU is non-toxic. It has great thermal stability and does not wear out easily. Being unreactive, it is safe for handling food substances and medicine.
However, TPU filament can become toxic under extreme conditions such as high temperatures and frequent contact with water.
During extrusion, TPU withstands temperatures up to 250°C. Beyond this temperature, the filament degrades and can become toxic. Likewise, too much contact with water makes TPU wear out and may render it toxic.
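The temperature limit above can be expressed as a tiny guard check. This is a minimal sketch only: the function name and the use of 250°C as a hard cutoff are illustrative assumptions, not part of any slicer or firmware API.

```python
# Minimal sketch of a TPU temperature guard, based on the article's stated
# limit that TPU degrades (and can become toxic) above roughly 250 °C.
# The threshold and function name are illustrative assumptions.

TPU_MAX_SAFE_C = 250  # upper extrusion temperature cited in the text

def tpu_temp_ok(nozzle_temp_c: float) -> bool:
    """Return True if the nozzle temperature stays within TPU's stated safe range."""
    return nozzle_temp_c <= TPU_MAX_SAFE_C

print(tpu_temp_ok(230))  # a typical TPU extrusion temperature
print(tpu_temp_ok(265))  # beyond the stated limit
```

A real slicer profile would also bound the temperature from below, since TPU needs enough heat to extrude at all; this sketch only checks the upper limit discussed in the text.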
TPU filament is one of the most widely used polymers in the 3D printing industry because of its thermal resistance. Its inert nature makes it suitable for a wide range of products. If handled well, it lasts a long time and is economical. Its only drawback is environmental pollution, because it doesn’t decompose.
As a parent, protecting your child from COVID-19 is a top priority. The ongoing pandemic has brought new challenges and concerns for parents worldwide. However, with proper precautions and awareness, you can take steps to keep your child safe. This guide provides five essential tips to protect your child from COVID-19:
- Maintain good hygiene practices such as frequent handwashing and disinfecting commonly touched surfaces.
- Practice social distancing by maintaining a distance of at least 6 feet from people outside of your household.
- Encourage your child to wear a mask in public, ensuring it fits snugly over their nose and mouth.
- Promote healthy habits such as getting enough sleep, eating a balanced diet, exercising regularly, and staying hydrated.
- Utilize pediatric COVID-19 testing if your child exhibits symptoms or has been in contact with someone who tested positive.
1. Maintain Good Hygiene Practices
Maintaining good hygiene practices is one of the most effective ways to protect your child from COVID-19. This is because good hygiene practices reduce the spread of germs and bacteria that can cause illnesses.
Here are some good hygiene practices to encourage:
Wash Your Hands Frequently
Encourage your child to wash their hands often with soap and water for at least 20 seconds, especially after using the bathroom, before eating, and after touching potentially contaminated surfaces such as doorknobs and light switches. Teach them to hum the “Happy Birthday” song twice while washing their hands to ensure they are washing for the recommended amount of time.
Avoid Touching Face
Remind your child to avoid touching their face, particularly around their eyes, nose, and mouth. This can reduce the risk of virus transmission by preventing it from entering through mucous membranes or an open wound or sore. Encourage them to play games like counting how many times they touch their face daily to help them become aware of this habit.
Cover Coughs and Sneezes
Teach your child to cover their mouth and nose with their elbow or a tissue when they cough or sneeze, and dispose of used tissues properly. Remind them that coughing or sneezing into their hands can spread germs more easily. Ask them to practice “cough etiquette” by maintaining distance from others while coughing or sneezing and covering their mouth.
Clean Surfaces Regularly
Regularly clean and disinfect commonly touched surfaces in your home, such as doorknobs, light switches, remote controls, countertops, and faucets. Use EPA-approved disinfectants that are effective against the coronavirus. Remind your child to wash their hands after coming into contact with these surfaces.
2. Practice Social Distancing
Social distancing is another crucial measure to protect your child from COVID-19. Teach your child to maintain a distance of at least 6 feet from people outside their household, especially in crowded places or when physical distancing is challenging. Avoid large gatherings and crowded areas, such as parties, playgrounds, or indoor events, where the risk of transmission is higher.
Encourage your child to avoid close physical contact with others, including handshakes, hugs, and high-fives. Instead, promote alternative ways of greeting, such as waving or nodding. Explain to your child the importance of social distancing in preventing the spread of the virus and protecting themselves and others.
3. Wear Masks
Wearing masks can be an effective tool in preventing the spread of COVID-19. Encourage your child to wear a mask in public, especially when maintaining social distancing may be challenging, such as in a crowded place or when interacting with people outside of their household. Ensure the mask fits snugly over your child’s nose, mouth, and chin without gaps.
Choose masks made of multiple layers of breathable material and appropriate for your child’s age and size. Avoid masks with valves, as they may not provide adequate protection. Teach your child how to wear and remove a mask properly, and emphasize the importance of not touching their face or adjusting their mask while wearing it.
4. Promote Healthy Habits
Promoting healthy habits can boost your child’s immune system and help protect them from COVID-19. Encourage your child to maintain a healthy lifestyle by eating a balanced diet, exercising regularly, and getting enough sleep. Adequate sleep is especially important for a strong immune system. Limit your child’s exposure to sugary foods and beverages, as excessive sugar intake can weaken the immune system.
Teach your child the importance of staying hydrated by drinking plenty of water throughout the day. Ensure your child has a well-balanced diet that includes a variety of fruits, vegetables, whole grains, lean proteins, and healthy fats. Also, encourage your child to engage in regular physical activity, such as playing outdoors, cycling, or practicing yoga, while following appropriate safety measures.
5. Utilize Pediatric COVID-19 Testing
If your child has been in contact with someone who tested positive for the virus or exhibits symptoms, it is important to utilize reliable pediatric COVID-19 testing. A reliable pediatric COVID-19 testing center can help you determine whether your child has contracted the virus and assist with receiving the necessary treatment.
Pediatric COVID-19 testing can also reassure you as a parent, as it can identify potential cases before the onset of symptoms and help prevent the further spread of the virus. Additionally, early detection through pediatric COVID-19 testing may reduce the severity of symptoms in your child who has contracted the virus.
Protecting your child from COVID-19 requires a combination of good hygiene practices, social distancing, wearing masks, promoting healthy habits, and utilizing pediatric COVID-19 testing. Following these five essential tips can help keep your child safe and reduce the risk of virus transmission. Remember to stay informed about the latest guidelines and recommendations from public health officials and healthcare providers and adjust your behavior accordingly.
Potential exposure to rabies is a medical urgency, not an emergency, but decisions must not be delayed. Any wounds should be immediately washed with soap and water. If available, a virucidal agent such as diluted iodine solution should be used to irrigate the wounds. Medical attention from a health care professional should be sought for any trauma due to an animal attack before considering the need for rabies vaccination.
Report an Exposure
If you believe you may have been exposed to the rabies virus call the Maricopa County Department of Public Health at 602-747-7500 (24 hours a day).
How Rabies Exposure Occurs
The rabies virus becomes noninfectious when it dries out and when it is exposed to sunlight. Different environmental conditions affect the rate at which the virus becomes inactive, but in general, if the material containing the virus is dry, the virus can be considered noninfectious.
People usually get rabies from the bite of a rabid animal. It is also possible that people may get rabies if infectious material from a rabid animal gets directly into their eyes, nose, mouth or a wound.
An Exposure
- Bite or puncture wound.
- Saliva, brain or nervous tissue that gets directly into eyes, nose, mouth, or a wound.
Not an Exposure
- Petting or handling an animal.
- Contact with blood.
- Contact with urine/feces.
Any contact with bats requires special consideration as the rabies virus can be transmitted from a minor or unrecognized bat bite. All instances of human contact with bats should be assessed by the local health department.
For more information on avoiding contact with wild animals, visit the Living With Wildlife page at the Arizona Game and Fish Department website.
Your teacher is pointing out that most physical systems obey a time-reversal symmetry. That is, take a process (such as light or sound emission), run the clock backwards, and indeed you'll find that the inverse process happens. So if you set up the system to run in reverse, in this narrow definition the system can act as both an emitter and absorber. Now, there are some situations where time-reversal symmetry is broken, such as with many magnetic systems. Look up optical isolators, which use magnetism to allow light to propagate only one way through them (not backwards). Put an optical isolator in front of your LED, and the total system will emit light, but not absorb it (the light could be reflected instead).
As Samuel Weir mentioned in the comments, there is also a complication when you include thermodynamic processes. These include resistors heating up when you run a current through them (perhaps causing an incandescent bulb to emit). While each microscopic interaction therein might technically observe time-reversal symmetry, you would never be able to set up the experiment to "run time in reverse". This is because entropy tends to increase with time. If your emitter device starts with a low-entropy state (e.g. a cold resistor carrying a current) and evolves to a high-entropy state (e.g. a hot resistor with a current), you generally won't be able to undo this with an inverse process. This is why you can't take any old resistor, cool it down, and get a voltage drop.
Another example of entropy getting in the way of perfect reciprocity is the LED. An LED will emit light with a color corresponding to its band gap. However, it can absorb light at colors equal to or bluer than the band gap (that is, higher-energy photons). So a red LED can absorb blue light, but it won't emit blue light. That's not to say that it could never emit blue light (the absorption is time-reversal symmetric, after all), just that it is exceedingly unlikely to happen, because thermodynamically the electrons don't like to stay in the blue states for very long, and they will quickly move down to the red states, losing the excess energy to heat.
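The band-gap argument can be made concrete with a small calculation using E(eV) = hc/λ ≈ 1239.84/λ(nm). The sketch below is illustrative: the 1.9 eV band gap assumed for a red LED is a typical ballpark figure, not a measured value from any specific device.

```python
# Which photons can an LED absorb? A photon can be absorbed across the band
# gap only if its energy is at or above the gap energy (E_photon >= E_gap).
# The red-LED band gap below is an assumed, illustrative value.

H_C_EV_NM = 1239.84  # h*c expressed in eV*nm, so E(eV) = 1239.84 / lambda(nm)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given vacuum wavelength in nm."""
    return H_C_EV_NM / wavelength_nm

def can_absorb(wavelength_nm: float, band_gap_ev: float) -> bool:
    """True if a photon at this wavelength has enough energy to cross the gap."""
    return photon_energy_ev(wavelength_nm) >= band_gap_ev

RED_LED_GAP_EV = 1.9  # assumed band gap of a red LED (emission near 650 nm)

for nm in (450, 650, 850):  # blue, red, infrared
    print(nm, round(photon_energy_ev(nm), 2), can_absorb(nm, RED_LED_GAP_EV))
```

Running this shows blue (450 nm) photons comfortably clear the gap while infrared (850 nm) photons fall short, which is exactly the asymmetry described above: absorption works for the gap energy and above, emission comes out at the gap energy.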
By Karin Sternberg Photographs by Jenny Cullinan and Karin Sternberg (all photographs and videos are protected by copyright)
When people hear the word honeybees, they usually think of bees in boxes and as the source of honey. Little does one know, that there is far more to honeybees than hives and honey. Here in the winter rainfall area of South Africa, the majority of honeybees occur in the wild where nesting sites are selected mainly under rocks or in rock crevices with the physical environment largely determining nesting behaviour. The dominant vegetation is fynbos (heathland) and the Cape honeybee (Apis mellifera capensis) is endemic to this region. The wild honeybees use a prolific amount of propolis to insulate the nest from temperature and humidity fluctuations, which also serves as an effective fire barrier (Tribe et al. 2017). The fynbos vegetation is adapted to fire which is essential for its perpetuation and preservation. An abundance of plant resins and waxes occur within these fynbos plants, largely as chemical defences against herbivory, which offers a diverse and unique source of resins for creating propolis. The propolis wall is therefore also an integral part of the bees‘ immunity with its alchemy of organic compounds offering important antibacterial and anti-fungal properties to the colony. Not only has the Cape honeybee adapted to living in this fire-prone region, but a number of animal species have adapted to living in association with the wild Cape honeybee, such as the Ten-spotted ground beetle, Anthia (Termophilum) decemguttata.
Bees are the most important pollinators of flowering plants worldwide and are ecological keystone species. By co-evolving with angiosperms, bees have contributed decisively to the present phytodiversity and the structure of the terrestrial vegetation and ecosystems (Kuhlmann 2010). The Cape fynbos region is the smallest of the six floral kingdoms in the world, but the most diverse in terms of species’ richness. The existence of a small population of the Ten-spotted ground beetle is partially dependent, too, on the wild honeybee, as observed at a wild honeybee nest in the Table Mountain National Park, Cape of Good Hope Section. Once one starts observing the honeybee in its natural habitat, there is a fascinating array of interconnections waiting to be discovered.
All year round we have observed this particular ground beetle on our walks across the Cape Peninsula while tracking honeybees in flight and searching for wild colonies. But, it was only while monitoring this nest that we realised the dependence of the beetle on the honeybee as a source of food. The nest was recently discovered and is at the highest elevation at 190m above sea level of the 93 nesting sites found to date in the Cape Point Section. The nesting site has a south west entrance orientation, with a protected landing area and the colony is deeply recessed under rock with a long and narrow propolis wall, measuring 1100mm (l) by 100mm (h). The nest entrance is surrounded by Metalasia, Syncarpha vestita, Hermas villosa, Restio patens and Diastella divaricata fynbos plants.
The beetle is elongate, roughly 50mm in total length, dull black in colour, has prominent brown eyes, the head is large and flattened and the jaw juts forward to facilitate the capture of prey. It has a reddish-brown heart-shaped thorax, each side marked with a small white spot. The antennae are thin and long and equipped with keen senses of touch and smell. The legs are strong and well suited for running (Scholtz & Holm 1985). The elytra, or wing cases, are sculptured with a number of longitudinal grooves. Each elytron has five spots of white down (The Naturalist’s Library, Vol. 2). They cannot fly as their wing cases (elytra) are fused, forming a strong covering for the abdomen; the membranous wings beneath the wing cases have disappeared (Skaife 1979). The colouration, spots and intensity of the white spots can vary, as we noted when we saw several of these beetles together at this nest location. Being black, they absorb heat which enables them to become active earlier in cold conditions.
At this particular location we watched as a single beetle warmed up under a rock overhang three metres from the ridge of rocks within which the honeybee colony is located. Between the beetle and the colony were low fynbos shrubs and exposed sandy patches; a controlled burn having taken place in April 2015 in this area. Its abdomen faced into the sun, its head slightly hidden from view under rock. At approximately 10:30am the beetle started moving towards the nest under the protective canopy of fynbos and restiads. At this time we noticed a convergence of at least two other beetles of the same species moving towards the nest. Directly at the nest entrance and in the path of the exiting and returning foragers, slightly hidden from our view by the tufted reed Restio patens, two individuals started mating. Guard bees continually monitored the two beetles, sometimes flying in close and almost buzzing the beetles, at other times flying into the beetles. On one occasion the male tried to kick out at the guard bee. Otherwise the beetles did not seem to be disturbed by the presence of the guard bees. The mating process was a long affair of 45min and we captured on video a foot-tapping display by the female.
After mating was complete, 4 – 6 beetles were spotted in the vicinity of the nest, emerging from different directions. The activity at the nest was heightened, while the sound from the bees changed and became louder. Guard bees started zig-zagging close to the ground through the undergrowth and between the plants and restiads, and patrols became more prolific. The beetles started hunting, running up the sandy clearing directly under the flight path of the foragers, sometimes in pairs, and sometimes at least three were close to the nest. One of the beetles ran up the rock face, along and down, only to drop into the nest entrance from the rock overhang above. Another beetle ran up a cluster of a grass-like plant and waited for an opportunity to hunt. Several returning and emerging bees became caught in the curly restiads protruding into the nest entrance. In addition, the bees of this colony were unusually clumsy, often landing upside down or falling sideways, a phenomenon only otherwise seen at one other nest. In fact, this nest is the closest in proximity to the nest we had aptly named “Clumsy Nest” after this extraordinary behavioural trait. We considered whether these nests were directly related.
These beetles are formidable hunters and fast on foot. They quickly caught and subdued any forager (female worker bee) or drone (male bee) tangled in the restiads. The guard bees immediately chased the beetle predator, probably in response to the distress pheromone discharged by the trapped bee, but the guard bees had little impact on the beetles and their hunting activities. The beetles with their mouthparts adapted for biting and chewing (Skaife 1979) were quickly able to consume the bees under cover of the fynbos. After one beetle carried away a drone in its mandibles, another beetle came towards it, but there was no tussle and the oncoming beetle merely turned away. The beetles appear not to share their prey. On several other occasions we witnessed fighting amongst the beetles with attacks from behind and two males rolling as if in a skirmish.
It did not appear as if the beetles, known locally as “Oogpister”, used their chemical defence mechanism to squirt formic acid in response to feeling threatened (Scholtz & Holm 1985) by the bees. The local name is derived from the squirting of this foul and irritating liquid into the eyes or mouth of predators such as lizards, toads, birds and various mammals. The chlorine or bleach-like odour is easily perceptible if the beetle feels threatened, causing it to squirt this liquid consisting of benzoquinone compounds. The aposematic or warning colouration of red and black is usually a deterrent to such predators.
The heightened bee activity between 12:30 and 13:30 attracted not only the Ten-spotted beetles, but also Black girdled lizards and Southern rock agamas. Two smaller orientation flights took place during this period amidst loud buzzing sounds from the honeybee colony. There were a number of drones present. The beetles often took cover in a protected nook slightly inside the nest recovery area and close to where many of the bees clumsily landed. Particularly the drones would land, walk up and along the back wall and then down and through the nest entrance hole in the propolis wall.
Since documenting this behaviour at ‘Nest 93’, we have seen it at other nests. By additionally preying on dead bees that have been removed from a nest, these beetles play a vital role in the wider hygiene of the nesting site. When a beetle thought itself overly formidable at ‘Hope Nest’ and ran in under the ball of bees hanging from their comb, a number of guard bees quickly engulfed it and grounded it indefinitely.
The presence of this carabid beetle species is just one example of adaptation to the largely ground-nesting behaviour of the Cape honeybee in the fynbos biome. It highlights the importance of protecting natural habitats to foster species biodiversity; a biological diversity alive with a variety of living organisms and natural processes.
With many thanks to Dr Manfred Uhlig, Museum für Naturkunde Berlin, for his invaluable input.
Kuhlmann, M. (2010). More than just honey.
Scholtz, C.H. & Holm, E. (1985). Insects of Southern Africa. Butterworths, Durban. 502 pgs.
Skaife, S.H. (1979). African Insect Life. Struik. 279 pgs.
Students can download History Chapter 2 Geographical Features & Pre-Historic India Questions and Answers, Notes, KSEEB Solutions for Class 8 Social Science, which help you revise the complete Karnataka State Board Syllabus and score more marks in your examinations.
Karnataka State Syllabus Class 8 Social Science History Chapter 2 Geographical Features & Pre-Historic India
Class 8 Social Science Geographical Features & Pre-Historic India Textual Questions and Answers
Complete the following sentences:
- Geographically, India is a …….. (Answer: subcontinent)
- Signs of ashes have been found in the caves of …….. (Answer: Kurnool)
- The tools of the Middle Stone Age are called ……….. (Answer: delicate stone tools)
II. Answer the following questions in brief
Describe the geographical features of India briefly.
- India’s geographical features comprise the Himalayan mountains and the Indo-Gangetic Plain in the north.
- The Deccan Plateau and coastal regions lie in the south.
- The Bolan and Khyber passes lie on the north-western side.
- The eastern coastline is called the Coromandel Coast; the western coastline comprises the Konkan and Malabar coasts.
What are the valleys through which the attacks on India have been taken place?
The attacks have mainly come from the north-western side through the Bolan and Khyber passes, and also through the Gilan passes on the north-eastern side.
What is meant by the ‘Pre-historic’ Age?
The period of history for which we have no support or evidence of written documents is called the pre-historic period. The history of this age can be understood through archaeological surveys and the weapons, utensils, and other implements found from that age.
How did animal husbandry and dairying start?
About 12,000 years ago, a rise in temperature on Earth led to the development of grasslands and an increase in animal numbers. Old Stone Age man observed these animals keenly and, instead of hunting and eating some of them, caught and nurtured them. This was the way in which animal husbandry and dairy farming began. Man began to domesticate these animals for food and leather, and later used them for agricultural purposes.
The different periods of the pre-historic age have been given various names by archaeologists. What are they?
The archaeologists have given different names to pre-historic periods. They were:
- Old Stone Age: This is further divided into the Early, Middle, and Late Old Stone Age.
- Middle Stone Age: People used delicate stone implements during this age.
- New Stone Age: People of this age used grinding stones to grind leaves and herbs.
Class 8 Social Science Geographical Features & Pre-Historic India Additional Questions and Answers
I. Multiple choice questions:
India, surrounded by water on three sides and land on one side, is a
a. Sub Continent
d. Coastal region
The Indian Coastline is vast and stretches over …… Kms
The eastern Coastline is called the
a. Coromandel Coast
b. Malabar Coast
c. Konkan Coast
d. Cenara Coast
a. Coromandel Coast
People learned to weave cloth during the
a. New Stone Age
b. Middle Stone Age
c. Old Stone Age
d. Late Stone Age
a. New Stone Age
II. Answer the following questions:
Which are the neighboring Countries of India?
Pakistan, Afghanistan, China, Nepal, Bhutan, Bangladesh & Myanmar
Write the importance of the Indo-Gangetic plains.
- Extremely fertile.
- The Indus Valley and Vedic civilisations flourished here.
- Many dynasties were established here.
“The Indian coastline played the main role in ancient times.” How?
- Many ports attracted the Romans from time immemorial.
- Foreign trade in those days was carried on only through sea routes.
- Rise of powerful kingdoms, e.g. the Pandyas, the Cholas, etc.
Who are Archaeologists?
The scholars who study the pre-historic period are called archaeologists.
Where did the Pre-Historic man live?
- They lived on the banks of rivers and lakes.
- The relics of hunting and food-gathering humans are available at Bimbetka, Hunasagi, and Kurnool.
Which were the main crops in the Pre-historic Age?
Grains and cereals like rice, wheat, and barley.
Write the features of the ‘Middle Stone Age’?
- The period from about 12,000 to around 10,000 years ago.
- The tools were very small and delicate stone tools.
- They used those tools as axes and saws.
- Old tools continued to be used.
Write a note on knowledge of fire
- Signs of ashes reveal the knowledge and use of fire by the people of the Stone Age.
- Fire was probably used for various purposes: to cook food, for light, etc.
Classroom Talk for Social Change
Critical Conversations in English Language Arts
Melissa Schieble, Amy Vetter, Kahdeidra Monét Martin, and Rebecca Rogers
Teachers College Press
Learn how to foster critical conversations in English language arts classrooms. This guide encourages teachers to engage students in noticing and discussing harmful discourses about race, gender, and other identities. The authors take readers through a framework that includes knowledge about power, a critical learner stance, critical pedagogies, critical talk moves, and vulnerability. The text features in-depth classroom examples from six secondary English language arts classrooms. Each chapter offers specific ways in which teachers can begin and sustain critical conversations with their students, including the creation of teacher inquiry groups that use transcript analysis as a learning tool.
- Strategies that educators can use to facilitate conversations about critical issues.
- In-depth classroom examples of teachers doing this work with their students.
- Questions, activities, and resources that foster self-reflection.
- Tools for engaging in transcript analysis of classroom conversations.
- Suggestions for developing inquiry groups focused on critical conversations.
Melissa Schieble is an associate professor of English education at Hunter College of the City University of New York. Amy Vetter is a professor in English education in the School of Education at the University of North Carolina Greensboro. Kahdeidra Monét Martin is a presidential research fellow and doctoral candidate in Urban Education at the Graduate Center of the City University of New York. |
To the unaided eye the famous bright star Antares shines with a strong red tint in the heart of the constellation Scorpius. It is a huge and comparatively cool red supergiant in the late stages of its life, on the way to becoming a supernova. A team of astronomers, led by Keiichi Ohnaka, of the Universidad Catolica del Norte in Chile, used ESO's Very Large Telescope Interferometer (VLTI) at the Paranal Observatory in Chile to map Antares' surface and to measure the motions of the surface material. This is the best image of the surface and atmosphere of any star other than the Sun.
For the average observer, the Solar Eclipse on Monday, August 21, 2017, will last about 2 minutes and 40 seconds of totality. The National Solar Observatory (NSO), in a unique experiment, plans to create 90 minutes of continuous totality using a chain of 68 telescopes strategically placed across the country. The Citizen CATE (Continental America Telescopic Eclipse) Experiment aims to capture images of the inner solar corona using a network of telescopes operated by volunteer citizen scientists, high school groups, and universities. The goal of CATE is to produce a scientifically unique data set -- a series of high-resolution, rapid-cadence, white-light images of the inner corona for 90 straight minutes.
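A quick back-of-the-envelope check, using only the numbers quoted above (68 sites, 90 minutes of continuous coverage, about 2 minutes 40 seconds of totality at any one site), shows why the relay is feasible. The even-split assumption below is a simplification that ignores the deliberate overlap between neighboring sites.

```python
# Back-of-the-envelope check of the CATE relay: 68 sites must together tile
# 90 minutes of continuous totality. Assuming an even split with no overlap
# (a simplification), each site needs to cover only this many seconds:

sites = 68
total_seconds = 90 * 60          # 90 minutes of continuous coverage
local_totality_s = 2 * 60 + 40   # ~2 min 40 s of totality at any one site

needed_per_site = total_seconds / sites
print(round(needed_per_site, 1))            # seconds required per site
print(needed_per_site < local_totality_s)   # comfortably within local totality
```

Each site needs to contribute roughly 80 seconds on average, about half of its local window of totality, which is what leaves room for the overlapping hand-offs between telescopes.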
Total solar eclipses are unique opportunities for scientists to study the hot atmosphere above the Sun's visible surface. The faint light from the Corona is usually overpowered by intense emissions from the Sun itself. During a total eclipse, however, the Moon blocks the glare from the bright solar disk and darkens the sky, allowing the weaker coronal emissions to be observed. A team led by Southwest Research Institute will use airborne telescopes aboard NASA WB-57 research aircraft to study the solar corona and Mercury's surface during next week's total solar eclipse. The August 21 observations will provide the clearest images to date of the Sun's outer atmosphere. In addition, the scientists will attempt to take the first-ever thermal images of surface temperature variations of the planet Mercury.
Imagine planting a single seed and with great precision being able to predict the exact height of the tree that grows from it. Now imagine traveling to the future and snapping photographic proof that you were right. If you think of the seed as the early universe and the tree as the universe the way it looks now, you have an idea of what the Dark Energy Survey (DES) collaboration has just done. DES scientists have just unveiled the most accurate measurement ever made of the present large scale structure of the universe, and have been able to map it back to the first 400,000 years following the Big Bang.
The Sun's core spins nearly four times faster than the Sun's surface according to new findings by an international team of astronomers. Scientists had originally assumed that the Sun was spinning like a merry-go-round with the core rotating at about the same speed as the surface. The researchers studied surface acoustic waves in the Sun's atmosphere, some of which penetrate to the Sun's core, where they interact with gravity waves that have a sloshing motion similar to how water would move in a half-filled tanker truck driving on a curvy mountain road. After the Sun formed, the Solar wind likely slowed the rotation of the outer part of the Sun. It is hoped that eventually, a better understanding of the rotation of the Solar core may give a clue to how the Sun formed.
So, what will you be doing during the Great American Solar Eclipse of 2017? Observing? Photographing? Throwing a Great American Eclipse party in your backyard? In past total solar eclipses, people have spent thousands of dollars on extravagant trips and cruises to remote parts of the world to cross this experience off of their "bucket lists." This year, you will be able to walk outside and see one for yourself from the comfort of your home (weather permitting). On August 21, 2017, most of the nation will only see a partial eclipse, but if you are one of the lucky millions along the path of totality, you will experience about two minutes of mid-day darkness as the black shadow of the Moon races across the nation at about 2000 mph.
NASA's Juno mission completed a close flyby of Jupiter and its Great Red Spot on July 10, 2017, during its sixth science orbit. Just days after celebrating its first anniversary in Jupiter orbit, the Juno spacecraft flew directly over the planet's Great Red Spot, the gas giant's iconic, 10,000 mile wide (16,000 kilometer wide) storm. This was humanity's first up-close view of the gigantic feature -- a storm monitored by astronomers since 1830, and possibly existing for centuries before that.
Quantum teleportation has become a standard operation in quantum optics labs around the world. The technique relies on the strange phenomenon of quantum entanglement. This occurs when two quantum objects, such as photons, form at the same instant in time and in the same location in space, and so share the same existence. In technical terms, they are described by the same wave function. According to a report in MIT Technology Review, researchers in China have teleported a photon from the ground to a satellite orbiting more than 500 kilometers above. It may not yet be the same as Star Trek, but it is a start.
Is there a musical equivalent to the curvature of space-time? Gavin Starks thinks so. Yesterday he presented his findings at the Royal Astronomical Society's National Astronomy Meeting held at the University of Hull in the UK. Starks, who has a background in radio astronomy and electronic music, has been working on "Acoustic Cosmology" for more than 20 years in collaboration with Professor Andy Newsam of Liverpool John Moores University in the UK. Their aim is to test whether mathematical relationships that describe cosmology and quantum mechanics can be applied to a Sonic Universe, or "Soniverse."
Remains of microorganisms at least 3.77 billion years old have been discovered by an international team led by University College London scientists, providing direct evidence of one of the oldest life forms on Earth. Tiny filaments and tubes formed by bacteria that lived on iron were found encased in quartz layers in Quebec, Canada, where some of the oldest sedimentary rocks known on Earth exist. These rocks likely formed part of an iron-rich deep sea hydro-thermal vent system that provided a habitat for Earth's first life forms between 3.77 and 4.30 billion years ago. Earth was formed 4.54 billion years ago, so it appears that life on Earth emerged rather early in its history.
Dating back to the first century AD, scientists, philosophers, and other observers have noted the occasional occurrence of "Bright Nights," when an unexplained glow in the night sky lets observers see distant mountains, read newspapers, or check their watches. Few, if any, people observe Bright Nights anymore due to widespread light pollution, but new findings show that they can be detected by scientists and may still be noticeable in remote areas. The new study suggests that waves in the upper atmosphere converge over specific locations on Earth and amplify naturally occurring airglow -- a faint light in the night sky that often appears green due to the activity of oxygen atoms in the upper atmosphere. Normally, people don't notice airglow, but on Bright Nights it can become visible to the naked eye, producing the unexplained glow detailed in historical observations.
Did our Sun have a twin when it was born 4.5 billion years ago? Almost certainly yes -- though not an identical twin... And so did every other Sun-like star in the Universe, according to a new study. Many stars have gravitationally bound companions, including our nearest neighbor, Alpha Centauri, a triplet system. Astronomers have long sought an explanation -- Are binary and triplet star systems born that way? Did one star capture the other? Do binary stars sometimes split up and become single stars? The new study, based on a radio survey of a giant molecular cloud in the constellation Perseus and a mathematical model that can explain the Perseus observations only if all Sun-like stars are born with a companion, suggests that yes -- all Sun-like stars in the Universe start life as binaries.
A University of Wisconsin analysis has shown that our galaxy resides in an enormous void -- a region of space containing far fewer galaxies, stars, and planets than expected. This idea that we exist in one of the holes of the Swiss cheese structure of the cosmos helps explain inconsistencies in the measurement of the Hubble Constant, the unit that cosmologists use to describe the rate at which the universe is expanding. No matter what technique one uses, we should get the same value for the expansion rate of the universe, but we don't. The reason is that the void has far more matter outside, which exerts a larger gravitational pull towards the inside "wall" of the void. This affects the Hubble Constant value as measured from a technique that uses supernovae, while it has no effect on the value derived from a technique that uses the Cosmic Microwave Background.
Researchers have uncovered 300,000 year old fossil bones of Homo sapiens in Jebel Irhoud, Morocco -- a find that represents the oldest reliably dated fossil evidence of our species. The find is 100,000 years older than any other previously discovered Homo sapiens fossils. Amazingly, the facial shape of the skulls is almost indistinguishable from that of modern humans living today. Previously, the oldest Homo sapiens fossils were discovered at two sites in Ethiopia, dated to 195,000 and 160,000 years ago. Consequently, many researchers believed that all humans living today descended from the population that lived in East Africa around 200,000 years ago. But this new find suggests that early Homo sapiens spread across the entire African continent, and that long before the out-of-Africa dispersal of Homo sapiens began, there was dispersal within the African continent.
NASA's Juno mission to Jupiter is the second spacecraft designed under its New Frontiers Program. The first was the New Horizons mission to Pluto, which flew by the small planet in July 2015 after a nine and a half year flight. Early science results from NASA's Juno mission to Jupiter portray the largest planet in our Solar System as a complex, gigantic, turbulent world, with Earth-sized polar cyclones, plunging storm systems that travel deep into the heart of the gas giant, and a mammoth, lumpy magnetic field. With its suite of science instruments, Juno will investigate the possible existence of a solid planetary core, map Jupiter's intense magnetic field, measure the amount of water and ammonia in the deep atmosphere, and observe the planet's auroras.
The symbolic revolt of 18 soldiers in 1922 had a lasting impact on Brazilian politics, and contributed to the downfall of the Old Republic in 1930. The incident started when the Governor of Minas, Artur Bernardes, was proposed as a presidential candidate by the São Paulo-Minas League. In order to lessen his favour among the army, fake letters were leaked to a Rio newspaper, causing serious offense amongst military officers. Despite this dirty trick, of which he was eventually absolved, Bernardes went on to win the election. The government then tried to use troops for political purposes in Pernambuco, overriding the army's policy of avoiding partisan conflicts. The effect of these two incidents was to create a high degree of tension between the government and the army, particularly the Lieutenants. These officers were perhaps additionally emboldened by the end of the First World War, and their behaviour can be seen as indicative of a wider trend in middle-class political activism.
The Lieutenants organised a revolt, supposedly timed to occur simultaneously in several bases. However, the Republican government heard rumors that the army was on the verge of revolt, and pre-emptively shut several bases. One army base, Forte de Copacabana, remained in the hands of the revolutionaries. The government told the men to surrender or die. After letting the enlisted men leave, the remaining 18 Lieutenants, each carrying a piece of the Brazilian flag, left their stronghold at the Copacabana Fort and marched towards Catete Palace, intent on making their demands for social and political reform heard by the Executive Power, at any cost. Blocking their path were 3000 republican troops. By the end of the day, only two Lieutenants were left standing.
The Old Republic, which was dominated by members of the old coffee-growing families, congratulated itself on having quelled the rebellion. But the legend of the Lieutenants did not fade into history, and when Getulio Vargas finally overthrew the Old Republic in 1930, the ‘revolt of the 18’ was cited as a landmark event.
Characteristic Dialogue Worksheet. Often we are drawn to a specific character in the books and stories we read in class or at home; help your child tap into this fictional attraction by sharing about their favorite character. After writing an end to the story, children will answer reading comprehension questions that will encourage them to think about character traits, plot, and dialogue; this worksheet was designed for… Gapped dialogue: give students the text of an episode, but with one character's lines deleted. We'd love to publish them, and any worksheets you'd like to share, on this page.
Punctuation is used in direct speech to separate spoken words, or dialogue, from the rest of a story; the words spoken by a character sit… This story-building grid worksheet from Teachit… Empathy is the ability to recognize, understand, and share the thoughts and feelings of another person, animal, or fictional character; developing empathy is crucial for establishing relationships. When crafting your resume and cover letter, keep in mind that employers aren't just looking at your technical ability, but also value soft skills: character traits and qualities that may not be…
Ruhi is displaying characteristics of a(n): 1. creative thinker; 2. convergent thinker; 3. rigid thinker; 4. egocentric thinker. 13. In a situation of less participation of students belonging… Leaders can use this worksheet and accompanying questions… closely aligned views among employees regarding which cultural characteristics are salient in the company when assessing a culture. Nehama sent her responses in the next worksheet, creating an ongoing dialogue, soon mostly through word of mouth… gives the reader insight into this character's state of mind despite her…
She described human factors engineering, noting that the FDA requires knowledge about human behavior, abilities and limitations, and other characteristics of medical… the prototype designs and… If you write enough plays, you learn that dialogue can do anything. In "Adam," the character lives on a very high floor of a fairly cheap new dorm building; the elevators in buildings…
Characterization Worksheets | Ereading Worksheets
Thank you so much for these. I have been using them in my beginning 7-8 grade theatre class. Trying to develop deep thoughts and ideas about characterization has not only helped them in their acting and understanding of what it takes to become a role, but these worksheets have also helped them in their other classes with writing, creativity, and imagination.
Identifying Characters and Their Dialogue Worksheet
Who said it? Your third graders are learning to read text and identify characters and their dialogue. With this worksheet, your students will read a passage and then indicate what each character had to say.
10 Dialogue Worksheets: How to Facilitate Roleplaying
Using roleplaying and dialogue worksheets can really open up your students and classes in a whole new way. For example, thinking of sales techniques or multi-use items can really spark imagination and creativity in students. We've also included some of the classics, like charades, comic books, and movie-related dialogue activities. What type of dialogue activities do…
English ESL Dialogue Worksheets: Most Downloaded (229)
Then a worksheet to make a dialogue (26,508 downloads). "At the doctor's: useful expressions roleplay" by mobscene123: useful expressions to use when we go to a doctor; I divided this worksheet into 5 parts, the different stages of a doctor visit and the expressions we use (22,368 downloads). "Talk about ability, permission, necessity" by Alexandrina: modals part 2, introducing new aspects of modal verbs to…
Writing Dialogue Worksheets
Related ELA standard: W.4.3.B. Answer keys here. When we are writing about a conversation, this exchange is referred to as dialogue. The conversation can be between just two people or a whole crowd. Theater roots itself entirely in dialogue. The actual speech in written form is encased in quotation marks; these marks let the reader know that these were words spoken.
Dialogue Completion Exercises Worksheets (KET, PET, IELTS)
More dialogue exercises: 1. dialogue completion exercises 1-2; 2. ESL conversation exercise; 3. dialogue examples; 4. conversation completion; 5. English dialogue practice; 6. ESL dialogue worksheet; 7. ESL dialogue completion. Also see everyday dialogue examples. English levels for these exercises: ILSC level B4-I1, Canadian Language Benchmark 4-5, Common European Framework A2-B1.
Dialogue Sheets: A New Tool for Retrospectives
A dialogue sheet is a sheet of paper eight times bigger than a regular A4 or letter-sized sheet. Over the last year, agile and traditional teams have taken to using these sheets for retrospectives.
Characterization Lesson | Ereading Worksheets
CCSS.ELA-Literacy.RL.8.3: Analyze how particular lines of dialogue or incidents in a story or drama propel the action, reveal aspects of a character, or provoke a decision. CCSS.ELA-Literacy.RL.9-10.3: Analyze how complex characters (e.g., those with multiple or conflicting motivations) develop over the course of a text, interact with other characters, and advance the plot or develop the…
English Vocabulary Exercises: Dialogues, Sentences
Dialogues and sentences: free English vocabulary exercises, elementary level. Learning English online; vocabulary quizzes.
Verb To Be Worksheets: Handouts to Print, Printable
To be: worksheets, handouts, printable exercises, and grammar lessons. To be, simple present. Resources for ESL.
The correct paper position and tilt enables your child to handwrite comfortably while being able to see what they are writing. It also allows the non-writing hand to move the paper up the table so that the writing hand elbow can stay in the same position. The aim is to have the paper move up the table, rather than the writing hand moving down and eventually off the table.
As a class teacher I noticed that by the time children were about 8 or 9 years old it was very difficult to encourage them to move the paper up the table as they wrote. They would very often move their writing hand down the table, keeping the paper still, struggling to write properly as their hand hung over the edge of the table. Bad habits start early and can be difficult to change; good paper position and paper movement training at an early age can make such a difference to a child’s handwriting ability.
So why is paper position and tilt often ignored when teaching handwriting?
Maybe it is because experts disagree on what the most appropriate paper tilt is for right- and left-handed writers. As there is no clear guidance, people become uncomfortable about giving advice and so brush over the value of angling the paper for handwriting.
The most appropriate paper tilt angle is generally suggested as anywhere between 20 to 45 degrees anti-clockwise for right-handed writers and 30 to 45 degrees clockwise for left-handed writers.
For more tips and advice on developing a good paper tilt angle checkout this section of our website: http://bit.ly/2QSssWQ |
The history of the process of vaccination, and of the concept of vaccinating, is thousands of years old (>3000 years). It originated in the ancient Indian peninsula (Northern and Eastern India) as a practice of variolation/inoculation (the immunization of individuals with materials taken from an infected person) by “Woodiah” (Oriya) Brahmans since “time immemorial”, its time of origin being unidentifiable. The protective measures of variolation/inoculation are described in detail in the ancient Sanskrit text called Sacteya, mainly devoted to Dhanwantari, the physician. The technique may thereafter have spread to China through the transfer of education and knowledge, as Chinese scholars were visiting the world’s oldest universities (Nalanda and Takshashila Vishwavidyalaya, or University). Hence the technique of variolation moved from ancient India to China around 1000 CE. Thereafter, the technique of variolation traveled to Africa and Turkey before its arrival in the European and American continents. Before the introduction of variolation there was no protective measure to counter the attack of smallpox, though the Greek historian Thucydides (430 BC) observed that an attack of smallpox provides protection to the person surviving it. Evidence indicates that smallpox first existed as a disease in ancient Egypt and traveled to ancient India through Egyptian traders visiting India during the first millennium BC. From India it then traveled to China in the first century AD and reached Japan in the sixth century AD. It spread to Europe in the eleventh and twelfth centuries, and from there to North America (seventeenth century) and Australia (eighteenth century).
It is well established that smallpox is described neither in the Old and New Testaments nor in the classical Greek (including the Hippocratic and Galenic writings) and Roman literatures. It was Abu Bakr Muhammad Ibn Zakariya al-Razi (864–930 CE), a Tehran (Iran)-born Muslim physician, who first differentiated between smallpox and measles based on their symptoms and clinical examination of the patients. However, the term smallpox is an English term for the disease, first introduced in India during the British rule; before that it was known in Eastern India as Masurika (for about 2000 years, as mentioned in the Charak and Sushruta Samhitas before the Christian era) or Basanta roga (also Paproga, Sitalika, Sitala, Gunri, and Guli), the spring disease. The concept of variolation or inoculation moved from India to England in the early eighteenth century (1721) through the British Lady Mary Wortley Montagu, who lived in the Ottoman Empire (1716–1718) and communicated this technique by letter to her friend in Britain (Miss Sarah Chiswell, who died of smallpox 9 years later) [2, 5]. In 1731 a Briton called Robert Coult, in Bengal, wrote a letter to Dr. Oliver Coult in England describing the procedure of variolation used in India to protect the local population from smallpox. Dr. Edward Ives (1773), a British naval surgeon, also observed the procedure of variolation as described by Robert Coult on his visit to India (Bengal) in 1755. Before the introduction of variolation/inoculation in England, the burden of infectious diseases including smallpox, measles, whooping cough, dysentery, scarlet fever, influenza, and pneumonia accounted, by the records, for the deaths of more than 30% of children below 15 years of age. The concept of variolation/inoculation was introduced in North America in 1721. By 1777, George Washington had ordered that all the soldiers and recruits of his army be inoculated/variolated.
Thus the introduction of the concept of variolation was the first step towards the development of Edward Jenner’s cowpox/smallpox vaccine, modern-day vaccines, and the concept of vaccination to fight infectious diseases.
Almost all textbooks of immunology and microbiology mention Edward Jenner as the father of immunology or vaccination due to his invention of the technique called vaccination: he injected cowpox immunogenic material (pus), isolated from the hand of Sarah Nelmes (a milkmaid who had caught cowpox from an infected cow called Blossom), into both arms of James Phipps (a young boy of 8 years) on May 14, 1796. However, the process of vaccination had been demonstrated almost 22 years before Edward Jenner by a farmer called Benjamin Jesty, yet hardly anyone knows Benjamin Jesty. Reports even indicate the existence in ancient India of the concept, in Sanskrit literature, of a cowpox vaccine to induce immunity against smallpox. It may be an injustice to the real discoverer of the concept of cowpox vaccination, but the journey of vaccines and vaccination had started and never looked back. The technique of variolation was banned (made illegal) in Britain in 1840, while Jenner’s vaccination was promoted and offered free of cost.
2. The development of vaccines from the early nineteenth to the twenty-first century
The early nineteenth century saw the development of the vaccination procedure against smallpox and its promotion in England, where it was offered free of charge. Its spread all over the world revolutionized the field of vaccination against several other infectious diseases. However, the scientific origin of vaccines took at least a century, following the discoveries made by Robert Koch and Louis Pasteur, which established the concept that pathogenic microbes/microorganisms cause infectious diseases. Pasteur initiated attenuation of these pathogens in his laboratory by different methods, including drying, heating, and exposing them to oxygen, or passaging them in different animal hosts. The first live attenuated vaccine was developed for rabies in 1885 and was used to immunize a boy named Josef Meister, bitten by a rabid dog. Thereafter, killed whole organisms were used to develop vaccines against cholera (1896), typhoid (1896), and plague (1897). In the second half of the twentieth century, oral polio vaccine (OPV, 1963), measles (1963), mumps (1967), and rubella (1969), all live attenuated vaccines, came out, along with several other vaccines: polio (injected, 1955), a killed whole-organism vaccine; anthrax, a protein-based vaccine (1970); hepatitis B surface antigen recombinant, a genetically engineered vaccine (1986); and hepatitis A, a whole killed-organism vaccine (1996).
In the twenty-first century, recombinant human papillomavirus (HPV) vaccines (quadrivalent in 2006, bivalent in 2009), the live attenuated Zoster vaccine (2006), and pneumococcal conjugates (capsular polysaccharide conjugated with a carrier protein; 13-valent in 2010) were developed. Furthermore, a live attenuated vaccine against dengue virus infection was developed by Sanofi Pasteur in 2016; it is called CYD-TDV and is sold under the brand name Dengvaxia [10, 11]. This live attenuated tetravalent chimeric vaccine was developed through recombinant DNA technology by replacing the PrM (pre-membrane) and E (envelope) structural genes of the attenuated yellow fever 17D strain vaccine with those from four of the five dengue serotypes. However, according to its manufacturer, Sanofi Pasteur, this dengue vaccine is recommended only for patients previously infected with dengue virus; otherwise it may exert adverse effects during subsequent infections. The vaccine is approved for use in 11 countries, including Mexico, the Philippines, Brazil, El Salvador, Costa Rica, Paraguay, Guatemala, Peru, Indonesia, Thailand, and Singapore [12, 13, 14, 15]. The dengue vaccine has shown consistent efficacy in healthy adults in Australia and is ready for the clinic, and it has shown immunogenicity and safety during a 5-year study. A most recent development in the field of vaccinology is the clinical trial of a live attenuated Zika virus vaccine at the Johns Hopkins Bloomberg School of Public Health Centre for Immunization Research in Baltimore, Maryland, and at the Vaccine Testing Centre at the Larner College of Medicine at the University of Vermont in Burlington. The Zika virus trial is sponsored by the National Institute of Allergy and Infectious Diseases (NIAID), USA. In addition to vaccines for infectious diseases, vaccines are also being developed against different cancers by targeting cancer-associated neoantigens.
The major aim of the book is to provide readers with updated information on the field. For example, the first chapter of the book, written by Dr. Raw Isaias, reviews progress in the innovation and development of new vaccines and vaccine candidates in developing countries like Brazil. In the second chapter, Dr. Dai provides a wealth of information regarding the different types of vaccines that will be informative for undergraduate and graduate students along with researchers. The third chapter of the book describes the regulatory journeys of applications involving genetically modified viral vectors and novel vaccine candidates already reviewed by GMO (genetically modified organism) national competent authorities in Belgium and in Europe; this chapter will be crucial for readers interested in regulatory affairs for vaccines developed via GMOs. In the fourth chapter, the author (Leunda Amaya) emphasizes vaccination strategies targeting wildlife reservoirs, including bats, boars, rodents, and other carnivorous animals serving as reservoirs for zoonotic viral infections in humans (rabies, Hantavirus infection, and hepatitis E virus). The fifth and last chapter of the book, written by Dr. Dulcilene, describes the development of vaccinia virus vectors for vaccines against leishmaniasis, a major problem for developing countries of Asia, Africa, and South America. Thus the book starts with this introductory chapter on the history and present status of vaccines, followed by chapters contributed by authors well known in their fields, and is intended to provide current and updated knowledge in the field of vaccinology.
Our Sun is a very interesting space object. Reactions held in the bowels of the star give rise to a huge number of phenomena, from the release of energy in the form of heat and light to the infamous solar radiation. However, while observing the Sun, scientists were able to discover another interesting phenomenon: the solar wind. It was discovered quite a long time ago, and its properties are already used very successfully for propelling small spacecraft beyond the Solar system. However, only recently have scientists managed to measure the impact force of the solar wind.
What is the solar wind?
The solar wind is a stream of ionized particles, a helium-hydrogen plasma, generated by thermonuclear reactions in the Sun. These particles are “emitted” by the Sun and propagate throughout the Solar system. The solar wind is associated with many cosmic phenomena, including magnetic storms and auroras. And have you ever seen the polar lights?
But back to the solar wind. It comes in two types: fast (speeds of up to 1200 kilometers per second) and slow (about 300 kilometers per second). When the fast solar wind overtakes the slow, a quite powerful shock wave forms and spreads throughout the Solar system. Until recently it was not possible to measure the power of this wave, but scientists at NASA have managed to do it thanks to a specially positioned cluster of satellites.
How the power of the solar wind shock wave was measured
Along the “route” of the shock wave were positioned four Magnetospheric Multiscale satellites. Each satellite was located at a distance of 20 kilometers from the next, forming a straight line along the direction of shock wave propagation. This distance was enough to ensure that the instruments aboard the spacecraft were able to capture the velocity of the particles.
The spacecraft obtained unprecedentedly accurate data on the movement of particles. In particular, the sensitive sensors on board the satellites were able to measure the position of particles in space every 0.3 milliseconds. That was enough so that, knowing the time and the distance a particle traveled, its speed could be calculated, the researchers write in their paper, published in the journal Geophysical Research.
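The time-of-flight idea behind the measurement can be sketched numerically. In the sketch below, the 20 km probe spacing follows the figure quoted in the article, while the arrival times and the helper function are purely illustrative assumptions, not real mission data:

```python
# Estimate a shock front's speed from its arrival time at equally spaced probes.
# The spacing (20 km) matches the article; the arrival times are made up to
# illustrate a front moving at a few hundred km/s, between the slow (~300 km/s)
# and fast (~1200 km/s) solar wind speeds.

def shock_speed_kms(positions_km, arrival_times_s):
    """Least-squares slope of position vs. arrival time, in km/s."""
    n = len(positions_km)
    mean_t = sum(arrival_times_s) / n
    mean_x = sum(positions_km) / n
    num = sum((t - mean_t) * (x - mean_x)
              for t, x in zip(arrival_times_s, positions_km))
    den = sum((t - mean_t) ** 2 for t in arrival_times_s)
    return num / den

positions = [0.0, 20.0, 40.0, 60.0]   # km along the propagation line
arrivals = [0.0, 0.04, 0.08, 0.12]    # s; illustrative arrival times

print(round(shock_speed_kms(positions, arrivals)))  # prints 500 (km/s)
```

Using a fit over all four probes, rather than a single pair, averages out timing jitter, which matters when the timing resolution (0.3 ms) is a noticeable fraction of the inter-probe travel time.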
During the measurements, two bunches of ions were discovered: the first came from the shock wave of the slow solar wind, and the second from the collision of ions of the fast and slow solar wind. The scientists say that capturing this “meeting” of the slow and fast solar wind is a great success, and that it will help them learn much more about the nature of the solar wind.